How we test laptops and desktops: our reviewing process explained
Our gauntlet of computing product benchmarks explained
Recent updates
This page has been updated as of 07/06/2024 with the latest information about the benchmark tests we run while reviewing laptops.
TechRadar believes in evaluating products based on their value proposition. As such, we review computing devices, including laptops, convertibles and desktops of all sizes, against their price and the other options available on the market. Our review process is also underpinned by rigorous testing to determine the quality, performance and battery life of each machine.
Build quality
We begin by examining a laptop's design and build quality to see how solid it feels to the touch and whether it fulfils the role it was designed for.
The overall build quality is important, so we go to great lengths to test the strength and durability of each system. We also assess the functionality of all ports, switches and latches. The quality of the screen is considered too, with checks for brightness and evenness of tone, and a look for any dead pixels.
The final part of our initial checks deals with the weight of the machine and its relative portability. We then assess the machine's overall usability, including the quality of the keyboard, touchpad and user interface.
Updating and optimizing
Before we begin testing, every laptop gets updated with the latest patches, firmware updates and drivers. No device stays frozen in time, and while this means benchmark numbers are a constantly moving (and often rising) target, that's simply part of technological progress.
We also turn on "high performance mode" before testing. This ensures the integrated graphics, as well as any other components inside the laptop, are operating at their maximum performance. Conversely, we switch laptops to "balanced mode" in the power options before battery tests, so they don't run out of juice prematurely in a power mode that isn't intended for use on battery.
Laptop and desktop benchmarks explained
As every device is tested with the same suite of benchmarks, its performance can easily be compared against rival products. Each review is accompanied by the test results for that machine, as well as comparisons to its closest competitors.
Before the hands-on part of our testing begins, the laptop is run through a series of benchmarks to check overall performance. Each machine is set to the same high performance level for all tests except battery life, so we can judge how effectively it runs at its maximum potential.
We use a number of synthetic tests to measure a laptop's components. The first, PCMark 10 battery life, tests the device's battery endurance. We then follow up with an anecdotal battery test that further simulates real-life usage: video playback. (Both of these are detailed in the sections that follow.)
The PCMark 10 performance test is also conducted at the highest performance settings, measuring the CPU and the machine's ability to multitask as well as render complex files and graphical elements. Meanwhile, 3DMark is designed specifically to test the strength of the laptop's graphics processor(s) with various 3D modeling and game physics tests. We run the Night Raid (aimed at low-power devices and integrated graphics), Time Spy (a DirectX 12 test) and Fire Strike (which pushes high-end hardware) 3DMark tests on each gaming device.
We then evaluate the CPU's multi-core performance with Cinebench R23, which renders a complex 3D scene across all of the processor's threads. Geekbench 5 is also used to measure the CPU's single-core and multi-core throughput.
We also run CrystalDiskMark 8 benchmarks to test the speeds of the included SSDs and hard drives. The speed of the drive inside a laptop or desktop PC has a huge impact on the overall performance of the device. The faster a drive is, the quicker the laptop boots up, apps open and large files can be saved and transferred.
Generally, if a laptop is meant for general computing use, we also test its "casual" gaming performance using the built-in benchmark in Sid Meier's Civilization VI, at both its maximum and lowest settings, at as close to 1080p resolution as the laptop will allow. We report the average frames per second for each to give a sense of how well the laptop can handle light gaming.
If we're reviewing a proper gaming laptop or desktop, or sometimes a particularly powerful prosumer device, we use benchmark tools found in PC games like Cyberpunk 2077, Total War: Warhammer III, and Dirt 5 to truly tax those dedicated graphics chips to measure their performance. We run these tests at their highest and lowest settings at 1080p resolution, to give an idea of where the device's boundaries of power are at the most common pixel count.
If we're dealing with a creative workstation designed for intense multimedia work like 3D rendering or video editing, we also run content creation benchmarks to measure relative performance, such as the Blender Open Data benchmark, and we encode a 4K video down to 1080p using Handbrake, recording how many frames per second it processes on average.
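The headline figure we quote from that Handbrake run is simply the number of frames processed divided by the time the encode took. Here's a minimal sketch of that arithmetic; the frame count and duration below are illustrative placeholders, not test results.

```python
# Illustrative arithmetic for the Handbrake figure we report: average
# encoding speed is frames processed divided by encode time.
# The numbers below are placeholders, not real benchmark results.

total_frames = 14_400      # e.g. a 4-minute 4K clip at 60 fps (assumed)
encode_seconds = 360.0     # wall-clock time the encode took (assumed)

average_fps = total_frames / encode_seconds
print(f"Average encoding speed: {average_fps:.1f} fps")  # -> 40.0 fps
```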
When it comes to testing Chromebooks, alongside using the device as our day-to-day laptop, we run Chromebook-specific benchmark tests. These are Mozilla Kraken, which tests how well web browsers perform (essential considering ChromeOS, the operating system used by Chromebooks, is heavily based on Google's Chrome browser); Speedometer, which measures how responsive web applications are; and JetStream 2, which also looks at web application performance, mainly using JavaScript and WebAssembly.
For testing Macs and MacBooks, we run Cinebench R24, Geekbench 6 and Blender benchmarks to test out CPU and GPU performance.
Each of these performance tests is run at least twice to ensure an accurate result. If two results for a test vary by more than 100 points, we run the test a third time and take the average as the final score.
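To make that retest rule concrete, here's a minimal Python sketch of how it could be applied. The run_benchmark callable, and averaging the first two runs when they agree, are assumptions for illustration; only the 100-point threshold and the third run come from the process described above.

```python
# Minimal sketch of the re-run rule described above (illustrative only).
# run_benchmark is a hypothetical callable that launches one benchmark
# pass and returns its numeric score.

def final_score(run_benchmark, threshold: float = 100.0) -> float:
    first = run_benchmark()
    second = run_benchmark()
    if abs(first - second) <= threshold:
        # Assumption: when the two runs agree closely, average them.
        return (first + second) / 2
    # Runs diverge by more than the threshold: add a third run and
    # take the average of all three as the final score.
    third = run_benchmark()
    return (first + second + third) / 3
```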
PCMark 10 battery life explained
This software tests mobile performance and battery life, simulating popular general tasks such as video chat, web browsing and document creation while the system is unplugged.
Firstly, all laptops are set to Balanced in the power options where possible. We also tweak some advanced settings, telling the screen and hard drives never to sleep and setting the critical battery level to 5%. Most importantly, this test is conducted with the screen brightness and system volume set to 50%, with no radios active except Wi-Fi. Any additional lighting is disabled as well.
With the laptop's battery fully charged, we disconnect the main power. PCMark 10 then simulates day-to-day use until the battery hits the critical 5% level and the computer shuts down. Before it does though, PCMark 10 will record the time it spent running the test, so we get an accurate representation of the real-world battery life, rather than an estimate.
As a rough guide to what these results mean:
- 4 hours: This either isn't a very power-efficient machine, or it wasn't designed for endurance.
- 7 hours: Ultrabooks and high-resolution laptops generally come in around the longer end of this bracket.
- 10+ hours: Only the longest-lasting laptops achieve this level of endurance or better.
Anecdotal battery life explained
In addition to using PCMark 10 to create a synthetic measurement of battery life, we also test how long laptops can last through a regular day of use with a common task: video playback. In this real-life test, we run a 1080p video on a continuous loop in VLC media player, starting from a full charge and measuring, in hours and minutes, how long the machine lasts before the battery is exhausted and it shuts down.
This test is also conducted with the screen brightness and system volume set to 50%, with no radios active except Wi-Fi. Any additional lighting is disabled as well, and the power setting is set to Balanced if possible.
How TechRadar ranks its computing buying guides
TechRadar's rankings within its computing buying guides rely heavily on the scores each product receives. This helps keep our recommendations consistent, but there are a few more details (a simplified sketch of the ranking logic follows this list):
- No product that receives fewer than 3.5 stars in a review can be eligible for inclusion in a buying guide.
- Rankings are determined, in descending order, by both score and any additional awards a product may have received.
- Ties in score are broken automatically by any additional awards via the following hierarchy: Best in Class, Editor's Choice, Recommended and Great Value.
- Ties in both score and awards are broken by exercising editorial judgment, based primarily on value.
- Any products in a buying guide that have yet to be reviewed are automatically ranked lowest until they have received a full review and can be ranked accordingly.
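For illustration, here's a simplified, hypothetical sketch of how those rules could be expressed in code. The Product structure, score field and award names are assumptions made for the sake of the example; in practice the final ordering involves editorial judgment, which no script can capture.

```python
# Hypothetical sketch of the buying guide ranking rules above (Python 3.10+).
# Product, its fields and the award names are illustrative assumptions.
from dataclasses import dataclass, field

# Tie-break hierarchy from the list above: earlier entries rank higher.
AWARD_RANK = {"Best in Class": 0, "Editor's Choice": 1, "Recommended": 2, "Great Value": 3}

@dataclass
class Product:
    name: str
    score: float | None = None          # review score in stars; None if not yet reviewed
    awards: list[str] = field(default_factory=list)

def best_award(p: Product) -> int:
    """Lower number = higher-ranking award; products with no award sort last."""
    return min((AWARD_RANK[a] for a in p.awards if a in AWARD_RANK), default=len(AWARD_RANK))

def rank_guide(products: list[Product]) -> list[Product]:
    # Products scoring under 3.5 stars are not eligible for inclusion at all.
    reviewed = [p for p in products if p.score is not None and p.score >= 3.5]
    unreviewed = [p for p in products if p.score is None]
    # Sort by score (descending), then by award hierarchy; any remaining
    # ties are settled editorially, based primarily on value.
    reviewed.sort(key=lambda p: (-p.score, best_award(p)))
    return reviewed + unreviewed         # unreviewed products always rank lowest
```

Under this hierarchy, for example, a 4.5-star product with an Editor's Choice award would rank above a 4.5-star product with only a Recommended award.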
Matt is TechRadar's Managing Editor for Core Tech, looking after computing and mobile technology. He has written for a number of publications such as PC Plus, PC Format, T3 and Linux Format, and there's no aspect of technology he isn't passionate about, especially computing and PC gaming. He's personally reviewed and used most of the laptops in our best laptops guide, and since joining TechRadar in 2014, he's reviewed over 250 laptops and computing accessories.