What is speaker burn-in?
I like to think that my beliefs and preferences, when it comes to the world of audio, are based more on engineering data and attempted objectivity than mere hearsay. However, sometimes objectivism – that is, taking measurements and producing data – fails to create consensus on a topic; just because we can measure an effect doesn't mean that we can hear it. A prime example here is speaker 'burn-in.'
Burn-in is the belief that the drivers (the parts that make the sound) in speakers and headphones arrive stiff and inflexible from the factory, which makes them sound harsh. It's only after continued use that they loosen up and reach their peak performance. The period of time that it takes for these drivers to reach their intended state is known as the burn-in length, and some high-end manufacturers will even quote recommended burn-in lengths for their speakers.
Following this line of thought, this means that those brand new expensive speakers or headphones that you only started using yesterday are but a glimmer of their true selves, and, in order to hear them in their true intended form, you need to pump sound through them for anywhere from 10 to a few hundred hours.
Fact or fiction?
If you start digging through audio forums (particularly headphone forums), you'll find countless anecdotes that follow the same story arc: a new set of headphones sounded harsh, but after playing white noise (static)/pink noise (fancy static)/audiophile album of choice (Lou Reed) through them for x hours, they opened up/sounded warmer/produced tighter bass/etc. I see this as a tremendous waste of time, and – in the case of speakers – a terrible annoyance.
But is it really a tremendous waste of time? Surely if (some) speaker manufacturers recommend it, it must be true, yes? After all, it's not like it's generating them any extra revenue. Unfortunately, it's not so easy to prove or disprove.
Scanning through the literature, I've found a few particularly well-executed tests that have, indeed, found changes in driver properties over a period of burn-in – but the changes are minuscule; so minuscule that only those with well-trained ears are likely to notice a difference, and then only a minor one.
This is really important to note: even though burn-in appears to have a physical effect on the drivers, it's not going to have the level of effect that so many forum members purport it to. So, why do people say that it does?
Well, I think it comes down to two main reasons. The first: unless someone has two sets of speakers or headphones – one brand new, one already burned in – there's no way for them to compare the two. They instead have to rely on a sonic memory that is days old. It's hard enough comparing two similar sounds separated by seconds, so the idea that someone can definitively say that a new sound is superior to one they heard days ago is quite a stretch.
The other reason is that your auditory system (that is, the complex processing machines that convert sound into information interpreted by your brain) will grow used to a set of speakers' or headphones' sonic signature over time. I've had headphones that sounded terrible when I first got them, but grew better (indeed, superb) over time. Burn in, surely! Nope: they were second hand. It's just that my previous headphones were quite bassy, and these were quite bright, and my brain didn't enjoy the change.
This won't stop a horde of believers from filling up forums with anecdotes reinforcing their beliefs. And besides, what's the harm? You only defer your use of the headphones/speakers by a few days, and you may just have a better product at the end of it.
Indeed, it looks like there can be no consensus. So, let's take a look at what some of the tests have concluded and allow you to pick your side on this endless debate.
First of all, though, let's examine what happens to drivers as they are first used. The spiders attached to the rear of the cone are usually made of a cloth that is shaped and impregnated with a resin. As the spiders move for the first time, the resin cracks and becomes pliable. The idea is that the resin takes time to grow effortlessly pliable, and the sound will suffer until that happens. To a lesser degree, this also goes for the speaker surrounds.
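To put a rough number on this 'loosening up': a driver's free-air resonance depends on its suspension compliance via the standard Thiele/Small relation f_s = 1/(2π√(M_ms·C_ms)). The sketch below uses entirely made-up driver parameters (a hypothetical 15 g woofer), but it shows the shape of the effect: even a generous 5% softening of the suspension only shifts the resonance by a couple of percent.

```python
import math

def resonant_freq(moving_mass_kg, compliance_m_per_n):
    """Free-air resonance from Thiele/Small parameters:
    f_s = 1 / (2 * pi * sqrt(M_ms * C_ms))."""
    return 1.0 / (2 * math.pi * math.sqrt(moving_mass_kg * compliance_m_per_n))

# Hypothetical woofer: 15 g moving mass, 1.0 mm/N suspension compliance.
f_new = resonant_freq(0.015, 1.0e-3)
# Suppose break-in softens the suspension, raising compliance by 5%.
f_broken_in = resonant_freq(0.015, 1.05e-3)
print(f"{f_new:.1f} Hz -> {f_broken_in:.1f} Hz")
```

With these invented numbers, the resonance drops from roughly 41 Hz to roughly 40 Hz – a shift on the order of what the tests discussed below actually measure, and well short of a night-and-day change.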
Now, before I go too far down the track here, I feel that it's worth pointing out that while headphones are just small speakers with head straps, burn-in doesn't apply to all of them. As noted by Bryan Gardiner in a 2013 article covering headphone burn-in for Wired, the tiny balanced armature drivers used in quality in-ear headphones just don't have the same potential for mechanical change that conventional moving-coil drivers do.
Indeed, as Bryan notes, Shure has been running measurements on the armature drivers in its E1 in-ear monitors since they were first launched in 1997, and the results show that the sonic signature of their test pairs hasn't changed in the nearly 20 years of ongoing examination since they first rolled off the production line.
So, let's now take a look at some tests completed over the years.
Tyll Hertsens at the audio enthusiast publication InnerFidelity decided to see if burn-in exists for a particular set of headphones – AKG's K701 (later rebranded as the Q701; the 'Q' is for 'Quincy Jones'), which is notorious for requiring hundreds of hours of burn-in.
I really like Tyll's approach to this experiment, as well as his interpretation of the results. Better yet, he took feedback and criticism from his readers, and then integrated it into a second set of tests and measurements (luckily, he had a few sets of headphones to test). Then, finally, he ran a test based not on objective measurement, but on his response to the headphones: a blind listening test. Thorough.
Now, while changes in the headphones' sonic signature were objectively measured, they were tiny (the differences falling within half a dB). Most notably, however, Tyll was able to consistently detect differences between brand new and burned-in Q701s when he ran the blind test – though he himself noted that they were very subtle. So, while he found evidence that burn-in appears to exist, he still believes that most people won't notice any difference due to the burn-in effect alone. To quote:
"I think it's important to say that the K701 (and therefore the Q701) are notorious for their need of long break-in. The differences I heard, while evidently fairly obvious to me, were not large. I'm absolutely convinced that, while break-in effects do exist, most people's expressions of headphones "changing dramatically" as a result is mostly their head adjusting and getting used to the sound."
Tyll's bevy of measurements was performed on a set of headphones, but do the rules change for the larger drivers in speakers?
The ever-thorough Audioholics site decided to test for speaker burn-in way back in 2005, and published a thoroughly documented experiment that includes comprehensive background information – including a sane argument that speaker burn-in completes very quickly (in tens of seconds), and thus occurs (and completes) at the factory during the testing stages of manufacturing. To quote:
"Required break in time for the common spider-diaphragm-surround is typically on the order of 10s of seconds and is a one-off proposition, not requiring repetition. Once broken in, the driver should measure/perform as do its siblings, within usual unit-to-unit parameter tolerances."
They ran tests with brand new drivers (as well as broken-in ones) mounted in two purpose-built boxes – one vented and one sealed. In both cases, they found that the differences in mechanical compliance (stiffness) measured pre- vs. post-break-in were so minor as to be negated by the influence of the box the driver was mounted in. Indeed, the article provided the following conclusion:
"From the foregoing analyses, it's reasonable to conclude that suspension compliance changes arising as a consequence of initial driver burn in has little effect on the performance of a loudspeaker system."
Perhaps the most interesting outcome, however, was that the tests showed that variances in amplitude response between individual drivers of the same model – even drivers pulled from the same manufacturing batch – were actually larger than the pre- vs. post-break-in changes. In other words, it's easier to tell the difference between two drivers of the same model, pulled from the same assembly line in the same factory on the same day, than it is to tell the difference between a broken-in driver and a brand-new one. That's about as negligible an effect as you could possibly imagine. Quoting again:
"When the test series was run to completion, the resulting amplitude response graphs indicated that an end user would likely encounter larger system-to-system amplitude response differences ( 1.04 dB Spl) owing to normal driver variances than would be encountered breaking in raw drivers."
Indeed, if you're not fazed by the reality that even your left and right speakers will exhibit differences in their sonic signatures, how can you believe that differences even smaller in nature could change your perception after breaking them in?
So, I maintain that any differences heard over the first x hours of use of your new speakers or headphones aren't actually due to any physical changes in the drivers, but to your brain adjusting to a new sonic signature.
But hey, if you don't mind putting off hearing your new toys, there's no harm in having them play back some fuzz for a few days – you just might look like a bit of a weirdo doing so.
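And if you'd rather generate that fuzz yourself than stream it, here's a sketch of approximate pink noise made by shaping white noise in the frequency domain – a rough illustration, not a calibrated test signal, and the 44.1 kHz sample rate is just an assumption.

```python
import numpy as np

def pink_noise(n_samples, rng=None):
    """Approximate pink (1/f) noise: generate white noise, then scale
    each spectral amplitude by 1/sqrt(f) so power falls off as 1/f."""
    if rng is None:
        rng = np.random.default_rng()
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    spectrum[1:] /= np.sqrt(freqs[1:])  # skip f = 0 to avoid divide-by-zero
    pink = np.fft.irfft(spectrum, n=n_samples)
    return pink / np.max(np.abs(pink))  # normalise to the range [-1, 1]

samples = pink_noise(44100)  # one second's worth at 44.1 kHz
```

From there you'd write `samples` out to a WAV file (or loop it) and let your speakers hiss away – for whatever good it will do them.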