When review samples behave badly: what happens when things go wrong?

(Image credit: Future)

Hundreds of products pass through our test rooms every year, so it should come as no surprise that we occasionally come across something that doesn’t work properly or, in the worst cases, doesn’t work at all. It is a credit to manufacturers and modern production methods that it is unusual for us to encounter anything that flat-out fails to start. Given the complexity of the products we review, that is admirable.

Most of the issues tend to be minor in nature and software related, which is understandable bearing in mind just how sophisticated that aspect of hi-fi and AV design has become. The issue could be a compatibility problem, a glitchy app or something else that directly influences aspects of performance. Usually, the fix is just a software upgrade away, though it can take some time for that to be available.

When a product doesn’t work straight out of the box the solution is simple – we ask for another sample. Unless there is a known problem with the design, or the company has developed a bad reputation for reliability, we normally give the brand the benefit of the doubt. 

The story changes significantly if the second sample has similar issues. Then we will make a big thing of it and make sure it gets mentioned in the review as a warning to potential customers. The overall star rating could well be affected too, and not in a good way.

But what if a review sample works but performs in a less-than-optimal way? There could be many reasons for this, from an individual component on the circuit board not functioning properly to something as general as slack manufacturing tolerances. The product could have been damaged in transit, or perhaps a previous user had abused it in some way. Regardless, a sub-par sample is tough for any reviewer to spot unless they have had previous experience with the product – say, during a press launch – or have had enough experience with that brand's products to know what they tend to deliver.

Naim NAP 250

The two samples of Naim NAP 250 power amplifier (Image credit: Future)

That was the case when we reviewed the new Naim NAP 250 power amplifier recently. Our first sample was a well-used demo unit that had apparently done the rounds during the product’s launch. There was nothing to indicate anything was wrong: the packaging was undamaged, as was the unit’s casework. That first NAP 250 was used with its natural partner, the NSC 222, and our reference Burmester 088 preamp, and all appeared fine on initial listening.

Its sound didn't suffer from any obvious distortions or buzzes, and its clarity was impressive. Yet, for all the insight, power and punch, we felt something was missing. We expected more expressive dynamics and Naim’s famed rhythmic drive, but what we got was relatively flat-footed in all respects. It was a powerful yet blunt tool and we were disappointed. Given our long experience with the brand's products, the alarm bells started ringing.

We checked the basics, first ensuring that the mains leads were properly seated and then swapping our usual Chord and Vertere reference cables for Naim-branded interconnects and speaker cables, just in case this changed things. It didn’t.

We have the company’s SuperNait 3 integrated amplifier on-site, so we plugged that into our system instead of the NAP 250 and partnering preamp. Sure enough, the SuperNait did what we hoped and delivered a sound full of verve and drive, though without the outright resolution and clarity of the higher-end pre/power combination.

Power amplifier: Naim NAP 250

It is worth checking that all the cable connections are secure (Image credit: What Hi-Fi?)

Something was clearly off with our review sample, so we asked for a second unit. To Naim’s credit, it arrived just a few days later. We plumbed it into the system and made comparisons with the first sample. Those comparisons didn’t last long. The second NAP 250 delivered exactly what we craved: all the power and punch of the first unit, but with so much more finesse and drive that it was hard to believe we were listening to the same product.

So what was wrong with that original sample? Naim investigated and found that an internal component had become dislodged, probably due to the unit having been dropped hard during transit. Given the company’s track record and famed attention to build quality, we're inclined to give it some leeway before casting any aspersions. 

This isn't about being biased towards Naim or any other company. It is about understanding that, even with the best will in the world, things sometimes go wrong, and if a company has proven itself over time then we're confident that any issue will be put right. That, in the end, is the important thing.

MORE:

The Mission 770 story takes in the BBC, Spendor and the drive to do better

3 key qualities that make a good reference hi-fi system

How to choose the right speakers and get the best sound

Ketan Bharadia
Technical Editor

Ketan Bharadia is the Technical Editor of What Hi-Fi? He's been reviewing hi-fi, TV and home cinema equipment for more than two decades, and over that time has covered thousands of products. Ketan works across the What Hi-Fi? brand, including the website and magazine. His background is in electronic and mechanical engineering.

  • nopiano
    I really appreciate these insights, Ketan. I know you probably don’t see these comments, but somebody might?

    Ironically, back in the day it seemed that brand new kit was almost more likely to expire than well-used stuff, but clearly something was off with your Naim example.
  • Gray
    What would be more interesting to me was if Naim were to genuinely find nothing wrong with the product.
    They know they need to give one but, what response would they then give?
  • daveh75
    More interesting still would be if they did objective reviews and had measurements from the two units to compare.