Once again I find the discussion getting bogged down in detail and circular arguments.
By bringing up and supporting blind testing I was attempting to share some experiences where clear and obvious differences, readily observable in sighted tests, largely disappear when the listening is blind.
Any reasonably competent blind test will show this, and in this context the rigorous analysis of results does not matter; it is simply that most people are shocked by how small these differences become when visual (and perhaps other) clues are removed. I must admit to being thoroughly chastened after my first experience of such testing.
While the science undoubtedly matters to academics, for me it is the experience of the listeners that is really to the point. Irrespective of the results, the difficulty of hearing differences blind that are considered 'night and day' in sighted tests should, at the very least, teach us that subjective, sighted testing is not to be relied on.
Matt49 - If I implied that ABX testing is not double blind, then I was being unclear. As you say, double blind is essential in any serious test.
We do so many shows in a row,
And these towns all look the same,
We just pass the time in our hotel room
And wander 'round backstage,
Till the lights come up, and we hear that crowd,
And we remember why we came.
I'm just going to keep asking until someone gives a satisfactory answer - What proof is there that double blind ABX testing works for audio?
It depends what you mean by "works".
The ABX testing that's been done on hifi kit has in most cases produced statistically null results, i.e. the subjects in the tests have displayed preferences (e.g. for an expensive amp as against a cheap amp) that are statistically indistinguishable from random guessing.
But (and davedotco may disagree here) this ABX testing has not been carried out to standards that those in the field of psychoacoustics would regard as academically robust.
Also there are vanishingly few hifi "experts" who are willing to submit themselves to these trials, for a variety of reasons.
The conclusion I draw from this is that the jury is still out and is likely to be out for a long time yet.
That may or may not be a satisfactory answer.
I would respectfully suggest that the overwhelming reason is that the 'golden-eared' brigade are just too scared to find out that they are not so golden-eared after all. Career over for reviewers shown to be unable to tell the difference between different equipment and cables, and not a whole lot of mileage in participating in the purchase [or otherwise] of publications that use such people.
No point in testing people who do not believe that differences exist, because they will deliberately not hear any difference, or are actually hearing-challenged and don't know it. A couple of reviewers have 'outed' themselves recently about subjective reviewing, earning a living and giving their readership what they want. The vitriol expressed by people was a disgrace. The truth, even though mildly veiled, was too much for the audiophiles to cope with.
Apple Lossless - ATV3 - AVI ADM 40 also ATV3 into AVI ADM 9T [my wife's system]
and Grado SR80i
I have linked to this before, which is a blind test of cables....and basically shows that they were able to tell the best and worst from each other: http://www.nordost.com/default/pdf/hifiplus_issue34.pdf
If the link stops working, go to Nordost.com -> Reviews -> Speaker Cables listening tests
"Everything has been said before, but since nobody listens we have to keep going back and beginning all over again." André Gide
Double blind and ABX testing are two different techniques; I believe they are quite different in psychological terms, though matt49 will put us straight on that.
In both cases the test equipment is set up in such a way that all variables are excluded apart from the items being compared, i.e. in the case of an amplifier test the tone controls would be set flat, the output levels carefully matched and care taken that both amplifiers are working well within their designed capabilities.
In a double blind test the test subjects are played excerpts of music on one system, then on the other. They are simply required to say which one they think sounds best. Importantly, the person carrying out the test does not know which amplifier is which at any point.
An ABX test is subtly different: in this case music is played on one system, then the other, then a third time on either the first or the second system. The subject is asked whether the third sample is a repeat of the first or the second sample.
This is repeated as often as possible, with as many subjects as possible, and the results calculated statistically, though I believe differently in the two cases.
There are also variations on these tests where control trials are inserted in which the two excerpts are played on the same system, i.e. there should be no difference, and the results of these trials analysed too. All of these results can be (and have been) subjected to rigorous statistical analysis.
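For anyone who wants to see what the statistical side actually involves: the core of it is a simple binomial calculation on how many ABX trials the subject got right. A minimal sketch in Python (the 12-out-of-16 and 9-out-of-16 figures are invented purely for illustration):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the chance of scoring `correct` or better
    out of `trials` ABX rounds by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 right out of 16 is unlikely to be guesswork (p is about 0.04);
# 9 out of 16 is entirely consistent with chance (p is about 0.4).
print(round(abx_p_value(12, 16), 3))
print(round(abx_p_value(9, 16), 3))
```

The control trials mentioned above (same system played twice) can be checked the same way; a score well above chance there would point to a flaw in the setup rather than golden ears.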
All of this analysis is of interest to academics, but the ordinary hifi enthusiast simply wants to know whether he can tell a difference or not, and the answer is often no.
Perhaps more importantly the enthusiast gets to see just how tiny the differences can be between, apparently, quite different products.
I think there are subtle variations on ABX tests, but in the one I participated in we were not told exactly what we were testing. We were told that we were going to listen to various pieces of music more than once, that we could take notes, request repeats, discuss what we thought, and we were advised to concentrate on one particular aspect when comparing, etc. Not strictly ABX but a variant. We were told at the end that we had been listening to 3 different music server solutions, including a bog-standard HDD, at varying bit depths.
My understanding of a double blind test is an ABX test (or very similar) where the person conducting the test is also unaware of the exact order of the tests, though they will probably have to know what's being tested! The idea is that the testers cannot deliberately or subliminally influence the subjects, or be seen to influence the subjects. Sighted formal tests are a complete joke that no self-respecting music lover should defend if they have any respect for impartiality.
I, like many, have conducted sighted comparisons at home. These do require a degree of caution to reduce expectation bias, such as:
Deciding what we want to achieve & to be honest with ourselves - leave that axe in another room!
Acknowledging the principle of bias exists & the probability that no one is immune
Change one thing at a time
Undo any changes
Redo the changes
Not to take the process too seriously - have breaks & have fun
Consider long-term tests that last days, not minutes if possible
In conducting these comparisons with other interested parties, consider introducing blind tests (if only to gauge what difference it makes!)
My suspicion is that the shorter the test period, the greater the likelihood of confusion, which may seem counter-intuitive & might also be wrong! The reason for suggesting it is that I've noticed differences months after changing something, when I haven't been listening for them, but have got confused when listening over minutes, such as in a showroom. My opinion is that anyone who thinks they have perfect ears, are never biased & can pick out the make of connectors being used is probably fooling no one apart from themselves.
"The optimist proclaims that we live in the best of all possible worlds - the pessimist fears this is true."
James Branch Cabell
MAIN: Apple TV2, Mac Mini & iTunes Match, CA Azur 751BD or Panasonic P42V20B into audiolab M-DAC, feeding a Primare A34.2 via XLRs, 2x 5m of Atlas Ascent 2 firing up Totem Arros.
ON THE HOOF: iPhone 5S/Sennheiser MM450.
Last time you asked the same question I linked to a number of ABX tests which had been passed (eg for speakers). So that rather disproves your notion that a segment of people deliberately always fail them.
If you read this, I apologise. If you link to the tests, I'll read them, I promise.
I'm very familiar with this specific test, though I did not take part. Two points make the result unreliable for me.
Firstly there was no attempt to equalise levels.
Secondly the 'brighter', more forward cable won.
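Since unequal levels keep coming up: checking them is straightforward if the same test signal can be recorded through both chains and the RMS levels compared. A rough sketch in Python (standard library only; the 1 kHz tone, 48 kHz rate and 10% mismatch are just assumptions for the example):

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_difference_db(a, b):
    """Level difference in dB between two recordings of the same signal."""
    return 20 * math.log10(rms(a) / rms(b))

# Toy example: chain B is 10% higher in amplitude, about 0.83 dB --
# already enough to bias a 'which sounds better' comparison.
tone = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(480)]
louder = [1.1 * s for s in tone]
print(round(level_difference_db(louder, tone), 2))
```

Published listening-test methodologies typically call for matching levels to within a small fraction of a dB before any comparison is allowed to count.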
The phrase "they would say that, wouldn't they!" springs to mind, but just because that might be so doesn't automatically make it so. My position is that I'm not sure if the extra SQ I paid £1000s extra for is all in my mind or not. Does it matter? Well, yes it does, because any future purchases would be based on criteria other than SQ.
However, I'm suggesting that the effectiveness of the DB ABX method can itself be tested: introduce repeatable & measurable aberrations to see if the subjects can hear them. The degree of distortion (I use the word in its broad sense) can be increased. I admit the conclusions would need careful analysis, but if the listeners had trouble distinguishing, let's say, 10% harmonic distortion or a 10dB channel imbalance, one would have to acknowledge that the ability to hear subtle changes is improbable! Needless to add, any such testing would need to be carried out by disinterested parties.
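Generating those deliberate aberrations is simple enough. Here is one possible sketch in Python, with the 48 kHz sample rate, 1 kHz fundamental and the 10%/10 dB figures purely as stand-ins for whatever levels a test designer might choose:

```python
import math

SR = 48000  # assumed sample rate

def tone(freq, seconds=1.0, amp=1.0):
    """Plain sine tone as a list of floats."""
    return [amp * math.sin(2 * math.pi * freq * i / SR)
            for i in range(int(SR * seconds))]

def add_harmonic_distortion(samples, freq, level=0.10):
    """Mix in a 2nd harmonic at `level` (0.10 = 10%) of the fundamental."""
    h2 = tone(2 * freq, seconds=len(samples) / SR, amp=level)
    return [s + d for s, d in zip(samples, h2)]

def channel_imbalance(left, right, db=10.0):
    """Attenuate the right channel by `db` decibels."""
    gain = 10 ** (-db / 20)
    return left, [gain * s for s in right]

clean = tone(1000)
distorted = add_harmonic_distortion(clean, 1000, level=0.10)
left, right = channel_imbalance(clean, list(clean), db=10.0)
```

Starting at gross levels like these and halving the aberration each round would map out each listener's actual threshold, which is exactly the calibration the method's critics say is missing.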
I've managed to miss BenLaw's post that included a link to an ABX test that showed a positive result - I'm hoping he reposts it/them.
There are those who would say that a cable can't be "brighter and more forward," which is one of the reasons I linked to it.......but I take your point.
However, I'm suggesting that the effectiveness of the DB ABX method can itself be tested: introduce repeatable & measurable aberrations to see if the subjects can hear them.
Pretty sure this is how the original jitter test must have been done, i.e. testing different levels of jitter and finding the point at which people are able to tell the difference, i.e. they failed the tests at jitter level x but passed at jitter level y and above.
Collated by an esteemed member: http://idc1966.blogspot.co.uk/
HiFi / A/V / Bedroom
I know I've bored on about this before, but I'll just repeat one point that I think is important re. the various pieces of evidence from blind tests of hi-fi. AFAIK all these tests have been carried out by hi-fi amateurs or engineers (who are of course excellent folk and experts in their own field), but none of the tests (AFAIK) have been done by properly trained psychologists.
There's a lot of scientific literature on "sensory evaluation testing" (taste, smell, hearing etc). It's a very difficult thing to get the set-up of the tests right (as @busb suggests above). I don't think any of the hi-fi tests have properly taken account of these difficulties. I'd love to be proved wrong on this, because I do really believe that blind testing has a role to play in hifi.
OK, rant over.
What classical music are you listening to?
This thread is funny. Some better writing on here - but in the end - still the same outcome. Think for yourself and do some blind testing. Simple.
Cables can be quite difficult, possibly because they are seen as passive items, maybe for other reasons.
Back in the 70s and early 80s there was a technique, popular in the US, called shotgunning. Put simply, this was using two runs of speaker cable, usually the same, instead of one. This was before bi-wiring or separate connectors; just simply paralleling two pairs of cables.
This was practically mainstream for a time; advocates talked about a clearer, more open sound and many enthusiasts agreed. Eventually someone got around to measuring the voltage 'arriving' at the speaker terminals and found a slight but measurable increase in voltage with two cables rather than one. Given the change in the impedance characteristics and the halving of the DC resistance, this is really no surprise, but for a while no one thought to check.
When the volume levels were equalised to take care of the difference, all the advantages of shotgunning disappeared and the technique died out.
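The size of that effect is easy to estimate from the divider formed by the amplifier's output impedance, the cable resistance and the speaker load. A back-of-envelope sketch in Python; the 0.05 Ω output impedance, 0.2 Ω cable resistance and flat 8 Ω load are all invented round numbers, not measurements:

```python
import math

def level_at_speaker_db(cable_r, speaker_r=8.0, amp_out_r=0.05):
    """Relative level (dB) at the speaker after the series losses
    of amplifier output impedance plus cable resistance."""
    v = speaker_r / (speaker_r + amp_out_r + cable_r)
    return 20 * math.log10(v)

single = 0.2          # ohms: assumed round-trip resistance of one run
double = single / 2   # two identical runs in parallel halve it

gain = level_at_speaker_db(double) - level_at_speaker_db(single)
print(round(gain, 3))  # roughly a tenth of a dB louder with two runs
```

A tenth of a dB is tiny, yet on these numbers it was apparently enough to create a 'clearer, more open' consensus; with a real speaker whose impedance varies with frequency, the cable also tilts the response slightly rather than just shifting the overall level.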
The same techniques can be applied in the manufacture of normal single cables: impedances can be varied with different styles of construction and the response slightly tweaked.
The point being that these different responses are deliberately engineered into cables, simply to make them sound different; ribbon-style cables, for example, are in the main quite forward. This built-in difference may not be an obvious frequency response variation but might be a phase shift, or perhaps overshoot on transients; difficult to know, as such things are rarely measured.
I'm off work probably all week with Chicken Pox (of all the things to get at my age!) So have plenty of time to read up.
Sorry to hear you're not well, hope you get well soon. I can think of better ways to spend my convalescence!
I have to admit I do find this topic interesting - it's one of the best in hifi and also one of the easiest to resolve.
imo - try various cables; usually it's no great spend anyway (the best cable I have cost £25 - it's a single coax and only a metre long, but it's paid for itself in terms of satisfaction)
but all I can add to this thread is: try the QED interconnects up to the Performance range - £40, it's good.