Teresa Cottom, my former Ovum colleague who is now Founder & Chief Strategist at Telesperience, has made some very useful criticisms and comments (here, and well worth a read) about our recent chart of the Analyst Value Survey data on analyst independence. Her most challenging point is that it’s not possible to measure independence and, given the length of this post, that’s all we’ll focus on today.
Needless to say, independence isn’t the only thing we asked about in the survey (see the full results here), but independence is one of the most highly valued attributes of analysts. The survey asks users of analyst research which firms’ research they feel is the most, or least, independent. That is not impossible to measure. We’ve asked the question, and gotten answers. We really have measured how respondents feel about analyst independence. It’s not possible, perhaps outside of a sting operation and the availability of multiverses, to directly measure the independence of analysts. There is no control group of Teresas, and I cannot hire some and not others and then see what the resulting difference is. But it is possible to measure perception: that’s what surveys do, and that’s certainly what we have done.
Is perception correct? That is, literally, a philosophical question. I have a shiver down my spine as I write that, since I happen to be working from the University of Manchester library this afternoon, one of Britain’s five National Research Libraries, where a thick slice of my youth was spent considering exactly that question through the lens of Søren Kierkegaard’s Fear and Trembling. In my opinion, the nature of being is that it is a social reality. That’s a long discussion, and it’s not possible to resolve it here or elsewhere. Is it really the case that respondents to the survey think that Ovum, NelsonHall and Redmonk are more independent than Frost & Sullivan or Aberdeen? Absolutely. There is no doubt about that.
Is their perception based on reality? Perception is partial: because of that, it’s true (I really did see what I saw) but it’s also untrue (what I saw is not the global sum of everything both seen and unseen, and thus may not be representative or correctly understood). Individual perception is less reliable than the average of hundreds of observations, and that’s why the survey’s 352 responses produce results which, generally, make sense to most people.
Can people be misled? Neither Abraham Lincoln nor PT Barnum was Kierkegaard, but both have had a saying attributed to them: you can fool some of the people all of the time, or all of the people some of the time, but you cannot fool all of the people all of the time. In the long run, analyst users have pretty stable perceptions of analyst firms’ qualities. They can be fooled for a while but, in the end, users’ collective perceptions realign – and sometimes bite the ass of tricksters. In short, Ovum and NelsonHall come out ahead of Aberdeen and Frost & Sullivan in the survey year after year (we first ran the survey in 2000). Without wanting to jump to conclusions, I’m very comfortable saying that participants are much more likely to be right about that than wrong. I don’t think many respected, independent people familiar with those firms over the long run would come to a different view.
So, I’m left with the questions: is it better to ask these questions, or not? Is it better to publish these data, or not? From Fear and Trembling, I learnt that life has to be lived forwards but is understood backwards. Right now, publishing those slides seems like the right thing to do. There have been 7,281 views of the Analyst Value Survey presentations, so people find the insight worthy of attention.
Do the respondents know how independent the analyst firms are? Evidently they do, to a degree. They seem to be getting the general direction right. Are they ‘right’ on every point? Almost certainly not, but there is no qualitatively better method available to find a better answer. There are countless ways in which the survey can be improved, but the basic question remains this: is there a better way to measure the relative independence of these analyst firms than with an annual survey of their users? No-one has suggested a better one, and so we have to take the risk – despite the reasonable fear that individual perceptions are often partial – and try to perfect the study we have rather than abandon it.