I’ve just been reading ARmadgeddon’s thoughts about analysts and their sample sizes.
It’s a useful example of some generic and specific weaknesses: both in vendors’ tendency to self-orientation (which can lead them to rationalise analysts as good or bad according to whether they meet the vendor’s needs) and in their understanding of how analysts work.
I’ll try to give a summary of their views, but please read them yourself as well. ARmadgeddon’s worldview rests on the assumption that analysts primarily gather data from client inquiries. That assumption leads them to think:
- Analysts gather their data from just three interactions a day (the average number of end-user inquiries)
- Those interactions are with a subset of firms
- … and only with advisory seat holders
- … and only with those who use their seats
- … and analysts don’t follow up (which also assumes that users don’t call back)
- The data from inquiries is not captured or analysed…
- … and therefore analysts cannot leverage the knowledge of others except through sharing stories.
Their “bottom line” is that analysts’ data is not valid. “For instance, a vendor might have thousands, hundreds of thousands or even millions of satisfied customers, but the analysts are relying on information from only a few dozen or less disgruntled customers. While the inquiry-based information can provide interesting insights it cannot take the place of fact-based research from surveys and other systematic research.”
I think this line of argument is mistaken, and simply reflects their own self-orientation. As a vendor-side AR person, your experience is that analysts gather data from verbal interactions with you and your clients. That is accurate, but partial. In fact, analysts spend less than a third of their time on those interactions. They also spend a lot of time gathering primary data elsewhere: a further quarter of their time. Around 40 percent of their time goes on analysis and interpretation (processing, discussing and presenting that data).
ARmadgeddon show no awareness of this reality: analysts do not principally gather information from advisory calls. It’s the ostrich with its head in the sand, assuming that what it cannot see does not exist.
As a former analyst, I feel that their approach misses a few points. Analysts are not only, or even mainly, conducting their research through client inquiries. The idea that analysts simply write up their phone log is mistaken, as I’ve indicated above.
This is a very common failure on the vendor side. I remember one major vendor complaining to me about an unfavourable research note that a European analyst had published. “How dare they write this,” asked my client, “when we haven’t even met them?” Of course, the answer is that analysts have many sources of information, and those sources generally converge. So even if the data from one source is low in volume, or absent altogether, the combination of sources can still give overwhelmingly reliable information.
Analysts source case studies and other research data in a number of ways, of which inquiry is only one alongside ongoing primary research. For most client inquiries, a personal data-bank built from two or three client discussions a day, on top of that research, is very powerful. The idea that these data are not captured is odd. Admittedly, it is not self-evident that they can be comprehensively captured in a normalised structure. But Forrester actually provides a service based on such data: if one firm is selling such a system, then clearly at least one firm has it, and it is probable that others have internal systems (I was told this morning that Gartner is working on such a system).
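To make “normalised structure” concrete, here is a minimal sketch (in Python, purely as an illustration) of what a captured inquiry record might look like. The field names and the helper function are my own invention, not a description of Forrester’s or anyone else’s actual system.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record layout: no analyst firm's real schema is implied.
@dataclass
class InquiryRecord:
    held_on: date                  # when the advisory call took place
    analyst: str                   # who took the call
    client_segment: str            # e.g. "large enterprise" or "mid-market"
    topic: str                     # subject of the inquiry
    vendors_discussed: list[str] = field(default_factory=list)
    notes: str = ""                # free-text summary for later analysis

def vendor_mentions(records: list[InquiryRecord], vendor: str) -> int:
    """Count how often a vendor comes up, e.g. to spot trends by segment."""
    return sum(vendor in r.vendors_discussed for r in records)
```

Once inquiries are captured even this simply, they stop being anecdotes in one analyst’s head and become data the whole firm can query.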
Indeed, samples are supposed to be samples: they do not need to include every customer to be useful. Many vendors are very uncomfortable with sampling and, frankly, many communications professionals are not very confident with it either. As a statistician, I must point out that 230,000 inquiries is a massive sample; the risk of random error at that scale is vanishingly small.
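For a sense of scale, here is a back-of-the-envelope calculation; a sketch, assuming for simplicity that the inquiries behave like a simple random sample, which real inquiry data only approximates:

```python
import math

n = 230_000   # number of inquiries
p = 0.5       # worst-case proportion, giving the widest margin of error
z = 1.96      # z-score for a 95% confidence level

# Standard margin of error for a proportion under simple random sampling.
margin = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {margin:.4%}")  # about +/- 0.2 percentage points
```

Sampling error is not the same thing as bias, of course; but on pure sample-size grounds, this data is anything but thin.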
There’s also a specific AR point here. ARmadgeddon’s comment has a strong undertone: they seem to be saying ‘analysts are too negative about us: their research process has a systematic negative bias because they only speak to our least successful clients in the largest companies.’
It reminds me of a phrase I learnt at Dartmouth’s Tuck School: suck it up. Analysts do work with larger firms, and the reality is that solutions that work in the mid-market often just don’t work for large firms. It’s also the case that analysts’ end-user clients have different tolerances for risk: there’s a big difference between one failure in four and one failure in forty or four hundred. It just doesn’t matter to most analysts that most clients are happy with your firm’s solution, because we assume that most of your competitors’ clients are happy too. You and your competitors will gladly explain where your solutions work; the gap in the market is an honest account of the downside.
Good advisory work is about understanding the risks and the differences, not only the commonalities. It is no surprise that these are exactly the areas many AR professionals do not want to discuss; but to see this as a weakness of the analysts, rather than a strength, will lead to frustration and disaster.