When surveying analysts, many AR managers are uncertain about how many analysts need to be surveyed for the results to be meaningful. Few AR managers are interested in statistical theory, and statistics is an area where intuition can be greatly misleading.
When firms are aiming at a small number of analysts, it’s a non-problem. If you’re selling a niche application into a niche market, then perhaps the number of analysts who might express an opinion on firms in your market is small enough for you to survey them all. If that’s the case, then you’ve no need for statistics: your research can find the actual attitude of the whole community you are targeting.
However, if you cannot survey every analyst then you need to consider what kind of sample works best. There are three key variables: randomness, sample size, and bias.
- Statistically random samples tend to show an accurate picture; non-random samples do not. Totally random and totally biased samples are both rare, but most samples tend towards one extreme or the other, and vendors and providers usually make a deliberate choice about where on that spectrum they sit: some want an accurate picture of analyst sentiment, and others do not. For example, selecting participants by hand is quite likely to introduce bias (if only because the selector will tend to pick analysts she or he already knows, who will typically be more favourable than a random sample). Random selection of analysts, on the other hand, will tend to give a more accurate picture.
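The selection-bias point can be made concrete with a small simulation. The numbers here are invented for illustration: a hypothetical community of 150 analysts with sentiment scores centred on 3.0 (on a 1–5 scale), where the hand-picked sample skews towards the most favourable analysts while a random sample of the same size stays close to the community’s true average.

```python
import random

random.seed(42)

# Hypothetical community: 150 analysts whose true sentiment scores
# (1 = very negative, 5 = very positive) centre on 3.0.
population = [random.gauss(3.0, 1.0) for _ in range(150)]

# A random sample of 15 analysts.
random_sample = random.sample(population, 15)

# A hand-picked sample: the AR manager tends to pick analysts she or he
# already knows, who skew favourable - modelled here, crudely, as the
# 15 highest scorers.
hand_picked = sorted(population, reverse=True)[:15]

true_mean = sum(population) / len(population)
print(f"whole community mean: {true_mean:.2f}")
print(f"random sample mean:   {sum(random_sample) / len(random_sample):.2f}")
print(f"hand-picked mean:     {sum(hand_picked) / len(hand_picked):.2f}")
```

The hand-picked mean comes out well above the community’s true mean, while the random sample lands much closer to it.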
- Larger samples are more accurate than smaller samples. One of the two principal ideas in statistics is the Central Limit Theorem, which (amongst other things) tells us that the mean of a sufficiently large sample behaves predictably even when the underlying data are not normally distributed (Scott Lynch, a Princeton sociologist, has a painless introduction here; Dartmouth’s quantitative literacy course covers it well in a good guide published for the American Mathematical Society). A practical rule of thumb that follows from the Theorem is that sample sizes below 30 are much less reliable; above that, accuracy keeps improving as samples grow. For example, Lighthouse’s Analyst Attitude Surveys survey very large numbers of analysts, which gives us quite a powerful sample.
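The sample-size effect can also be seen in a short simulation. The skewed “sentiment” population below is invented for illustration: drawing many repeated samples of different sizes and measuring how much the sample mean wobbles shows why samples under 30 are unreliable, and why larger samples keep getting more accurate.

```python
import random
import statistics

random.seed(0)

# A deliberately skewed (non-normal) population of scores: the Central
# Limit Theorem still lets us trust the mean of a large enough sample.
population = [random.expovariate(1.0) for _ in range(10_000)]

def spread_of_sample_means(n, trials=2000):
    """Standard deviation of the sample mean across many repeated samples
    of size n - a direct measure of how unreliable a single sample is."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

for n in (5, 30, 100):
    print(f"n = {n:3d}: spread of sample mean = {spread_of_sample_means(n):.3f}")
```

The spread shrinks steadily as the sample grows: a sample of 5 produces wildly varying means, 30 is noticeably steadier, and 100 steadier still.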
- Bias is the final major variable. As mentioned above, hand-selecting participants biases the results. For example, a firm that is followed by 150 analysts might survey those it considers to be the top 15, but its selection of that top 15 is certain not to reflect influence on the market with total accuracy. A range of biases appears in surveys beyond selection: AR managers tend to be better at getting responses from analysts who are more amiable, who are closer to them geographically, who share their native language, who spend more time with them and who have more rapport with them. Bias also shows up in the analysts’ reactions: for example, analysts tend to give more favourable feedback about a firm if that firm is identified as the primary sponsor of the study, or if the analyst cannot control whether their participation or responses are passed on to the vendor or provider.
Surveys are not the only way to measure, and the risk of producing poor results from biased data extends into other areas. However, survey data are the most sensitive to bias. The bottom line is this: AR managers need to develop their understanding of statistics to get the best use from statistical data.
P.S. There’s another factor I should mention here: the difference between invasive and non-invasive research. Individual analysts do not know when we analyse their research in our Analyst Index, Analyst Mindshare and Analyst Track services; however, they do know when we survey them. Generally, people are happier about being asked their opinion than not. Taking an interest boosts people’s feelings, as Western Electric discovered in the famous experiments at its Hawthorne plant. Perhaps it’s a modest effect, but it’s real. However, if you use a hand-picked sample rather than a random one, then you will concentrate this Hawthorne effect in the sub-set of analysts you sample, and make the positive bias in their data even stronger. Ralf Leinemann and Elena Baikaltseva have shown that some relationship managers only want good news, but weaken themselves by introducing such bias.