Because there are no commercially available databases of analysts’ users, the Analyst Value Survey relies on Panalyst, Kea’s in-house database, and on invitations shared by our staff and clients. Several analyst firms also share links to the survey. The upside is that when firms share the link, the survey gathers much more data about them and about the other firms their clients use. The downside is the concern that the results could become imbalanced.
I’ve been running the survey since 2001 and take those concerns seriously. When we compare the responses, we see that respondents who come from invitations distributed by analyst firms are not notably different in their opinions of those firms from respondents who arrive in other ways. What the invitations do, however, is push up the share of voice of those firms and their competitors. For example, respondents who use the firms that have shared links to the survey are also more likely to use IDC than those who do not. So those analysts are boosting IDC’s profile as well, although there’s no notable difference in how those respondents feel about IDC.
After consulting many people, including members of the Market Research Society, we have designed our survey accordingly. We focus on average scores, so that changes in share of voice do not affect the outcomes.
Because of this design, we believe that it’s on balance a good thing that supportive analyst firms help us gather more data.
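The averaging argument can be sketched numerically. In this hypothetical simulation (the firm name, sample sizes, and rating distribution are all invented for illustration), adding many link-recruited respondents who hold similar opinions quadruples a firm’s share of voice while leaving its average score essentially unchanged:

```python
import random
from statistics import mean

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical ratings of "Firm A" on a 1-10 scale.
# Direct respondents, recruited independently of any analyst firm.
direct = [random.gauss(7.0, 1.5) for _ in range(200)]

# Respondents recruited via a shared survey link. Assume (as the
# survey data suggests) that their opinions come from the same
# distribution -- they add volume, not a different sentiment.
via_link = [random.gauss(7.0, 1.5) for _ in range(600)]

combined = direct + via_link

# Share of voice (response count) quadruples...
print(len(direct), len(combined))  # 200 800

# ...but the average score barely moves.
print(round(mean(direct), 2), round(mean(combined), 2))
```

The point is simply that a mean is insensitive to sample size when the added respondents share the same underlying opinions; only the firm’s prominence in the data changes.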
I have shared my thoughts on this with you already, but I will reiterate. The invitation from one of your ‘select’ firms hit my desk, and its baldness in soliciting a good review appalled me.
If you value unbiased research, inviting any analyst or influencer firm to tap their client lists is just wrong. The issue is further compounded by inviting just a few. It should be all or none, preferably none. I wonder how you can truly assess the impact if you don’t try it both ways?
Think of it this way. What if one of the analysts did a buyer perception study of vendors’ end user clients but the analyst asked only a few vendors to promote it? The survey could include potentially glowing references from only the vendors invited to play.
Anyway good luck with it all.
Lisa Rowan, IDC
Thanks for the comment, Lisa. Of course it would be better if all the analyst firms asked their clients to take the survey, and they are all welcome to do so. I can’t make them do it, or write their emails for them. Obviously it really is the case that analysts pick and choose which vendors to ask about and which to involve. I am sure you have made similar choices. Our survey shows that the clients who respond via shared links are not notably different in their opinions from those who respond directly. If you can get a list of your firm’s users, I would be happy to test the two approaches and show you how similar the two groups’ responses are.
@Duncan – good arguments here!
On the flip side, when analysts in the large analyst firms do their vendor scatterplots, they ask the vendors to pony up references in order to do their scorings, as they don’t have any user contacts of their own. So it’s the same principle here for the analyst firms and their clients.
As long as all the analyst firms have the opportunity to ask their clients to vote, then why not? It creates a level playing field. Most research buyers today know which analyst firms they like and which ones they use, so I don’t really see the harm in it at all – it’s representative, credible data – and there isn’t any reason for there to be elements of bias. Either the research is good, or it isn’t. Having analyst firms promote an independent study is just inviting more research consumers to air their views. Most analysts today have good networks, and it’s the power of these networks that adds to their influence and credibility. If an analyst has a poor network, doesn’t that say something about their influence and the impact of their research?