The IIAR has announced the winners of the 2008 Analyst of the Year award on its blog.
The leading award goes to R ‘Ray’ Wang of Forrester Research, which also wins the award for Analyst Firm of the Year. Forrester also came top in the sectoral studies for the US, for services, for software and for communications. David Mitchell of Ovum came second, with the second spot for Analyst Firm of the Year going to Gartner.
These are, in our opinion, well-deserved awards for two great analysts, each different from the other. R ‘Ray’ Wang’s blog writing gives readers acute, sharp and easy-to-implement guidance, which is especially useful in responding to short-term opportunities. There’s little research of that kind at most analyst firms, and readers clearly appreciate it. David Mitchell’s blog continually delights and challenges its readers. It shows a breadth and depth of understanding of the broad industry context, but is all the more refreshing because of David’s evident openness to testing and learning.
One question the IIAR will continue to mull over is the campaigning that a few folk took part in, which I am sure had no effect on the final results. AR and PR professionals will understand that this enthusiasm for participation in the survey is most helpful, and that both the survey and any lobbying on whom to vote for should be taken with a grain of salt.
However, it’s clear from the initial reaction to the results that the survey will be taken very seriously by some. Evidence for that includes the issuing of press releases about the awards, one stressing Ovum and one about Forrester (thanks to Hannah for suggesting I should point out that the releases came from the IIAR, rather than from those firms).
The initial data were presented at a meeting in April, about which I wrote earlier. While a detailed summary of the results is online, there are some interesting trends in the data that are not visible from the summary. I hope the IIAR is able to release some anonymised data to allow members to carry out more detailed analysis.
One fascinating element of the results is the high standing of some smaller analyst firms. Ovum’s press release describes this as a strong performance for boutiques, but the firms that did well in this survey were a very specific subset: vendor-funded micro-firms whose businesses stress English-language blogs and a high media profile, and which are well connected into the vendor ecosystem. These organisations differ from each other, of course, but they share an exceptionally high profile with AR and PR managers. Firms like RedMonk and MWD are now well established in vendors’ minds, but remain quite small organisations, each with a handful of people. It must be remembered that the award is based on the opinions of the IIAR’s members, and their contacts, who are principally engaged in representing vendors. As a result, the awards give the views of one tiny segment of the community of users of analyst firms: the segment that most reflects vendors’ experience of working with analysts.
That said, the survey is a valuable indicator for analyst firms of how they are perceived by one of their most important audiences. And let’s not forget that AR professionals are the ones most often doing the championing inside vendors for more candor with, and support for, industry analysts.
Because of that, the IIAR should be praised for conducting this survey and for making the results public. The study is the result of substantial efforts by a small group of people, led by Jonny Bentwood, who has almost certainly created the initiative that will be the most effective at raising the profile of the IIAR.
Dunc,
We’ve always made clear the methodology and the fact that it’s only an OPINION (therefore SUBJECTIVE) poll. To me you over-analyse what is a candid and quite straightforward exercise: the goal was to recognise analysts who, in our humble opinion, do a good job. The sample is big enough to alleviate what you point out as a bias: the vote for analysts friendly to some. But we’re not negating this bias; after all, it’s common in all polls.
After all, in our jobs, perception is everything, and for boutique firms it’s also their livelihood. It is, for instance, interesting to see that James Governor got very high scores despite being a controversial figure.
Duncan – I agree with Ludo, and in fact, Barbara French and I highlighted some of the limitations of the methodology before the results were even published. The reason for doing this was not to challenge the validity of the poll per se, but to urge the IIAR team to report the findings responsibly and sensitively, which they seem to have done. I think having had that open and candid discussion in the public domain, both the spirit and limitations of the exercise were pretty clear.
Ludovic, the composition of a sample is more important than its size, and bias can be eliminated as a major factor. Lighthouse in some modest way promoted the survey and enlarged the response rate. However, if the goal of the survey were simply to recognise good analysts, then the IIAR would have found a media partner that could have expanded the survey into the end-user community. We didn’t do that, because the goal is to show who AR managers think are good.
Dale, I think we all agree on the major points: it’s a subjective poll; it’s good to reward analysts; the method does not need to be perfect to be useful; firms that count AR managers as customers and buyer-influencers want to do well in the survey. My point is simply to note that one should not get carried away.
Hi Duncan
Re your phrase: “firms that count AR managers as customers and buyer-influencers”
I am sure the innuendo was not intentional, but for the avoidance of doubt, the majority of players in the analyst space, including Gartner, Forrester, IDC, AMR, etc, fall into this category.
And I don’t wish to sound ungrateful (I really do appreciate the recognition Freeform Dynamics received), but it is probably worth highlighting that the larger firms doing well in the poll seemed to be much more excited about the result than we were.
I’m not sure if there’s any innuendo: my point is that vendor-side AR managers are more likely to vote for analyst firms they have positive relationships with. Different analysts generate very different percentages of their revenues from AR managers. Many analyst firms generate a lot of revenue from vendors, but not all of that revenue comes from AR managers or from the rest of the vendor marketing organisation. Some analyst firms are mainly engaged in vendor-funded research projects, which support the vendors’ marketing operations. A firm like Quocirca, for example, generates 99% of its revenue from vendors, and much of that is perceptual research which vendors commission to develop their marketing. When AR managers and their colleagues work with Quocirca, they have a really positive experience with a firm that puts an emphasis on making the time to market less frustrating for vendors. That is not the case with the large firms you mention, or with many boutiques, for example CMS Watch. A vendor could walk away from them feeling that they have lost a pint of plasma (to take William S Burroughs’ phrase). I really think AR people are more likely to vote for their personal suppliers than for firms they have no relationship with.
Of course, they are also voting for firms with intelligence, independence and insight – and the great thing is that both vendor-centric and user-centric analyst firms offer that. But surely awareness, favourability and the depth (and accumulated reciprocity) of their relationships are also factors in the minds of those who voted.
🙂
[…] centric (like Gartner) and those that are vendor-centric (like the Yankee Group). Duncan Chapple lays out the relationship between analyst relations organizations – and the vendor […]
[…] of industry analysts mistakenly focusses on a subset of analysts whose most common feature is that IIAR members find them easy to work with. The All The Analyst (ATA) Analyst Web Rankings encourage less experienced AR managers […]
Indeed. Things could be worse: we could take the AR pro list from a certain analyst firm and run a survey on this. Which would introduce a bias. Wait. We could run a survey of users using an analyst’s mailing list. Darn, it’s been done already…
I’m not familiar with either of those. What are you referring to?
Simple questions: how did you acquire the mailing lists, and which firms are they clients of?
Of course I don’t have the mailing lists of the IIAR survey, and I have no way of knowing which firms are clients of whom. However, the IIAR explains that they are analyst relations professionals, and so I think it’s reasonable to assume that they are the clients of some of the firms they interact with. Any other questions? I’d love to know how many people take part in the IIAR survey. Can you tell us?
Seems you’ve ripped a page out of Loyola’s book… could you answer my questions before asking another one?
1. Where’s your end-user list coming from?
2. Did you ask them which firm they were a client of and do a regression analysis to check the confirmation bias?
Are you asking about the Analyst Value Survey? That’s not been mentioned so far on this thread, but I guess you must be. Your questions are already answered on this site. We use Panalyst, a list we have mentioned several times over the years. The Analyst Value Survey asks people which firms they are clients of.
Regression analysis would not prove or disprove confirmation bias in the data. I don’t see the comparison with Loyola.
As I say, your questions are already answered on this site. Will you answer my earlier question, which is still unanswered? What were those surveys you were referring to in your comment on the 3rd?
The methodology I’ve seen used for some other surveys is equally suspect. Hence we choose not to participate. We could easily flood the sample by having 300 of our clients submit reviews for us. But that’s not a useful methodology. I’ve seen that happen. I know that’s happened. This is why we don’t participate. The sample size is key.
The challenge with all surveys is how to get a fair sample. Companies use influencer resources differently. We are often surprised when we are put on lists because we have not gone after the traditional buyer. So to us, it’s an interesting data point when vendors, many of whom vote for us though we may not have a relationship with them, notice us among their clients.
In any case, it requires a lot of work on all ends, and this is why there are multiple lists. It’s why vendors who don’t do well in a Gartner report might want one from IDC. Someone who doesn’t do well with Forrester may want an Ovum opinion, etc.
In any case, we’re humbled by the mentions. MyPOV: harping on someone’s methodology is small right now b/c these surveys represent different data points and different objectives.
Thanks for this, Ray. It’s very helpful. Of course my post is rather old, from 2008, but my point was not about the methodology. It was simply that we should recall that this is a survey of AR people, not of the wider industry. Very little about the methodology is available, but I know Jonny’s approach is thoughtful and fair, so I trust him. But I think you miss the point about the IIAR survey, and similar surveys: as an analyst firm you don’t have a choice about whether or not you participate. You are going to be mentioned [as long as participation in the survey is more than minimal]. The people who get value from Constellation also use Gartner, Forrester, IDC, Ovum etc. Indeed, as you say, they are often not clients of Constellation, but even if they are clients of Gartner and Forrester they will benefit from your firm and will mention it. That’s why your firm and HfS Research do so well in the IIAR survey, even if, for now, most of the people who value you are not clients.