Do the IIAR awards simply reward large firms?

[Chart: firms most followed by analysts, Kea Company Analyst Attitude Survey]

The 2016 awards from the Institute for Industry Analyst Relations (IIAR) seem to reward firms for the scale of their analyst relations rather than their quality.

In a blog post on July 6th, the IIAR named IBM the best analyst relations team, with Cisco, Dell and HP as runners-up. Together with Microsoft, which outsources much of its analyst relations to WE Worldwide, these are the five firms most followed by analysts according to Kea Company’s Analyst Attitude Survey (chart above). It seems too much of a coincidence that the IIAR’s award has gone, yet again, to IBM, and that the three runners-up are the other three firms with the largest in-house AR teams.

I’ve posed a question about the methodology to the IIAR: does the method track the good, or the large?

Given the general stability in the scale of firms’ analyst relations outreach, the value of an annual award focused on volume seems limited. Wouldn’t it be better to recognize the teams that had most improved, or been most successful, such as Amazon Web Services or Ericsson?

Given the IIAR’s track record, I’m not expecting a meaningful response. The IIAR has previously declined to answer similar questions about its other ranking, the Tragic Quadrant. While I haven’t asked about its analyst team awards before, its initial brief response (“the point is not to compare but to share the good news and elevate the profession”, see comments below) amounts to a refusal to explain the methodology. The IIAR is no more transparent about its ranking of analyst firms (where it did not answer how the survey calculated ‘Influence’ and ‘Relevance’) than about its ranking of AR teams (where, again, it will not explain its method).

What is certainly the case is that although the IIAR survey claims to measure nicely-named criteria like Responsiveness, Relationship, and Results, its results suggest that the method sums scores rather than averaging them: 100 analysts rating a large firm at 5/10 on average produce a better total than 50 analysts rating a mid-size firm at 9/10 on average. There’s no other way to replicate results that highlight the largest programs rather than the most excellent.
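The arithmetic behind that claim can be sketched in a few lines. The ratings below are the illustrative figures from the paragraph above, not actual survey data, and the scoring functions are my own guesses at the two possible methods:

```python
# Hypothetical comparison of two scoring methods for an AR survey:
# summing every analyst's rating rewards scale, while averaging
# rewards the quality of the typical relationship.

def total_score(ratings):
    """Sum of all ratings -- grows with the number of respondents."""
    return sum(ratings)

def average_score(ratings):
    """Mean rating -- independent of the number of respondents."""
    return sum(ratings) / len(ratings)

large_firm = [5] * 100   # 100 analysts, each rating the firm 5/10
mid_firm   = [9] * 50    # 50 analysts, each rating the firm 9/10

# Summing puts the large program on top (500 vs 450)...
assert total_score(large_firm) > total_score(mid_firm)
# ...while averaging puts the better-rated program on top (9.0 vs 5.0).
assert average_score(mid_firm) > average_score(large_firm)
```

Only the summed method reproduces a result in which the firms with the biggest AR teams come out ahead.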

I just think that the best AR teams deserve more meaningful recognition for their efforts.
