Not-so-Magic Quadrant: Gartner analysts’ experience falls; vendors’ data collection burden grows

If you only read one post by Simon Levin this year, this is it (Is Gartner research quality under threat?). Simon puts the current dip in the collective experience of MQ authors into context: MQ authors and leads do change regularly, and vendors need an ongoing focus on the MQ to make an impact. 

Andrew Hsu would also make the point that it’s increasingly important to communicate in the ‘off’ season, before the MQ data collection starts.

I think it’s valuable to put this into a very long-term perspective too. In my PhD, I’m researching the ongoing trend for the MQ to become more structured and checklist-like, a trend that accelerated decades ago with the Saatchi and Information Partners acquisitions of Gartner Inc (both of which increased the pressure on the business to cover the cost of the capital used in the purchases). So for decades the reporting burden has been shifted increasingly onto vendors, through submissions of data and the collection of reviews on Peer Insights (I presented on ‘How the Magic Quadrant lost its Magic’ at the 4S/EASST conference in 2016: slides here).

The result of this ‘automation’, as we all see, is a substantial uplift in the difference made by careful analyst relations work. Each MQ takes more time than ever (as Ludovic Leforestier often points out), and the favourable impact of extra effort seems to be greater.

Sadly, that means the MQ is weighted towards AR leaders and becomes more affected by information asymmetry; at firms that don’t do AR well, their lights get hidden under bushels. That also increases misinformation about analysts, which CCgroup has written about.

That will only increase the demand for other research formats (see CCgroup’s analysis of Peer Insights, for example) and for other firms’ research.

PS. Averages are broad brushes, and no-one claims that every value is at the average. When Simon points out the falling average experience, he’s pointing to a real trend across most MQs. No-one says you should not relate to MQ authors as individuals.

One very specific way in which the data collection burden has risen is the rise of Peer Insights. Some analysts are using the reviews for insights when developing Critical Capabilities and other parts of the MQ assessment, and in many cases vendors are putting huge effort into generating peer reviews, partly because they see those reviews being used to supplement vendors’ client references. Simon Levin conducted a lovely survey of MQ participants, which showed a 30% increase over two years in the time taken by vendors. Simon picks up on the way vendors are starting earlier on the MQ, and I imagine he might also agree that more of the smaller firms are ramping up their efforts to marshal their facts for the MQ.