Gartner Curries the Magic Quadrant

Gartner has announced a series of changes to its Magic Quadrant (MQ) and MarketScope (MS).

In a nutshell, the key changes are these:
1. Gartner is standardizing some elements of the MQ and MS
2. Primarily, the workflow, service levels and terminology have been standardized
3. Criteria naming has been standardized, but each analyst can use different elements to form criteria with the same names
4. Vendors will be allowed to comment only after the peer review stage

It seems to us that the net effect of this is similar to currying meat: by slow-cooking meat in a sauce, curry makes meats of different qualities seem the same. In the same way, introducing similar terminology and communication protocols into Gartner’s presentation of the MQ and MS makes them appear more consistent, even while analysts largely continue to decide for themselves how to define ‘vision’ and ‘execution’. Crucially, clients will not be able to see how vendors are scored on each of the criteria.

Gartner pre-announced these changes to its Magic Quadrant methods on a call with us and other AR experts last week. It also clarified its citation policies and other ways of interacting with the firm.

The goal of changing and improving the MQ was announced back in October, just prior to the purchase of META. The METAspectrum offered Gartner the ability to strengthen the MQ further. The October goals have since been adapted to draw heavily on META’s experience, and to attempt to add more value for customers.

Vendors have certainly demanded more insight into Gartner’s research processes, which were neither consistent nor predictable.

A number of factors have not changed: the MQ remains ‘Magic’, qualitatively based on analysts’ opinions rather than an ostensibly quantitative scoring methodology such as that of the Forrester Wave.

The first time the MQs are republished there will be some major differences, although the changes will be much greater for some quadrants than for others. This will be a one-time earthquake across the MQs: a degree of change can be expected this year that will not be repeated later.

Gartner will publish a list of MQs and MSs, along with publication dates and ‘refresh rates’. This list of markets is now controlled by a central research board, to ensure that there is less overlap.

MarketScopes are still poorly understood. The MS is used in early, or mature, markets. It helps rate and track vendors even when the in-depth analysis of the MQ cannot be justified. In early markets, it’s hard to focus on the same issues and be sure that you are capturing the right criteria. On the MS, the analyst will pick some criteria; Gartner will not centrally specify which are the right criteria to use. Each individual analyst will select what they believe is the right subset of criteria for a given market. In mature markets, relative placement changes much more slowly, if at all, so it’s harder to justify the effort of an MQ when the differences are so small.

The changes to the MQ process start with publishing a single MQ template setting out the process for each MQ, broken down by criteria, with a list of all the vendors included. The MQ template maps better onto the interactive MQs. There will be tighter definitions to set the boundaries of each MQ or MS.

Fifteen criteria have been standardized: previously there was no standard set of criteria for vision (strategy, sales, offering, model, geography and so on) or execution (services, execution, responsiveness, operations and so on). However, analysts will be allowed to pick and mix from this menu of terms, and to identify new sub-criteria to define each of the top-level elements. So, for example, while Innovation will be a standard criterion, each analyst will define Innovation as she or he feels best fits the market being evaluated. Criteria will now be weighted: zero, low or high. While this standardises language, our judgement is that it changes little else (Gartner, of course, disagrees with that judgement).
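To make the weighting scheme concrete, here is a minimal sketch of how zero/low/high criterion weights might roll up into a single axis score. The numeric weight values, the rating scale and the criterion names are our own illustrative assumptions, not Gartner’s published method.

```python
# Illustrative only: how zero/low/high criterion weights might combine
# per-criterion ratings into one axis score. All values are hypothetical.
WEIGHTS = {"zero": 0.0, "low": 1.0, "high": 2.0}

def axis_score(ratings, weightings):
    """Weighted average of per-criterion ratings (assumed 1-5 scale).

    ratings:    {criterion: numeric rating}
    weightings: {criterion: 'zero' | 'low' | 'high'}
    """
    total = sum(WEIGHTS[weightings[c]] * r for c, r in ratings.items())
    weight_sum = sum(WEIGHTS[weightings[c]] for c in ratings)
    return total / weight_sum if weight_sum else 0.0

# Example: 'Innovation' weighted high, 'Geography' low, 'Model' zeroed out.
vision = axis_score(
    {"Innovation": 4, "Geography": 3, "Model": 5},
    {"Innovation": "high", "Geography": "low", "Model": "zero"},
)
```

The point of the sketch is that a zero weight removes a criterion entirely, so two analysts using the same fifteen names but different weights (and different sub-criteria) can still produce very different placements.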

Analysts will retain a lot of flexibility over the way in which they gather information. Collection methods will vary widely between studies. For example, extensive data collection will be more common in the least mature markets, where Gartner’s opinion isn’t already set from client and vendor interactions. Of course, some markets that are more mature may continue to need modified collection methods; after all, some markets are still changing a lot. However, our opinion is that Gartner is signalling that the effort of data collection will differ greatly across MQs. Where the analyst feels that she or he has adequate insight from existing interactions, the need for data collection will be lower, or even minimal.

The internal review process has become slightly more rigorous. Findings are presented first to one or more of 45 virtual research communities that go well beyond each analyst’s immediate team. Peer review involves people that the analyst wants to bring in to critique and validate the findings of the research.

Vendor communication is being standardized. A single communication style will be used, with formatted standard emails. The first contact will verify that vendor contacts are correct. The second outlines the project process: definitions, criteria and weights[1]. This should give more insight into the data collection process[2]. There will no longer be an evaluation criteria note; instead this information will be incorporated into the piece that evaluates the vendors.

The process then involves a factual review after the peer review: very late, and perhaps too late for vendors’ feedback to make a real impact on the findings. Since the MQ reflects opinion, Gartner stresses that it only wants factual comments. It will send the full MQ graphic and vendor comments, and wants written comments back within five days. It will not share the scores in the way that META did. Gartner will be less transparent in that respect. This also makes it easier to conceal the points on which each vendor scores weakly.

META offered a 30-minute call with the analyst to discuss the research prior to posting the data. These calls are not for negotiation, but to discuss and explain the findings. Gartner has adopted this practice. Vendors will now get a courtesy copy 12 hours before the MQ goes up on the Gartner site.

Watch this space for further developments. The list of markets will be posted in August; by November there will be a follow-up report about the MQ process, and Gartner will publish an end-user report on how to use the MQ.

[1] The top-level criteria will be published, but not the weightings, or the way these criteria are defined. The weightings between the top-level items could be announced, for example, but if the composition of each of those items is kept secret then that gives little insight.
[2] Gartner claims it will publish the ‘refresh rate’ rather than a firm timetable.

Duncan Chapple

Duncan Chapple is the preeminent consultant on optimising international analyst relations and the value created by analyst firms. As SageCircle research director, Chapple directs programs that assess and increase the business value of relationships with industry analysts and sourcing advisors.
