There’s always a lot of discussion in the analyst relations community about the rollout of new Cool Vendor designations from Gartner. We’d be better prepared for them if we looked at the academic research into contest-like evaluations being done at the University of Edinburgh’s Analyst Observatory and elsewhere. Drawing on a roundtable at Warwick University, I have some ideas about how leading academics’ research can illuminate this debate.
David Stark
The release of the Cool Vendor awards is slowly becoming a real event in the analyst relations world. It’s a competition at several levels, both inside the market and inside Gartner. On one level, different firms are competing almost head-to-head, and there’s a triadic competition (see David Stark, for example: “For What It’s Worth”), where analysts compete against each other with their nominations. Then there’s also the contest in the market, where the designation is a prize that helps the vendor win customers.
Looking at these different levels of competition more broadly helps us understand this event, and the competitiveness it reflects. The Cool Vendor process is more than a research method: competition and contest are old and vibrant parts of human life, and they potentially give us a different lens from traditional market competition.
The standard of excellence in a traditional contest is clear: how high the winning high jump is, for example. However, competitions also refine and re-establish what excellence is. Some competitions have moving standards: competitors can challenge best practice. Competitors often push the leading edge and spotlight new possibilities well in advance of their broad market adoption.
Steve Fuller
Steve Fuller (@profstevefuller) gives us a modern lens in his new book Post-Truth: Knowledge As A Power Game. When evaluation aims to clarify reality, are the judges developing the criteria, or not? It’s an essential distinction in the Cool Vendor process: at the highest level of the three overarching standards (Innovative, Intriguing and Impactful), Gartner analysts have little power to influence the rules of the Cool Vendor process. There’s a world-view embedded in the criteria and, to a degree, analysts who take part in the peer review process are spared the heavy lifting of considering what is ‘cool’ in their markets. Indeed, the assumed neutrality of the criteria turns the spotlight away from any systemic patterns and habits which might make the process non-random. The relative success with which Israeli firms have won the award may be telling, for instance (the IIAR will shortly publish a useful note on that).
Claude Rosental
The Cool Vendor process, of course, has become more formal over time. Even so, informal evaluation methods persist in many settings, not just among analysts. Claude Rosental shows this beautifully in his paper Managing Research through Statistics and Demos: gossip, chit-chat and social life are as important as formal evaluation methods. If we think about it, the process of identifying candidates for a Cool Vendor nomination must also involve many informal inputs.
Furthermore, there are new ways of evaluating and quantifying things. Quantification has many problems: perhaps most obviously, it is reductionist (for example, in the way that market share data has concealed the rise of open source and freemium solutions). In “Modes of Exchange: The Cultures and Politics of Public Demonstrations”, Rosental also shows that there is a unique problem with technology solutions, which rely so heavily on product demonstrations. Demos are subject to failure and fakery, and are often directly incomparable. Despite that lack of comparability, they give busy investigators a quick, engaging and enjoyable grasp of a solution. They provide a playful scent of the value of solutions, but few organizations have discussed the safeguards they need to avoid being misled by demos.
Michèle Lamont
A global, but uneven, trend toward more evaluative techniques seems to be unfolding in every setting. Harvard’s Michèle Lamont has researched How Quality Is Recognized by Peer Review Panels (the Cool Vendor process, for example, involves peer review). Pierre Bourdieu, Bruno Latour, Jens Beckert, and David Stark are among the academics who have criticised the free-for-all drift of organisations into this evaluation culture. It shifts agendas (not only those of researchers) and introduces biases into the processes by which evaluations are legitimated and structured. Many researchers, for example in cognitive science, take a rather narrow approach to understanding evaluation.
Lamont’s research shows how these more structured, almost automated, evaluation processes create symbolic boundaries with uncertain and unforeseen outcomes. Ian Hacking’s work on how classifications interact with those being classified hints at some of the possible challenges. Most powerfully, Hacking reminds us that it is often people who are being evaluated through specific events, even if evaluators think they are assessing something else. Hacking’s study, like the research Christian Hampel and I presented in 2015, draws on Erving Goffman to illuminate the particular setting in which relationships are created or declined.
None of this, of course, detracts from the honest work done to identify Cool Vendors and the high value these designations have for buyers, investors and the vendors themselves. However, as the methodology develops, and now that competitors are creating similar designations, it’s important that the consumers of these awards put them in context.