I thought it was the best analyst briefing ever, but was it the worst? I’m not so sure after speaking with Marian Gatzweiler. Gatzweiler is an award-winning research fellow at the University of Oxford, and a tutor at the University of Edinburgh, specialising in the emergent nature of evaluation criteria.
I shared with him a story I’ve often told about what I think was the best analyst briefing I ever saw: MicroStrategy’s pitch to Ovum. The relationship between the two companies had been in a rut for a while. Ovum had its own idea of how the market was moving, while MicroStrategy had a different view. Of course, each was drawing on the experience of a slightly different client base and market implantation. MicroStrategy moved the relationship forward by totally shifting its tactics. After one bruising meeting, it returned with a pitch that spoke only to Ovum’s concerns about the older, shallower set of solutions, and used Ovum’s own business intelligence stack to show how deep it could go. It did not return to its past disagreements with Ovum. Instead it showed, as far as possible, how it was making progress on the factors where Ovum thought MicroStrategy was falling short.
Gatzweiler challenged my assumption that it was one of the best briefings I ever saw. Certainly, the outcomes for MicroStrategy were good: by using Ovum’s language and model, it forced the analysts to see how far its partnerships with database and ETL tool providers took it beyond shallow reporting functionality.
Gatzweiler, however, pointed out the continually emergent nature of evaluation models. Many people in the analyst relations community treat analysts’ criteria as relatively stable. The big dimensions don’t differ much across the firms, even if each organisation has its own vocabulary for execution, marketing, strategy, vision and work in progress. Even so, these are not static models, like Platonic forms, describing some unchanging notion of the value that solutions produce for clients.
The evaluation process is messy. The models and diagrammatic outputs of analyst research have both to conceal that mess and to provide a tool for dialogue that allows different stakeholders’ viewpoints to be integrated. Physicists use the term “concurrent visibility” to describe the quality of a thing that can be observed simultaneously from different positions or frequencies. Evaluation models, similarly, produce concurrent visibility.
Analysts’ research taxonomies and scoring criteria are not simply robust tools for recognising value. They are moving targets: commonly understood symbols for a continually shifting goal. My research has found that the most effective pitches to analysts are often battles between overlapping and competing criteria. That battle helps analysts to be confident that they have comprehensively identified the value being produced by solutions. That is why analysts’ evaluation criteria need to be both open to challenge and stable enough to avoid ineffective compromises.
The importance of challenging categories explains why pitches are needed, and why face-to-face briefings are so highly sought after. Analysts value pitches because they help to refine ideas about the market and to identify new value created by innovative solutions. Methods need to be fast enough to allow value to be recognised quickly, but also flexible enough to identify novel ways of producing value. The richness of face-to-face discussion adds value because it moves analysts beyond recognising pre-existing criteria and categories; it creates value by uncovering new benefits of emerging solutions.
For many analysts, the continual reconsideration and reconstruction of evaluation methodologies is the system by which value is attributed more accurately to different solutions. That means the research findings are moving goalposts too, not only the research methods. Because criteria continually emerge as the market evolves, research starts to go out of date as soon as it is written. Vendors need to be in the conversation continually to influence the criteria; otherwise they end up, as MicroStrategy did with Ovum, responding to past criteria as if they were fixed. In such a setting, the best that most vendors can hope for is to show their compliance with the analysts’ models. They have to compromise in doing so, pulling their punches on any strong differences with the analysts’ approach. When MicroStrategy pitched to Ovum, it was trying to make itself look as familiar as possible. However, Gatzweiler argued, such an approach kills innovation. True strategic advantage cannot come from resembling the market generically. Businesses need to create additional value to gain an edge over their competition, and to give analysts more insight into the market than they had before. Giving analysts a picture of the market that they instantly recognise adds little value.
Needless to say, such an approach has an impact on the relationship with analysts that may be major but will often be intangible. Once both vendors and analysts accept that the market is not static, they can add huge value to each other by discussing not only those aspects of solutions that deliver concrete value to clients today, but also the ways in which that value can change and grow in the future.