How should analyst relations (AR) be valued? That fundamental question underlies choices for AR measurement. One answer has been made concrete in the Analyst Attitude Survey awards given to AR teams in December 2020. The top AAS awards went to AWS, Siemens and Microsoft.
About the awards
Since 2014, several organizations have surveyed selected industry analysts annually to ask their opinion of the AR teams at the largest ‘powerhouse’ IT vendors. The AAS awards for AR teams take a different approach: they draw on the Analyst Observatory’s Analyst Attitude Survey, which uses a random, representative sample of the analyst community as a whole. The AAS is independently collected by my colleagues and me at the University of Edinburgh. The awards are published simultaneously by the Analyst Observatory and SageCircle, the company which commercializes the study.
Random, representative sampling of analysts has some advantages. The Analyst Attitude Survey asks analysts to select from a pick list of dozens of firms, and has a write-in option which collects data about many others. It allows a very large sample size, an order of magnitude greater than the minimum needed for statistically significant results. Furthermore, the Analyst Attitude Survey asks a wide range of questions spanning analysts’ experience of the AR teams, their beliefs about the business performance of the vendors, and their emotions about the firms.
Awards represent a substantial opportunity to offer role models to analyst relations teams. They let analysts thank the AR professionals with whom they work. However, awards can fall short of this goal in three ways.
First, AR expertise is not fully visible to analysts. Parts of the AR profession are undergoing considerable transformation, and few role models for that transformation are available. The transforming firms are leading the way in generating insights, supporting sales enablement, running reference programmes, and focussing on customer advocacy. These are areas where AR’s value is felt internally, and surveys of analysts cannot capture them because analysts cannot see those internal processes.
Second, many awards assume that excellence is concentrated in the giant ‘Powerhouse’ firms, but that isn’t the case. Innovative and excellent AR teams come in all sizes and are not only in the powerhouse vendors. The vast majority of AR teams have more to learn from similarly-sized AR teams than from giants. The Analyst Attitude Survey data shows there is an efficient frontier: with a fixed amount of resources there is a limit to the outcome, but there are choices about how to manage that resource.
Third, awards come with the embedded ideas about value and objectives that scholars call teleology. For example, an AR program could spread its effort across more people, or across fewer. It might focus most effort on two firms, or on five firms. It might focus on analysts’ ‘customer experience’ rather than their perceptions of the company as a whole. The selection of questions and of participants influences the findings. I’m very happy to see awards using Analyst Attitude Survey data because our large, random and representative sample strips away some of the bias that comes from those embedded values. There are also substantial advantages in asking analysts about their advocacy of the brands, their emotional impact and their experience of the AR teams.
The efficient frontier
This chart compares data from the H2 2020 Analyst Attitude Survey on two dimensions: how many analysts commented on each firm in the Analyst Attitude Survey, and how often those analysts interact with it. Multiplying those two numbers gives us an idea of how many interactions each firm organizes. On this chart, we show a group of firms that organized a similar number of interactions with survey participants. The line shows what portfolio managers call an efficient frontier: a range of options along which trade-offs can be made.
At one end are Ericsson, Siemens and Intel: they interact with fewer analysts, with a higher average frequency. At the other end are Google, Oracle and SAP, who interact with more analysts, with a lower average frequency. Naturally, these are averages, with some analysts interacting more often and others interacting less or not at all. But the trend line shows the general set of trade-offs. To do better you either need more resources, or you need to move along the efficient frontier, either up or down, depending on your goals.
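The trade-off described above can be sketched numerically. The figures below are hypothetical, invented purely for illustration; they are not taken from the Analyst Attitude Survey data. The point is that very different breadth/frequency choices can sit on the same frontier, organizing roughly the same total number of interactions:

```python
# Hypothetical illustration of the breadth vs. frequency trade-off.
# The firm labels and numbers are invented, not survey results.

firms = {
    # label: (analysts_commenting, avg_interactions_per_analyst)
    "High-touch programme":  (40, 6.0),   # fewer analysts, higher frequency
    "Mid-range programme":   (80, 3.0),
    "Broad-reach programme": (160, 1.5),  # more analysts, lower frequency
}

for label, (breadth, frequency) in firms.items():
    # Rough proxy for the number of interactions the AR team organizes
    volume = breadth * frequency
    print(f"{label}: ~{volume:.0f} interactions")
```

All three hypothetical programmes organize about 240 interactions: the same resource envelope, allocated at different points along the frontier. Moving along the line changes who you reach and how deeply, not how much total effort you spend.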
That choice is all about goals. To build deep relationships, you need more interactions with each analyst. To maximise your coverage and mentions, you need to spread your effort across more people. The Analyst Attitude Survey uncovers those choices and makes them explicit. If we focused only on the AR teams that organize the most interactions, we might miss high-touch AR programmes that were extracting more insights or gaining stronger advocacy.
Why this matters
Kevin Lucas, the Forrester analyst serving AR professionals, speaks about AR teams’ need to make the business responsible for AR success, not the team itself. That’s a real need, because an AR team that is working hard cannot by itself shift analysts’ perception of the brand. So AR teams need separate measures for analysts’ perception of the business and for their ‘customer experience’ of the AR team itself. A highly efficient AR team needs to know whether there are other obstacles to analyst advocacy, so that it can mobilize more support from elsewhere in the business. Viewed through the lens of the Analyst Attitude Survey, the only longitudinal survey of analyst advocacy, AR teams can spotlight opportunities to improve and gain a resource that helps them mobilize the rest of the business to shift analyst perception.