Bland isn’t best, for AR or for analysts

Yesterday’s meeting of the IIAR heard the initial results from its Analyst of the Year survey. Although the data don’t yet show very strong patterns, there are some interesting trends. For example, some firms get both strong and weak scores for influence and quality; what’s interesting is the ratio between the two.

David suggested that there’s an opportunity to produce a sort of ‘net promoter score’, in which the negative score is subtracted from the positive. That approach has been used before (and by the IIAR, here), but it is not widespread in the analyst relations world. Much of the work in developing our understanding of net promoter methods has been done by academics such as Mark Ritson, who taught me at London Business School.
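As a rough illustration of the subtraction David describes, the calculation might look like the sketch below. The figures used are hypothetical, not from the survey.

```python
def net_promoter_style_score(positives, negatives, total):
    """Net-promoter-style score: the share of positive ratings
    minus the share of negative ratings, as a percentage."""
    return (positives - negatives) / total * 100

# Hypothetical example: 60 positive and 20 negative ratings out of 100
score = net_promoter_style_score(60, 20, 100)  # 40.0
```

A firm with many positive but also many negative ratings would score lower than one with fewer, but uniformly positive, ratings, which is what makes the ratio between the two scores interesting.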

That discussion, and a heated debate over ethics in the analyst industry, brought me back to the notion of best practice. What is the relationship between what is done in the ‘real world’ and what rigorous research, by academics and others, shows to be best? Our friends at Sage Circle seem to feel that one must choose between them: one of their three guiding principles is to offer “real world best practice and not focus on academic-style best practices”.

In our opinion, it’s a mistake to assume, from first principles, that what practitioners actually do is either better or worse than what academic-style research suggests.

  1. First, what is seen as best practice in current analyst relations operations can have problems. A practice is not best simply because it is common. For example, it is common to stretch the budget of analyst events to maximize attendance, on the logic of ‘analyst hours per dollar’. By this reasoning, which some call best practice, a seven-hour event with 50 analysts is twice as good as an event with 25 analysts that costs the same: the first event gets the vendor 350 analyst hours; the second, 175. A few vendors will even turn that into a cash figure: if an analyst otherwise costs $200 an hour to hire, then the first event has ‘made’ the firm $70,000 of analyst time, versus $35,000. What this misses is that the experience is far more favourable for the most influential analysts when half as many analysts are competing for the vendor’s time. Normally, spreading time across fewer analysts is better.
  2. Second, much of what presents itself as ‘best practice’ is simply benchmarking against what is most commonly looked for and found. Many surveys about analyst relations effectiveness, for example, ask about techniques (how different forms of content, information tools and channels are used). In fact, the effectiveness of analyst relations is more strongly connected to a number of emotional variables that many AR managers fight shy of thinking about, including how candid analysts feel the vendor is. Current practice under-emphasizes the emotional aspect of the relationship and focuses instead on information transactions.
  3. Third, ‘real world’ practice is often cautious and retrospective. Where does this idea of ‘real world’ best practice come from? It’s not very open to objective or rational debate, and it produces paradigm blindness. A kind of common sense dictates the contents of this body of knowledge, based on what the ‘leading’ vendors do, and this causes a certain reversion to the mean. If your company uses a tactic that is not seen as ‘real world best practice’, pressure builds for you to stop using it, even if it’s effective.
  4. Fourth, what if best practice is wrong for your firm? For example, there’s a lot of discussion about how vendors should allocate resources to web 2.0 strategies; for some vendors, however, the right choice is not to.
  5. Fifth, what would result from assuming that the findings of academic-style research are inferior to what is seen as best practice in the real world? We would lose a valuable opportunity to validate our assumptions, and we would shut out information that challenges us to improve.
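The ‘analyst hours per dollar’ arithmetic in the first point above can be sketched as follows. The figures are taken directly from the example; the passage’s point is that this metric ignores the quality of each analyst’s experience.

```python
HOURLY_RATE = 200  # assumed cost, per the example, of hiring an analyst for an hour

def event_value(analysts, hours, rate=HOURLY_RATE):
    """Total analyst hours at an event, and their notional cash value."""
    analyst_hours = analysts * hours
    return analyst_hours, analyst_hours * rate

# The two same-cost, seven-hour events from the example:
print(event_value(50, 7))  # (350, 70000) -- 'twice as good' by this metric
print(event_value(25, 7))  # (175, 35000)
```

The metric doubles when attendance doubles, which is exactly why it cannot capture the greater favourability of the smaller event for the most influential analysts.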

The reality is that, while working in best-practice ways is safe, real competitive advantage can also come from providing a distinct and different experience. Using effective methods that are not widespread, and looking for effective methods that run against the common sense of the AR profession, can give firms a great competitive advantage. Great AR need not be bland and average. However, the struggle to match your competitors’ AR practice can simply make your AR program dull and ineffective.

To return to the start, this is also the moral of the Analyst of the Year competition. The ‘best’ analysts are not those who conform to the mainstream; nor do they conform with each other. They take distinctive approaches that ring true with their individual personalities and brand values. Can we really be so sure that the same cannot also be said of analyst relations approaches?


3 thoughts on “Bland isn’t best, for AR or for analysts”

  • Hi Duncan – as a firm that specialises in best practice research, I agree that neither the average nor the most common is the best reference point – it is what is most effective that matters. If you take IT as a parallel, whether we are researching SOA, BPM, change management, security, mobility or whatever, we always uncover an ‘elite’ group that seems to get better results than others. Of course the challenge is always how you measure results, but assuming you can nail this in a particular context, it is usually possible to identify this elite minority. The question then is: what do those guys have in common?

    We generally find that it is a combination of approach and circumstance, and then the game starts in terms of unravelling the dependencies. What we then often find is that there is not one ‘best approach’ but a number, and the one that is most relevant depends on your situation and what you are trying to achieve. The point is that the quest for a single definition of best practice is generally a futile one, and we are constantly battling with IT vendors who are looking for ‘the answer’ or the ‘magic bullet’ from research. Concepts such as ‘Average Inc’ get in the way of this whole process.

    I guess this is a long way of saying that I agree with the general thrust of your post, though I think the real world element helps to refine and update the underlying principles, particularly when circumstances are changing at either a macro level (e.g. market conditions or communications landscape) or micro level (e.g. budget constraints or tactical objectives). The best, most effective performers are constantly testing their previous assumptions and current ‘accepted wisdom’.

    The bottom line is that taking your lead from the real world doesn’t have to equate to bland or average. In fact, I am personally very passionate about making sure it doesn’t!
