There is a counterproductive confusion at the heart of how most B2B technology firms organise their intelligence-gathering. Analyst relations sits in one corner — often in marketing or communications — busily managing briefing schedules and preparing for Magic Quadrant submissions. Market and competitive intelligence sits in another, triangulating win/loss data and commissioning research reports. The two functions rarely talk to each other in any structured way, and the result is that both are weaker than they need to be.
I was reminded of this after a week in Barcelona at Mobile World Congress, where I had the chance to meet several colleagues from market intelligence providers, and even more of their vendor-side clients. It also brought back my previous visit to Barcelona, to speak at SCIP Intellicon, where a workshop on B2B market research quality surfaced something that AR professionals ought to find alarming. Research buyers — the people whose organisations actually pay for the intelligence that shapes buying decisions — are increasingly sceptical of the analysis they receive. One participant described a quality-testing protocol that is almost brutally simple: check a provider’s numbers against known data points. If they are off by an order of magnitude, the provider is out. Another noted that error propagation across the research ecosystem — where flawed data from one report spreads unchallenged into several others — makes straightforward triangulation between sources unreliable. A third pointed out that websites conveying analytical authority are now trivially easy to build, so provider selection has become genuinely difficult.
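To make that protocol concrete, here is a minimal sketch in Python of what such an order-of-magnitude screen might look like. The metric names and figures are hypothetical illustrations, not real market data, and reading "off by an order of magnitude" as "outside a tenfold band around a trusted reference" is one reasonable interpretation, not the workshop participant's exact method.

```python
# A minimal sketch of the order-of-magnitude quality check described above.
# All metric names and figures are hypothetical, not real market data.

def within_order_of_magnitude(claimed: float, reference: float) -> bool:
    """Return True if the claimed figure is within 10x of the trusted reference."""
    if claimed <= 0 or reference <= 0:
        return False  # a non-positive market figure fails the sanity check outright
    ratio = claimed / reference
    return 0.1 <= ratio <= 10

# Data points the buyer already trusts (hypothetical values, in $m and units).
reference_points = {"segment_revenue_2024": 850.0, "installed_base_units": 12_000.0}

# The corresponding claims from a provider under evaluation (hypothetical).
provider_claims = {"segment_revenue_2024": 79.0, "installed_base_units": 11_400.0}

# Collect every metric where the provider's number falls outside the 10x band.
failures = [
    metric
    for metric, reference in reference_points.items()
    if not within_order_of_magnitude(provider_claims[metric], reference)
]

if failures:
    print(f"Provider fails the sanity check on: {', '.join(failures)}")
else:
    print("Provider passes the order-of-magnitude screen.")
```

Run against these illustrative numbers, the revenue claim (79 against a trusted 850) falls just outside the tenfold band and trips the screen, while the installed-base figure passes. The point of the check is its bluntness: it catches providers whose models are not merely imprecise but disconnected from reality.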
These are not abstract concerns. They describe the information environment in which analysts form their views about your market, your competitors, and your place in both.
The practical implication for AR programmes is one that most still resist: analyst interactions are not just a communications channel. They are a research instrument, and they should be treated as one. When a senior Gartner or Forrester analyst explains why they have positioned a competitor differently from you, or describes what buying criteria are shifting in your sector, that is primary intelligence. It comes with context, with nuance, and with access to buyer conversations that your own research teams cannot replicate. Treating that exchange as a briefing to be “won” — rather than a research session to be mined — is an expensive form of waste.
The problem is structural as much as it is attitudinal. Large vendors, simply by virtue of volume, have disproportionate access to analyst thinking. They brief more, they attend more inquiry calls, and their internal teams develop a sharper picture of how analysts are framing competitive markets. Smaller vendors, with lower budgets and leaner teams, receive less of this insight. That asymmetry compounds over time: the firms best placed to act on analytical intelligence are those with strong market positions, while the challengers most in need of early signals find themselves working with a thinner feed. The answer for smaller firms is not to try to out-brief the incumbents, a competition they cannot win, but to go deeper with a carefully selected tier of analysts and to bring genuinely differentiated, evidence-based perspectives to those conversations — perspectives that analysts cannot find elsewhere.
On the question of the Magic Quadrant and its equivalents, the picture is more nuanced than the annual debate about their influence usually allows. The MQ remains, by some margin, the most influential non-financial business research document in the technology sector. It shapes shortlists, justifies budgets, and gives procurement committees a common reference point. That commercial reality is not going to disappear any time soon, and any AR programme that treats MQ positioning as a secondary concern is making a category error.
But the nature of that influence is changing. The early quadrant-style tools were genuinely analytical instruments — qualitative, forward-looking, and built on synthesis as much as process. Over time, they have industrialised. The methodology has been standardised, data collection has expanded, and the result is considerably easier to produce consistently, but it often lags vendor reality by the time it appears. Standardisation creates procedural fairness at the cost of real-world nuance, and the inclusion thresholds and revenue filters built into most major evaluations introduce a structural bias toward incumbents that is not, in itself, a guide to who is actually solving buyers’ problems most effectively.
Sophisticated buyers have noticed. They still use MQ-style tools — because the alternatives require more work and the quadrant at least provides a shared frame — but they are increasingly bringing additional evidence into the decision: peer review platforms, customer references, use-case-specific evaluations, and the direct input of specialist analysts whose coverage depth exceeds what a major firm can offer across dozens of markets. The result is that a single MQ position, while still important, no longer carries the authority it did fifteen years ago. It has become one signal among several rather than the decisive map.
The question this raises for AR programmes is not whether to engage with major evaluations — you should, actively and systematically — but whether to be intellectually captive to them. The risk is real. When an organisation orients its entire positioning effort around a single publication date, it tends to tell a story shaped by evaluation criteria rather than one shaped by genuine differentiation. Analysts, being intelligent professionals, can usually tell the difference. The firms that perform best in evaluations are rarely the ones that have most carefully reverse-engineered the rubric. They are the ones whose programme gives analysts a consistent, evidence-rich, and commercially honest account of what they do and for whom.
The direction of travel for evaluation frameworks is toward greater granularity, transparency, and continuity. Attitude surveys, mindshare tracking, and structured measurement of how analysts describe your firm in unguarded conversations all reflect the same underlying insight: the picture matters less than the process that produces it. Building an AR programme around that principle requires treating analysts as long-term intellectual partners rather than as an annual audience. It requires feeding insights from those interactions back into product, strategy, and competitive intelligence teams. And it requires the organisational honesty to act on what analysts tell you, even when that is uncomfortable.
Barcelona was a useful reminder of how far market intelligence still has to travel in separating signal from noise, especially given the declining use of scenario-based approaches just as the instability rippling out from the Persian Gulf makes them most useful. For AR professionals, the lesson is simpler to state than to implement: if your programme is not actively improving your market intelligence, it is leaving one of its most valuable contributions on the table.
Duncan Chapple is co-director of the Analyst Observatory at the University of Edinburgh Business School and the author of the forthcoming B2B Strategic Analyst Relations: Turning Industry Influencers into Revenue Drivers (Kogan Page). He spoke at SCIP Intellicon in Barcelona.
