No Alternative To Megamistakes?

Ancient kings pointlessly punished their alchemists when they failed to create gold, rather than question their own avarice. Frequent criticism of industry analysts’ inaccurate forecasts is similarly wasted. Humanity’s systematic bias toward overestimating the pace of change is amplified by the deep challenges in forecasting discontinuous, high-growth markets. Forecasts will not get much better very quickly. Industry experts should stop relying on them, and should stop complaining when forecasts are proven wrong.

The latest false prophet is IDC: convicted by the blogosphere judges (The Register and ARmadgeddon) of misestimating the growth of Itanium. The chart here shows that IDC revised its estimates downwards over four years (the negative final trend line, in yellow, is The Register’s ‘hilarious’ addition). IDC’s forecast has changed dramatically over time, and such increasing accuracy as events unfold is not unusual.

Accurate forecasts of technology change are elusive for substantial reasons: not the least of which is vendors’ irresistible demand for them. Between 1984 and 1986 Conrad Berenson and Stephen Schnaars reviewed a sample of technology forecasts from the preceding 20 years: their article Growth Market Forecasting Revisited: A Look Back at a Look Forward was published in the California Management Review. Then, as now, many forecasts were later found to be wildly wrong: fewer than one in four had come true. Berenson and Schnaars argued that “many of these forecasts failed because they did not consider fundamental aspects of the markets they sought to serve but were enamored of the underlying technologies or unduly swayed by the ‘spirit of the times’.” Schnaars coined the word Megamistakes to describe them.

The large margins of error in these forecasts have common underpinnings. The top ten reasons are listed below.

  1. Forecasts come from forecasters. Many are advocates who have been seduced by the wonders of the technology. Their reasoning is biased, and they have a material interest both in the success of the technology and in meeting the demand to produce forecasts.
  2. Forecasts attempt to connect solutions, as they appear to be evolving today, to future problems. However, new technologies do not always meet expectations. Many problems can be partly resolved with current technologies. Furthermore, conceptions of future needs also change because the future is not determined.
  3. Planning gives false feelings of control. Data is unavailable from the future, so researchers aim to map out future trends through discussion with experts. Many of these experts share the same biases and interests as the analysts. Furthermore, Tyzoon Tyebjee has shown that the very act of participating in planning processes leads to overly optimistic forecasts.
  4. Statistical trend projection is unreliable because technology markets and human systems do not behave like physical objects. Systematically, forecasts over-estimate the pace of technology diffusion. Change normally happens slowly.
  5. Extrapolation is misleading, since early adoption patterns are deeply uneven and do not give a reliable guide to later adoption. Early growth rates tend towards the horizontal more quickly than most forecasts indicate. Accelerating trends rarely continue for long.
  6. Markets do not work like high school science projects. New Product Development, on which Berenson ‘wrote the book’, often uses conceptions of diffusion theory that build on the work of Gabriel Tarde, a sociologist who drew analogies from chemistry to the interaction of people. However, the S-shaped curves of chemistry presuppose physical laws that do not apply to innovations whose success should not be presupposed.
  7. Price performance of many new technologies is not strong enough to allow them to ‘cross the chasm’. Mag-Lev trains are a case in point: the benefits are not seen to outweigh the costs, impeding the adoption of the technology. Some new technologies offer no real benefit at all.
  8. Demographic, social, environmental and political trends appear more stable than they are. Long-standing trends can reverse or peter out. Generally, they are less predictable than we would like to admit.
  9. Few forecasts involve self-critique. Researchers need critics built into their process of analysis to challenge assumptions: to question technological wonder; identify cost-benefit advantages; discount and dampen extrapolations and precedents; and identify similar developments in other industries.
  10. Few forecasts are multi-methodological. There are many forecasting approaches, but most researchers use only one. Combining several is better.
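The extrapolation trap in points 4 to 6 is easy to see with a toy model. Below is a minimal sketch (all figures invented for illustration): adoption of a hypothetical technology follows a logistic S-curve, while a naive forecast projects the first year’s growth rate forward as a constant exponential. Within a dozen years the extrapolation overshoots actual adoption many times over.

```python
import math

def logistic(t, cap=100.0, rate=0.8, midpoint=8.0):
    """S-shaped adoption curve: growth slows as the market saturates."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

def naive_extrapolation(t, base, growth):
    """Project the earliest observed growth rate forward indefinitely."""
    return base * growth ** t

# Fit the naive forecast to the first year of 'observed' adoption.
y0, y1 = logistic(0), logistic(1)
growth = y1 / y0  # early growth rate, assumed (wrongly) to continue

for year in (0, 4, 8, 12):
    print(f"year {year:2d}: actual {logistic(year):6.1f}, "
          f"extrapolated {naive_extrapolation(year, y0, growth):8.1f}")
```

Both curves agree closely in the early years, which is precisely why the error is hard to spot at forecast time; the divergence only becomes obvious once the market approaches saturation.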

Our conclusion is that businesses need alternatives to forecasting. We recommend scenario analysis, which recognizes multiple possibilities and allows businesses to relate unfolding events to scenarios. There are some useful courses available in both New England and Britain, but for analysts to reorient, vendors will need to stop requesting single, high-confidence forecasts. It should be self-evident that forecasting will not improve, but it is easier to blame the forecasters.

Duncan Chapple

Duncan Chapple is the preeminent consultant on optimising international analyst relations and the value created by analyst firms. As SageCircle research director, Chapple directs programs that assess and increase the business value of relationships with industry analysts and sourcing advisors.
