Scientific averages of wild guesses

Techtel’s IT spending forecasts came our way today. It’s a fascinating piece of empiricism. Techtel’s forecast averages estimates, made by a panel of interested individuals, of changes in the IT environment [1]. What Techtel does is certainly easier than tracking spending itself, but the results are worthless. Their figures for 2004, for example, show average growth of 17.5 percent, trending up to 21.2 percent in the final quarter. IDC’s research, whose findings are similar to those of most analysts, shows growth of around one third that amount.

According to a Bitpite study of research by different analyst firms, the spread of 2004 forecasts was between zero and eight percent. Somehow Techtel’s sample returned results well outside the range of those collected forecasts; indeed, their figure is more than double the upper limit [2]. Because their sample is not especially small, my suspicion is that there must be a huge selection bias in the way they choose whom to survey. If that is the case, then Techtel’s samples cannot produce statistically significant results, even if they were doubled or tripled in size.

Techtel needs to think carefully about the sampling and selection techniques it is using to form its panels [3]. Techtel’s sample is used widely, for example to provide the vast majority of those questioned for an annual technology buyers’ study used by some of the largest US-headquartered technology firms. Their clients, and their clients’ clients, deserve better.

[1] The method involves asking buyers whether their spending will fall or rise, and then subtracting the former from the latter to measure the size of the gap. The resulting number is called ‘Net Increased Spending’, which reflects intended spending. A similar method is used to calculate something they term a ‘Purchasing Index’, which reflects ‘actual demand’ for key market segments. These ‘actual’ purchasing numbers are also surprising: the Index shows ‘actual’ ‘Enterprise Purchasing’ rising 37.5% from 2003 to 2004. Considering the far more modest change in real spending, it is hard to see what insight these data give.
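The method described in footnote [1] can be sketched in a few lines. This is my reconstruction from Techtel’s published description, not their actual procedure, and the panel answers below are invented purely for illustration:

```python
# Sketch of a 'Net Increased Spending'-style metric: the share of panellists
# expecting a rise, minus the share expecting a fall. Note it measures the
# balance of opinion in percentage points, not the size of any spending change.
def net_increased_spending(responses):
    """responses: list of 'rise', 'fall' or 'flat' answers from panellists."""
    n = len(responses)
    rise = sum(r == "rise" for r in responses)
    fall = sum(r == "fall" for r in responses)
    return 100.0 * (rise - fall) / n  # percentage-point gap

# Invented panel: 45 expect a rise, 35 flat, 20 a fall.
panel = ["rise"] * 45 + ["flat"] * 35 + ["fall"] * 20
print(net_increased_spending(panel))  # 25.0
```

A gap of 25 points tells you nothing about dollar growth: it would be identical whether the optimists each planned a 1% or a 40% increase, which is why comparing such a figure with IDC-style spending forecasts is misleading.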
[2] One could say, in their defence, that there could be some small print (which might not be publicly available) explaining that their forecasts, and ‘actual’ numbers, are not measuring the same thing as the analyst houses. But, frankly, is that the impression that the presentation of these data gives? Imagine if one argued that the data were consistent: one could say both that overall dollar spending rose around 5.8%, and that 37.5% of enterprises were spending more, because on average those enterprises who were spending more had increased their spend by around 15.5% (which is 5.8 divided by 0.375). This could give a totally different impression than Techtel’s current presentation of the data, and would be a deeply counter-intuitive argument. It seems much more probable that their research method is either broken or misleadingly explained.
[3] Or, it needs to be much clearer about what its research is, and is not, measuring. To say: ‘our survey shows the trends in a large sample of wild guesses about IT spending’, rather than suggesting the data show ‘actual’ and forecast spending.

One reply on “Scientific averages of wild guesses”
  1. Duncan Chapple says:

    I had a nice comment on this post today from a competitor to Techtel:
    “I’m struggling to understand quite what Techtel has actually done (how many interviews; who with; how many organisations; size etc, etc). The wide disparity does seem to be of concern. Plus we would always talk about ‘the research shows’ or ‘the sample suggests’ rather than assuming the data analysis offers accurate insight into the universe. B2B research is always an inexact science – [it is better to] talk more about trends and pointers rather than ‘forecasts’. Although we do think, by and large, the more interviews you conduct (by whatever methodology), the more you’re likely to get ‘accurate’ readings. Although there comes a point where quantity and increased cost doesn’t impact significantly the findings. That’s when you would stop. The substantial point is that research needs to be ‘random’ but inevitably there are ‘professional respondents’ who will spend time answering surveys. The line between ‘panel’ and ‘bias’ is thin and ultimately every client is looking for fast, accurate, efficient research without having to spend too much.”
