A new paper from Berlecon is an important broadside against much of the rest of the analyst industry. The paper, by Dr. Andreas Stiehler, is only available in German, so I thought I’d attempt a translation.
Of course, also look at our posts on statistics, biased samples and megamistakes.
–oo00oo–
Hunger for numbers is substantial in the ICT market. Whether in technical, board or customer meetings, ambiguous percentages, packed into gracefully designed diagrams, are basic equipment. This is understandable, because market knowledge is a competitive factor in the dynamic IT field. Information that supports the company’s viewpoint is hot property. Furthermore, managers must make strategic decisions and plan investments against a backdrop of rapid technology development and fast-changing customer needs.
As there are so many mouths hungry for data, some cooking is worthwhile. Thus ICT analysts, advisers and market researchers offer an important source of information. For ICT providers, market numbers are a suitable means to better understand the needs of their target group, and a lever with which to obtain press attention and generate new business leads. In view of the multiplicity of number producers, it is no surprise that the technical literature is thickly filled with up-to-date statistics. However, whoever wants to avail themselves of this rich buffet should treat the data with respectful caution.
Alongside many high-quality researchers there are also some ‘outliers’, whose research is useless for serious investment calculations, strategy decisions or market analyses and – to remain in the language of cooking – is inedible, actually presenting a case for strict supervision of the ingredients. Such a ‘Food Standards Agency‘ does not yet exist, so enterprises should deal more critically with the numbers on offer. To do so, they do not have to complete a multi-year statistical study. To separate the wheat from the chaff, it is often sufficient to examine a few indicators in detail.
Of course, statistics are never perfect. Market research is an art of the feasible that – as in every other business – must weigh benefits against costs. It is all the more important, therefore, that providers of study results openly give background information to those who want to use them. Whoever broadcasts to the world that 27.276 per cent of German enterprises will come to use the XYZ technology, or that the market will grow by 2.32%, must say openly how they arrived at that result.
The attributes of such statements include the size of the sample, its composition and representativeness, and the questioning methodology. If these data are missing, the published statistics are not worth the paper they are printed on. Of course, all of that will not always fit into a snappy press release. It is, however, a mandatory part of any respectably provided study, whether offered for purchase or free download, because that information determines the usefulness of the market numbers.
For example, the sample size is a primary indicator of the reliability of the results. Someone who draws conclusions for the total market on the basis of 20 respondents (and marks these statements as statistically significant, with percentages to decimal places) either acts with gross negligence or is crassly using the numbers to sell. Twenty responses are too few to draw any solid conclusions about a market segment. In this case the researcher should report the statistical uncertainty and take it into account in the analysis and presentation of the results.
Of course, you are not always being deceived when the sample size is not an enormous number. Statements about individual groups (e.g. enterprises of a certain size in a country) do not need many thousands of observations to be sufficiently reliable. Many market researchers use a minimum sample size of 50-100 observations for surveys. Beyond that, naturally, there is no upper limit. The gain in confidence (i.e. the reduced variance of the sample mean around the true mean) from a larger sample is, however, rather small – particularly when set against the extra cost of the questioning.
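The diminishing return described here can be made concrete with a quick margin-of-error calculation (a minimal sketch; the 95% z-value and the worst-case proportion p = 0.5 are standard textbook assumptions, not figures from the paper):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a proportion
    estimated as p from n independent responses."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) at various sample sizes:
for n in (20, 100, 1000):
    print(f"n = {n:4d}: ±{margin_of_error(0.5, n):.1%}")
```

With 20 respondents the worst-case margin is roughly ±22 percentage points, so quoting results to decimal places is meaningless; and going from 100 respondents (about ±10 points) to 1,000 (about ±3 points) narrows the interval only modestly relative to the tenfold cost of questioning.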
Even if study providers speak of surveys of several thousand enterprises, caution is still appropriate in many cases. It is a popular practice in the statistics ‘grey market’ to quote the number asked, not the number of answers: “a survey of x-thousand German enterprises shows that…”. On closer inspection, we frequently find that several thousand enterprises were indeed asked, but only a small fraction of them answered. That kind of presentation of results has nothing to do with respectable statistics.
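The gap between ‘asked’ and ‘answered’ is easy to quantify (a sketch with hypothetical numbers – the 5,000 invitations and the 4% response rate are illustrative assumptions, not data from the paper):

```python
import math

invited = 5000          # hypothetical headline: "a survey of 5,000 enterprises"
response_rate = 0.04    # assumed response rate; only the answers count
respondents = round(invited * response_rate)

def worst_case_margin(n, z=1.96):
    """Worst-case (p = 0.5) 95% margin of error for n responses."""
    return z * math.sqrt(0.25 / n)

# The precision is driven by the 200 answers, not the 5,000 invitations.
print(f"{respondents} respondents: ±{worst_case_margin(respondents):.1%}")
print(f"{invited} respondents would give: ±{worst_case_margin(invited):.1%}")
```

The headline sample of 5,000 suggests a margin near ±1.4 points, but the 200 actual responses support only about ±7 points – which is why quoting the number asked rather than the number answered is misleading.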
As a further indicator of the usefulness of the numbers, it is advisable to put the survey design under the magnifying glass, because the quality of the responses is only as good as the quality of the questions, the representativeness of the sample, and the questioning methodology. The firm offering the results should answer critical questions:
- To what extent are the people asked able to answer the questions posed?
- To what extent is the composition of the sample representative with respect to characteristics such as business size and industry?
- And to what extent is the questioning procedure suitable to support the quality of statements offered?
Getting qualified information from real decision makers, especially from groups with little interest in participating, is hard work and rightly costs money. Online opinion polls, now widely used, are a suitable means to save time and money. They are, however, hardly suitable as the basis for respectable investment plans or for benchmarking, because the collection method offers little control over the composition of the sample and the quality of the answers.
Qualitatively bad statistics create value neither for the customers nor for the providers of market numbers. First, the risk increases that decisions are made on the wrong basis and money is lost. Second, bad study quality can have a lasting negative effect on the perception of future studies: a short-term gain in press attention can be followed by an equally fast long-term loss of reputation.
Our core business is to help enterprises in the conception, execution and evaluation of statistics. A loss of reputation in this field would endanger our business. We stand for quality in statistical collection as well as the honest and responsible handling of information.
–oo00oo–
P.S. Thanks to Nicole Duft at Berlecon for her comments on this translation. German isn’t my mother-tongue, so please regard this as a précis rather than an authoritative translation. (German-speakers: let me know if you can see ways to improve it.)
It is important not only to educate AR on the science (rather than the art) of industry analysis; Longhaus also finds that end users benefit from this type of information.
The reality is that many CIOs and ICT decision makers are aware of the risks in relying on stats from industry analysts (and vendor market research), but do not know the simple tests (such as those shown above) that can be used to determine the level of trust a particular report deserves.
Great summary. Well done.
Sam Higgins, Research Director (Longhaus).
[…] at times the firm is quite belligerent against its competitors’ data quality: I translated one of these broadsides, and it gives a good flavour of the firm’s pride. This evidence-based approach has allowed it […]