An IIAR survey has prompted the Wikinomics blog to ask “who needs analyst firms anyway?” The article echoes some of the themes in Jonny’s article, In praise of open source analysis, which last year discussed the trend for analyst firms to give away their basic research. “Analyst houses,” he commented, “have been using this model successfully for years.”
In fact, most of those firms are not open source in the commonly accepted definition of the term. While “open source” is understood by some people to mean only “free”, few analysts have actually adopted the open source approach.
What is open source in the context of industry analysis? Jonny answers: “Analysts who give away their research for free (e.g. via blogs, free downloads, journals etc)”.
I am not so sure. For example, Internet Explorer is free, as is much of the research on aggregation sites (like alltheanalysts). However, the open source approach is defined by a number of attributes:
- Open to collaboration. The process of production is transparent and collective. The consumers are also co-producers.
- Open to development. The flaws and weaknesses of the work are totally open to discussion and correction. The source for the whole product is open. There are no secrets.
- Open to reuse. With open source, users are free to modify the product, or to take only the parts they want. They can incorporate it into their own work without needing the permission of the original author. They can take the product and host it on their own machines. They can engage in mass distribution of a modified work.
Taken together, this means something produced in an open source way carries with it a ‘warranty of quality’ that is quite different from conventional products. Because the collaborative and open development process is different, it’s quite possible that the open source process produces different products: not necessarily better or worse, but certainly different.
In contrast, it seems that the research methods and objectives of many analyst firms that distribute free research are often not open source.
For example, some firms take advantage of vendor sponsorship to fund some, or all, of their research. In many cases, the spirit and practice resemble a “patronage” model (similar to the way vendors often fund open source development). Under that arrangement, typically agreed at the outset, the analyst remains in control of the research design, execution and interpretation, as well as the final form in which it is distributed. One could argue that this avoids the risk of bias, from vendors or others. However, it is not open source.
Patronage brings its own issues, though. Patrons expect benefits. Jan Vermeer’s patron, Pieter van Ruijven, got paintings. Linus Torvalds helped his patron, Transmeta, to gain valuable IP and become the ‘most important company in Silicon Valley’. Neither Vermeer nor Torvalds could have acted against his sponsor’s interests and expected the patronage to continue.
For example, highly respected and independent firms have been criticised for accepting funding, as was the case with Microsoft-sponsored research into Linux. The patron clearly retains some authority: for example, the topics are specified by the vendors who sponsor the research. Nor are the methods open to the user who wants to see whether the research might work out differently if the source – the data and methods – were modified.
Open source research could involve collective and open discussion about, for example, survey tools and scope. That process is not open for free analyst research. Readers don’t know which approaches, questions and responses were left out of the final report. Furthermore, the reworking, modification and re-use of almost all free analyst research is not free.
Jonny made this comment about these analyst firms: “If it’s given away – it’s open source. Period. Does this liken to Aberdeen being ‘guns for hire’ – that’s a different argument entirely. The market will dictate that – if people do not believe [their] trustworthiness or independence (…) then they will go bust. If indeed they or anyone else can be bought I will celebrate their demise too. However, this isn’t the case at the moment and I hope it never becomes that way.”
That advice is worth listening to, even though I disagree with Jonny’s use of ‘open source’. All analyst firms, both those that give away their research and those that do not, can use open source techniques to develop a wider appreciation of their reliability.
Hi Duncan
That is a very clear post. Thank you for laying out your analysis and, in particular, for making the point that it is possible to separate funding from control when conducting research. This is an extremely important principle upon which our community research model is based. It allows quality advisory material that is both objective and properly funded to be made available to those without access to commercial research services. It also provides an alternative to the ‘big firm’ view of the world, which some consider to be driven more by a market-making than a community-oriented agenda.
In practical terms, the principle of ‘open patronage’ is now well accepted among larger IT and communications vendors, and it is encouraging that most understand the need for an analyst with the community interest at heart to remain in control of study design, analysis and reporting. We still, however, sometimes struggle with smaller vendors, who are more likely to take the view that if they are parting with money, they should be able to call the shots. As you can imagine, we therefore often turn away funding opportunities from potential sponsors that are not ready to work under the terms we require.
The other point I’ll make is that genuine community-oriented analyst firms must also conduct research in areas that are uncomfortable for vendors and are therefore almost impossible to obtain sponsorship for. The restriction, then, is that it is very difficult to investigate certain areas on a vendor-funded basis without running into conflict-of-interest issues. Anything that is product or vendor specific, for example, generally falls into this last category. This is why you see a mix of sponsored and unsponsored studies from us, the latter being an extremely important part of the equation. When you are taking your lead from the community rather than ‘the industry’, you have to accept that some things need to be funded internally.
I hope that makes sense and goes some way towards explaining at the next level down how tapping into funds from vendors and service providers does not mean selling your soul or integrity to sponsors.
Meanwhile, turning to the main thrust of your post, I agree with you that it is not appropriate to use the term ‘open source’ to refer to the methods and models employed by Freeform Dynamics and other firms who provide free research. While it is legitimate to refer to us as ‘community oriented’ (which is the term we use ourselves), creating the impression of an open source model at work here causes confusion and risks the real contribution we are making getting lost.
Best regards
Dale Vile
Managing Director
Freeform Dynamics Ltd
Completely agree that “available for free or no cost” does not equate with “open source.”
The analysts who choose to publish their research openly on their websites still do all the hard work of research, writing and vetting. They control the research and the conclusions. If they are smart, they will leverage the community through comments on their blogs to extend and improve their ongoing research. But have no doubt, the analysts control their work.
Dale makes some important points here, and perhaps he could contribute a guest post here to explain the community-oriented approach in more detail. There is a huge challenge for firms that want to leverage the full value of the user community in their research and, personally, I think that involves being open in ways that we can learn from the open source movement.
Carter is right that the analysts control their work but, as Dale suggests, can readers be certain who controls the analysts? If vendors specify the methods, and if some vendor-sponsored research is spiked by vendors who refuse to let research be published that reaches unpredicted conclusions, then there is a real risk that bad research will force out good (in much the same way that counterfeits erode the market for tangible goods).
Agree with both your points, Duncan – a) the need to be as open as possible and draw from experiences in the open source world where appropriate, and b) to operate in a manner that works around the risk of sponsor veto if they don’t like the conclusions of a study.
On the first, there are some practical considerations that we are working through at the moment to do with research ethics and the need to ensure that ‘derivative works’ do not misrepresent the source data (interpreting surveys safely is much more complex than many imagine because of the number of variables involved). We are starting by opening up our ‘source’ to selected analyst partners who have the appropriate experience and/or are responsible/aware enough to check back with us to ensure validity of use before a derivative is published. Whether it is possible to go completely open is something I have my doubts about, but there is certainly a will here to go as far as is sensible. Having said all that, I think we are pretty open already in the way we work, though it is true that some other firms are not.
On the other matter of running a tight and ethical ship and avoiding the problems of spiking, etc., we are pretty well geared up in this area. The simple answer is that all data generated during our community research studies belongs to us, and we retain copyright over all words written in association with it. In the case of a disagreement, we can publish anyway, without the sponsor logo, with a simple note saying that while XXX funded the underlying research (in the interests of declaration), the analysis presented may not necessarily reflect the views of the sponsor. More to the point, though, the larger vendors with whom we primarily work on the community stuff are genuinely not interested in misleading anyone, particularly on the kind of stuff we investigate (best practice, etc.). I can imagine it being an issue for firms who take sponsorship for studies relating to the use of specific products or vendors, but we just don’t go there.
Anyway, happy to contribute a guest post that provides a more structured walk-through of not just the community research model, but also some of the thinking and experimenting (including mistakes, challenges and workarounds) that have been involved in defining and refining it.
Very good post and follow-up from Dale et al. I would just like to echo the point that open source does not mean free; even in the software community, somebody pays for it at some point.
Dale,
Just one quick point – I love the way in which you’re sharing research with partner firms. There’s huge value in getting other analysts to look at the data and at the conclusions. Alan and I remember Ovum’s review cycles, which were not always fun but certainly tested our findings.
Duncan.
I think I’ve had at this topic with you good folks before. Open source is open source, period. Free of charge does not equate to open source. Free to do what you want (within ethical boundaries) for commercial or non-commercial purposes does equate to open source.
As analysts with opinions, it’s very hard to accept vendor money and remain 100% objective. It’s not a criticism…just an observation.
[…] [1] fake open source (vendor-funded research given away, not produced using an open source approach), in which extra value is generated for the vendor who therefore pays (Read ‘Is free analyst research really “open source”?’); […]