In “My customer is not your client, or why RAS analysts need to go beyond DRE” ARonaut makes a number of excellent points about the sample size and skewing of data points. Here are a few more issues to consider when using analyst commentary based primarily on conversations with IT managers:
Small sample size on any one topic – In its 2004 annual report, Gartner claimed that it receives 230,000 client inquiries per year. That sounds like a large number, but how statistically valid is it? The average number of end-user data points probably runs from under 400 to rarely more than 700 per analyst per year, or about eight to 15 per week. Remember that an analyst answers questions on a variety of issues. So, of those eight to 15 inquiries per week, how many concern a specific vendor or its products? A couple, maybe a little more or a little less? Forrester has an even smaller sample size because its business model and its client base are smaller.
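To see how quickly the headline number shrinks, here is a back-of-the-envelope sketch. Only the 230,000 total comes from Gartner's annual report; the analyst headcount and topic-coverage figures are assumptions for illustration.

```python
# Back-of-the-envelope arithmetic. Only the 230,000 total is Gartner-reported;
# the headcount and topic count below are assumed for illustration.
total_inquiries = 230_000   # Gartner's reported annual client inquiries (2004)
analysts = 500              # assumed analyst headcount
topics_per_analyst = 10     # assumed number of distinct topics an analyst covers
weeks_per_year = 52

per_analyst_year = total_inquiries / analysts           # 460, inside the 400-700 range
per_analyst_week = per_analyst_year / weeks_per_year    # about 8.8 per week
per_topic_week = per_analyst_week / topics_per_analyst  # under one per topic per week

print(f"{per_analyst_year:.0f} inquiries per analyst per year")
print(f"{per_analyst_week:.1f} inquiries per analyst per week")
print(f"{per_topic_week:.1f} inquiries per analyst per week on any single topic")
```

Under these assumptions, the 230,000-inquiry headline works out to less than one data point per analyst per topic per week.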
Limited to firm’s clients – Another issue is that the data points are limited to clients of the analyst firms. Gartner’s CEO has stated that Gartner reaches only 15% of the potential end-user market. Does Gartner’s or Forrester’s client base represent a statistically valid sample of the overall IT buyer market?
Further limited to advisory seat holders – Another limitation is that only advisory seat holders can conduct inquiries with analysts. Thus, there might be hundreds of IT managers at a company, but only a handful actually talk with the analysts via inquiry. Can the analysts hear the full story when they hear from only a subset of the IT managers?
Limited still further by caller self-selection – The advisory analysts passively sit and wait for clients to call them. Thus, their data points come from self-selecting callers. What motivates these callers? Probably the most common reason is that they are working on a project (e.g., buying an ERP system). The second most common reason is likely that they are having a poor experience with a vendor and want to check with the analyst to see whether other companies are having the same experience and what they should do to correct the situation. It is unlikely that IT managers who are happy with their vendors are calling up the analysts simply to compliment them. As a consequence, analysts hear mainly from a vendor’s disgruntled customers, which is not a statistically valid sample.
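A small simulation makes the distortion concrete. Every number here is invented for illustration: assume 90% of a vendor’s customers are genuinely happy, unhappy customers call an analyst 30% of the time, and happy ones only 2% of the time.

```python
import random

random.seed(42)

TRUE_SATISFACTION = 0.90  # assumed: 90% of customers are actually happy
P_CALL_IF_UNHAPPY = 0.30  # assumed: unhappy customers often call to complain
P_CALL_IF_HAPPY = 0.02    # assumed: happy customers rarely call to compliment

customers = 10_000
caller_is_happy = []
for _ in range(customers):
    happy = random.random() < TRUE_SATISFACTION
    p_call = P_CALL_IF_HAPPY if happy else P_CALL_IF_UNHAPPY
    if random.random() < p_call:
        caller_is_happy.append(happy)

observed = sum(caller_is_happy) / len(caller_is_happy)
print(f"True satisfaction rate:              {TRUE_SATISFACTION:.0%}")
print(f"Satisfaction rate the analyst hears: {observed:.0%}")
print(f"Size of the analyst's 'sample':      {len(caller_is_happy)}")
```

With these made-up rates, a vendor whose real satisfaction is 90% looks to the analyst like a vendor with roughly 35-40% satisfaction, purely because of who chooses to call.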
Limited once more because of no follow-up by analysts – Advisory analysts rarely follow up proactively with end users to see how the vendor eventually resolved an issue. Maybe within days or weeks the vendor corrected the situation to the customer’s satisfaction, but the analysts never hear about that. Thus, the analysts are left with only negative data points.
No data warehouse, knowledge management or business intelligence tools – None of the advisory analyst firms have invested in infrastructure to capture, store and analyze the information that surfaces during inquiries. As a consequence, the advisory analysts rely on their memory to store this information, and we all know how faulty memory can be. In addition, this means that when an advisory analyst leaves a firm, all those data points leave with them.
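As an illustration of how little infrastructure this would take, here is a minimal sketch using SQLite. The table and column names are hypothetical, not any firm's actual system; the point is that once inquiries are captured in even a simple structure, the data points survive an analyst's departure and can be queried.

```python
import sqlite3

# Hypothetical inquiry-capture schema -- a sketch, not any firm's actual system.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE inquiry (
        id          INTEGER PRIMARY KEY,
        taken_on    TEXT,     -- date of the call (addresses the time-frame question)
        analyst     TEXT,     -- who took the call
        client_firm TEXT,     -- the company, so repeat callers can be de-duplicated
        vendor      TEXT,     -- vendor or product discussed
        sentiment   TEXT,     -- e.g. 'negative', 'neutral', 'positive'
        resolved    INTEGER   -- follow-up flag: did the vendor fix the issue?
    )
""")

# With inquiries stored, a question like "how many *unique* companies are behind
# the data points on vendor X?" becomes a one-line query instead of guesswork:
count = conn.execute(
    "SELECT COUNT(DISTINCT client_firm) FROM inquiry WHERE vendor = ?",
    ("VendorX",),
).fetchone()[0]
print(f"Unique companies with data points on VendorX: {count}")
```

Note that the columns map directly onto the validity questions recommended below: unique companies, time frame, and vendor follow-up.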
Second-hand information – The advisory analysts claim that the number of data points they work with is actually much larger because they pool data from the entire team. However, without a knowledge management infrastructure, this pooling is little more than sharing stories around a campfire. Over time, significant errors can creep into the shared pool of information as second-hand information is passed on from analyst to analyst.
Bottom line: The primary information that advisory analysts have is not statistically valid because of the limitations mentioned above and in ARonaut’s post. For instance, a vendor might have thousands, hundreds of thousands or even millions of satisfied customers, but the analysts are relying on information from only a few dozen or fewer disgruntled customers. While inquiry-based information can provide interesting insights, it cannot take the place of fact-based research from surveys and other systematic research.
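Even setting the selection bias aside, a few dozen data points carry an enormous margin of error. A rough 95% margin of error for an estimated proportion is about 1/sqrt(n); the sample sizes below are illustrative.

```python
import math

# Rough 95% margin of error for an estimated proportion: about 1/sqrt(n).
# The sample sizes are illustrative.
for n in (25, 50, 400, 230_000):
    moe = 100 / math.sqrt(n)
    print(f"n = {n:>7,}: margin of error about +/- {moe:.1f} percentage points")
```

A few dozen unbiased data points still leave 14-20 percentage points of uncertainty; 230,000 buys a tiny margin of error only if all of those inquiries concern the topic at hand and the callers are representative, which is exactly what the points above dispute.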
Recommendation: All clients of the advisory analysts, end user and vendor alike, should take responsibility for checking the validity of analysts’ data from client inquiry. Questions that could be asked include:
1. Pin the advisory analyst down on how many data points they specifically have on the relevant topic. Don’t let the analysts be vague or talk about how many inquiries they handle overall – this could be masking a very small set of relevant data points.
2. Pin the advisory analysts down on how many different companies are represented in the sample. There could be a number of inquiries that the analyst can point to, but the number of unique companies could be much smaller because a few end users make multiple calls.
3. Have the analysts discuss the time frame over which the data points were gathered. There could be a problem if the data points are out of date or too recent.
4. Have the analysts discuss the characteristics of the companies that were the source of the inquiries. Are the companies similar to your company (if you’re an end user) or your target market (if you are a vendor)?
5. Ask how much follow-up the analysts have done to determine how effectively the vendor responded to the situation.
What other questions or issues should be raised by advisory analysts’ clients?
Monday 12 December 2005
More on Analysts’ Data Sample Size
8 comments:
Great post SVG, it's really important to keep analysts honest!
Is it too long?
Busy writing up response to other questions that analyst clients should be asking. Too bad you don't have trackback enabled...
This is an accurate description of an analyst's daily load of inquiries. Good advice for a vendor to understand. Get your happy customers that intersect with that 15% who are Gartner clients to schedule inquiries!
As a former analyst, I feel that this misses a few points. Analysts are not only, or even mainly, conducting their research through client inquiries. We source case studies and other research data in a number of ways. The idea that analysts simply write up their phone log is mistaken.
However, for most client inquiries a personal data-bank of two or three client discussions per day - on top of ongoing research - is very powerful. The idea that these data are not captured is odd. Firstly, it is not self-evident that these data can be comprehensively captured in a normalised structure. However, Forrester actually provides a service based on such data: if one firm is selling such a system, then clearly at least one firm has it, and it is probable that others have internal systems.
Furthermore, sample sizes are supposed to be samples. As a statistician, I must point out that 230,000 inquiries is a massive number. The risks of systematic failure with such a large sample are very small.
Duncan.
I have written a bit more about this...
http://analystrelations.blogspot.com/2005/12/armadgeddon-and-ostrich.html
Hi James,
Blogger unfortunately does not support trackback - but see below for links to this post.
Duncan,
See our responses to your post on your blog!
http://analystrelations.blogspot.com/2005/12/armadgeddon-and-ostrich.html
We meant http://analystrelations.blogspot.com/2005/12/armadgeddon-and-ostrich.html