Following up on "79.7% of all Gartner RAS statistics are made up", part of the misunderstanding between RAS analysts and vendor AR is down to the customer set surveyed, aka sample size and dispersion. RAS analysts (Gartner, Ovum, Forrester) need to recognise that they often have a much smaller sample size (amazingly, not everyone consults the Borg!) and that this sample may be distributed quite unlike a vendor's client set. Another factor is that RAS analysts tend to (re)act to client queries. End-user clients tend to pick up the phone to consult analysts when they have an issue, not when their application or piece of kit works just fine (which apparently happens). This explains why analysts are often the bearers of bad news and are reluctant to cover stuff that does what it says on the tin. One could also argue that such behaviour conveniently drives analysts' consulting revenue?
Another sample skew is geographical and vertical distribution: vendors have a far greater sample size, but obviously the dispersion is skewed towards the specific industries or geographies where that vendor is stronger. So if you're doing great in, say, AsiaPac, where analysts tend to be thinner on the ground, it's tough to get that recognised by analysts.
Finally, as SVG points out, it would be nice if RAS analysts could be a little more thorough with their research and avoid discovering "trends" that are often based on very few data points.
- VENDORS need to be open with analysts and share installed-base demographics and maintenance-call data (at a high level).
- RAS ANALYSTS need to stop annoying AR with "my client this, my client that" statements and work on their own demographics too; they also need to start covering products that are not generating enquiries.