This column by Efrem Mallach originally appeared in Computerworld, November 25, 1991. A few details - a date, some microprocessor names, the speed of a modem considered "fast" - were updated later that decade, but it is otherwise unchanged.
Much has changed in information technology and in the analyst/consultant world since this column appeared. It reflected an understanding of the world at that time but, like anything written about high-tech then, does not necessarily apply today.
According to esteemed expert Oscar Porkchop, the market for X (fill in the X of your choice) will grow by Y percent (not less than 25) until the year Z (two to five years from now). You can sample Oscar's wisdom for a few dollars in this paper, for a few hundred dollars in a research house report, or for tens of thousands by subscribing to an industry analysis service. The message is the same for all three sources.
Now try an experiment. Pull any five-year-old issue of Computerworld out of your dusty archives. See what Oscar and his ilk said then. Compare their forecast with what happened. How good was their crystal ball? Chances are, it wasn't wonderful.
That by itself is not a reason to criticize the Oscars of this world. Weather forecasters make mistakes, but we still wear galoshes when they say it will rain. The difference is that computer industry gurus are consistently wrong in the same direction: they overestimate the speed with which new developments will displace the tried and true. What's more, they do so for specific reasons that we can analyze, understand, and take into account the next time we read a computer industry forecast. The result: our forecast, based on the analyst's opinion but aided by our understanding, can be better than the analyst's was.
The three factors that make analysts anticipate change faster than it will really occur are analyst bias, survey subject bias, and misleading comparisons.
- Analyst bias arises because industry analysts have an economic ax to grind. Consider the publisher of a report on UNIX. The more important UNIX will be in the computer industry picture, the more people will buy the report. The more people who buy the report, the more money the analyst's firm makes, and the more raises, promotions, and the like the analyst gets. The analyst's economic self-interest motivates him or her to inflate the UNIX market. Most analysts do not consciously fudge their numbers. They do, however, give them the most favorable interpretation - usually without realizing what they are doing. The publishers of generic reports, which discuss overall industry trends without focusing on a specific product area, are subject to the same pressure. People need their services more in times of rapid change, less when change is slower. Who will pay $40,000 to hear "next year will be like last year and this year"? It is in their economic self-interest to predict rapid change. They react, albeit subconsciously, as predictably as Pavlov's dogs.
- Survey subject bias influences the numbers analysts use. Many market forecasts come from asking MIS managers "When will you start using hula hoop storage?" or "What fraction of the PCs you buy next year will use the 786 chip?" MIS managers are human too. They want to be seen as being with it, at the state of the art, using the most modern equipment and the latest technologies. Are 786-based PCs hot? By golly, that's what we'll buy! Put down 100% for me on that one, Oscar! Even in anonymous surveys these folks want to see themselves as current. They can't admit - even to themselves in the most private of whispers - that their shop might be falling behind. The facts of the matter, of course, are quite different. A lot of 486-based machines are still doing useful work; the Pentium is mainstream technology; most people still find a 250MHz Pentium II "fast." Dreams of 786s next year fall victim to the harsh realities of budget justification, lack of software, and other mundane issues. The MIS manager who says "Oh, yes, we'll be using UNIX next year" often means "we'll get a server" or "one of our engineers will have a Sun workstation," while the corporate IS shop sticks to MVS. A respondent's "We're using multimedia" must be translated into English as "we've learned how to put clip art on our overheads."
- Misleading comparisons result from comparing the R&D promise of a new technology with available commercial products using the old one. Veterans may remember how many years semiconductor memory was "around the corner" as a replacement for core. It did replace core eventually, of course. Nobody has bought core memory for ages. But the switch took far longer than the gurus, who compared semiconductor memory lab devices with real-world core memory products, thought it would. They forgot that, during the years it takes a new product to emerge from the lab, the previous technology will not stand still. By the time Year X semiconductor memory technology reached production levels, it was no longer competing with Year X core, but with Year X+3 core. Since then scads of other products have been in the same boat. Some, like bubble memory, never made it. ISDN is falling victim to the same syndrome, at least in part: many of the benefits that were trumpeted for ISDN when it was conceived years ago can now be had more simply and less expensively via other approaches, such as fractional T1. The reason is the same: older telecommunication techniques didn't stop evolving while ISDN got its act together. When ISDN was conceived, everyone "knew" 2400 bps was the theoretical maximum speed of dial-up phone lines. Today dial-up modems run at 56Kbps and counting. ISDN may still have a future, but it won't be as dramatic and as universal an improvement over the alternatives as it once promised to be. The same is true of any other technology - pen-based computing, object-oriented databases, whatever - that you read about: it may make it to the mainstream, but this blessed event will almost surely occur far later than the analysts are saying.
The next time you read a technology or product growth forecast, ask yourself - or the analyst, if you get the chance - these questions:
- Where did the raw data come from? What is the bias, conscious or subconscious, of that source?
- What is the analyst's, and the firm's, interest in creating a perception of rapid change and/or high growth? (If they say "none," either they are lying, which is unlikely but possible, or they're blissfully unaware of their own inner motivations.)
- Are lab results or pre-alpha software versions being compared with generally available products? How long will it take the new technology to reach real users? How far will the old technology have advanced by then?
If you can answer these questions - often, the act of asking will suggest the answers - you'll be on your way to a better forecast than your guru can sell you.
2 comments:
So yeah, great. Analyst bias and survey respondent bias exist. But the analyst is the one who predicted that the Pentium II would go the way of the dinosaur, as it did. If you followed an analyst's advice, you would have prepared to move away from MVS. This is a strange criticism looked at with 20/20 hindsight. It claims that analyst predictions are optimistic, yet in all the examples they have more than come to pass.
Full disclosure: I am an analyst so you can bet I will predict the continued success of analysts!
Richard, note that this is an article from Efrem Mallach and it's quite old. His recommendations are quite useful to understand the research process though...