We're entering what I like to think of as the silly season, the time when the major tech analysis companies issue their quarterly reports on mobile device sales. IDC already released its report on Q4, while Canalys is due any day. These reports always generate press coverage; some of it insightful, most of it just repeating whatever the analysis companies said.
Once all the reports are out, I'm going to look through them and try to give some comments on what I think they mean. But in the meantime, I thought it would be good to post a note on the numbers themselves – how they're gathered, what they mean, and what to watch for.
First, let's differentiate between shipment reports and forecasts. The shipment reports typically discuss what happened in the previous calendar quarter, and are issued about a month after the quarter ended. Forecasts are generally issued about once a year, and predict what sales will be in the next five years or so. We'll do forecasts first.
"You don't actually use these things to make business decisions, do you?"
--A horrified industry analyst, when she realized why we had requested the latest forecast
I think tech industry forecasts are worthless. Completely worthless. That may sound harsh, but think about it for a minute – can you reliably predict the future? Can anyone you know do it? If anyone could predict the future, don't you think they'd get rich off the stock market rather than working at an analysis company? I once asked a very senior manager at one of the biggest tech analysis firms how they created their forecasts. He laughed. "The process involved pizza, beer, and a dart board."
Here's an example of how bad tech forecasts can be: In the handheld market, the big analysis firms once predicted that handheld sales would be 60 million units a year by now. Instead they're 20 million. That's a margin of error of about 200%. If NASA forecasted that badly, Neil Armstrong would have landed in Arkansas.
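If you want to see how I'm getting that error figure, here's a minimal sketch of the arithmetic, using the round numbers above and measuring the overshoot relative to actual sales:

```python
# Arithmetic behind the "about 200%" claim above; figures are the round
# numbers quoted in the post, not exact analyst data.
forecast_units = 60_000_000   # what the big firms predicted annual handheld sales would be by now
actual_units = 20_000_000     # roughly what the market actually delivered

# Overshoot measured relative to actual sales
overshoot = (forecast_units - actual_units) / actual_units
print(f"Forecast overshoots actual sales by {overshoot:.0%}")  # prints 200%
```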
Since the forecasts are useless, why do the analysis companies create them? Two words: publicity, and money. If you create a forecast and issue a press release about it, a lot of reporters will write articles on it, all of them crediting your company and publicizing its name. The tech websites will repost your press release, and even more people will read about your work.
The other reason to do a forecast is because companies will buy it. If you work in a company and you're tasked with creating a business plan, your management will insist on having a forecast as a part of it. No one believes an internally-created forecast, because the employees are assumed to be biased. So you buy an external forecast. If management is trying to be especially conscientious, you'll be asked to buy several forecasts for due diligence.
I've seen companies assign teams of senior people to spend months massaging and cleaning and polishing the industry forecasts, so they can be prepared for use in the business plan.
It's all a waste of time. When you start with a cow pie, no matter how much you polish it, all you'll get is a polished cow pie. Companies would be much better served by getting together their brightest people, asking them to make a guess, and then writing that down. Chances are the folks in your company are more in touch with the market than the analysts (you talk to a lot more of your customers than they do).
Now let's talk about the quarterly shipment numbers
To be fair, I should acknowledge that the quarterly numbers are a lot more accurate than the forecasts. But there are still major problems. No one, absolutely no one, knows what's really happening in mobile device sales. The market's too complex, and some critical numbers simply aren't available. For example, Dell won't tell anyone what its precise unit sales are by product line. It's also notoriously difficult to get phone shipment numbers out of the operators. Sometimes that seems to be because they view the information as confidential, and sometimes it seems to be because the operators themselves don't have very good inventory tracking programs for their retail stores (they care a lot more about how many service plans they sell than which phones).
The most accurate sales numbers generally come from two companies, NPD and GfK. Both of them directly track sales of electronic devices through retailers (NPD in the US, GfK in Europe and several other countries). So, for example, with the NPD numbers you can find out exactly how many handhelds were sold last week at retail in the US. You'll get a database that includes brand, model, average selling price, and a lot of other information. But those numbers won't include the things NPD can't track -- Dell, direct sales by any manufacturer to a business, and smartphones and other devices sold through the operators.
You'll also never see the NPD and GfK numbers in public. They're pretty close to monopolies in sales tracking, and they charge enormous sums of money for their data. Nothing is given away free.
That means most of us are left looking at the quarterly shipment numbers compiled by companies like Gartner, IDC, and Canalys. At all three companies, the methodology is basically the same – they call every hardware vendor, ask them how many units they shipped in the quarter, and total up the numbers. The people who make the calls and compile the numbers are generally honest and hard-working, and they're doing the best they can with the (limited) resources available to them. But there are several things you need to be aware of when reading their numbers:
--The vendors can lie. It's hard for a relatively small company like Palm to lie about its unit shipments; you can just take Palm's quarterly revenue and divide it by whatever you think the average selling price is for its devices. But for a large company like Dell or HP, mobile devices are such a small percentage of their overall sales that there's no independent way to check their numbers. I am not saying that Dell or HP fabricate the numbers they report to the industry analysts; they're pretty conservative about their reputations, and US stock market regulators frown on a company lying about anything that might move its stock price. But the rules are looser in some other parts of the world, and I've heard persistent rumors of other companies cooking their shipment numbers to make themselves look good in the quarterly reports.
If you think about it, people working in tech companies have a very strong financial incentive to mislead the share tracking companies. It's fairly common for sales and marketing managers to have performance bonuses tied to market share. Guess how share is measured. It's like asking an eight-year-old to fill out his own report card.
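To make that revenue-divided-by-ASP sanity check concrete, here's a minimal sketch; every figure in it is a made-up placeholder, not any real vendor's numbers:

```python
# Sanity-checking a vendor's claimed unit shipments against its reported revenue.
# All figures below are invented placeholders for illustration only.
reported_quarterly_revenue = 300_000_000   # device revenue from the public earnings report (hypothetical)
assumed_average_selling_price = 300        # your own estimate of the ASP per device (hypothetical)
claimed_shipments = 1_100_000              # what the vendor told the analyst firm (hypothetical)

implied_shipments = reported_quarterly_revenue / assumed_average_selling_price
gap = (claimed_shipments - implied_shipments) / implied_shipments
print(f"Implied shipments: {implied_shipments:,.0f}")            # 1,000,000
print(f"Claim exceeds the revenue-implied figure by {gap:.0%}")  # 10%
```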
--The numbers measure shipments into stores, not sales to customers. When a big company like Nokia ships a new smartphone, it rushes hundreds of thousands of units into stores and distribution warehouses. Those first shipments typically happen in a single quarter, and can cause a company's share to rise dramatically in the quarter of first shipment, and then plummet the next. The analysis companies and press rarely explain this, which is why you'll get dramatic press reports that a particular handheld or smartphone company jumped from nowhere to become a top five vendor in a single quarter. Ask yourself if that's really possible – do people really change their buying preferences that quickly? And does it happen every quarter or two?
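Here's a toy illustration of how that channel-fill effect shows up in the share numbers; all of the figures are invented:

```python
# Toy example of how channel fill distorts quarterly "share"; all figures invented.
# The vendor's end customers buy roughly 500,000 of its devices every quarter,
# but in Q2 it launches a new model and pushes an extra 1,500,000 units into
# the channel, then ships almost nothing in Q3 while that inventory sells down.
vendor_shipments = {"Q1": 500_000, "Q2": 2_000_000, "Q3": 200_000, "Q4": 500_000}
total_shipments  = {"Q1": 10_000_000, "Q2": 11_500_000, "Q3": 9_700_000, "Q4": 10_000_000}

for quarter in vendor_shipments:
    share = vendor_shipments[quarter] / total_shipments[quarter]
    print(f"{quarter}: reported share {share:.1%}")
# Q2 looks like a meteoric rise (17.4%) and Q3 like a collapse (2.1%), even
# though end customers bought about the same number of devices all year.
```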
--The third problem with the quarterly numbers is that they don't report shipments by model (because companies refuse to give out their model-specific shipments). So all of Nokia's smartphone sales are reported as a single lump, or at best are cut into a couple of not-very-helpful categories such as flip phone vs. candy bar phone. This makes it incredibly difficult to figure out what's happening in different market segments.
--The fourth problem is that different analysis companies cut the mobile market differently. To Gartner, a RIM Blackberry is a PDA but a Palm Treo is a smartphone. To IDC and Canalys, both are smartphones. This makes for huge disagreements about market size. Gartner says the PDA market is growing at a healthy clip; IDC says it's dropping steadily. (By the way, if you want to hear some catty comments, ask Gartner or IDC or Canalys to tell you what they think of the way the others classify RIM shipments.)
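To see how much the definitions matter, here's a tiny sketch; the device list and unit figures are invented, and the two taxonomies are just loose stand-ins for the Gartner-style and IDC/Canalys-style splits described above:

```python
# How identical shipment data yields different "PDA market" sizes under
# different category definitions. Devices and unit counts are invented.
shipments = {
    "RIM BlackBerry": 1_000_000,
    "Palm Treo":        600_000,
    "Palm Tungsten":    800_000,   # a keyboard-less classic handheld
}

# Loose stand-in for the Gartner-style split described above
taxonomy_a = {"RIM BlackBerry": "PDA", "Palm Treo": "smartphone", "Palm Tungsten": "PDA"}
# Loose stand-in for the IDC/Canalys-style split
taxonomy_b = {"RIM BlackBerry": "smartphone", "Palm Treo": "smartphone", "Palm Tungsten": "PDA"}

def market_size(taxonomy, category):
    return sum(units for device, units in shipments.items() if taxonomy[device] == category)

print(f"PDA market under taxonomy A: {market_size(taxonomy_a, 'PDA'):,}")  # 1,800,000
print(f"PDA market under taxonomy B: {market_size(taxonomy_b, 'PDA'):,}")  #   800,000
```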
Given all of these challenges, why do the analysis companies bother to compile the quarterly numbers? Once again, publicity plays a role. The quarterly score-keeping press releases get lots of coverage. (I just did a Google search for "+Gartner +PDA +share". It produced 243,000 hits.)
But also, I have to admit it – as bad as the quarterly shipment numbers are, they're better than nothing. If you know their flaws, you can try to correct for them, and sometimes you'll be able to dig out a few insights. That's what I'm hoping to do after the new round of shipment press releases comes out.
How to read tech analysts' shipment reports and forecasts
5 comments:
"That's a margin of error of about 200%. If NASA forecasted that badly...."
"People sometimes make errors," said Dr. Edward Weiler, NASA's Associate Administrator for Space Science. ..... one team used English units (e.g., inches, feet and pounds) while the other used metric units for a key spacecraft operation"
1 meter = approx 3 feet. 200% error....
Michael, I think you're being slightly disingenuous here. There are plenty of different categories of forecast, with a wide range of methodology and applicability.
Many technology market forecasts cannot be "absolute" - and most are not given that way by analysts in their full reports. As you know, most headline numbers used in press releases do not include information on assumptions, sensitivities, methodologies, and so forth. Speaking to the analyst gives much more colour on these issues.
In particular, in a concentrated industry such as PDAs, it is clearly impossible (and self-referential) to second-guess what decisions might be made on the basis of your own forecasts. And it's even more difficult to predict the individual competence of a handful of product managers to execute on those decisions.
What is true, however, is that "not all analysts are created equal". Some just look at historic sales figures, or what the current vendors (and customers) predict will happen, and then massage those to create a reasonably-coherent future view. The reason my clients chose to use my numbers, and qualitative predictions, is that I try to cover a broader spectrum of variables than my peers, looking at seemingly-tangential sources of upside or downside. Sometimes these are technical, sometimes commercial. Unfortunately, unlike equity analysts, there are no published rankings of industry analysts.
One thing I can say, having been a director of a major analyst group in the past - you're absolutely right that money is a driver. Numerous times in the past, I thought a particular concept was a loser. I would have loved to have published a report that said "The market today is zero. In three years' time, we forecast it will still be zero." But almost nobody would have bought it.....
Nowadays, as an independent analyst (especially as I rejoice in being contrarian and Disruptive), I'll say it anyway, probably via my blog. But my paid-for research reports? Unsurprisingly, I try and pick the winners, not the losers, to write and sell reports on.
Hi, Dean.
Thanks for dropping in. If any readers don't know Dean Bubley, he's an analyst based in the UK, watching the wireless market. He's a smart guy, and runs a good weblog.
>>said Dr. Edward Weiler, NASA's Associate Administrator....one team used English units (e.g., inches, feet and pounds) while the other used metric units....1 meter = approx 3 feet. 200% error.
Yikes! I forgot about that. NASA burned up a spacecraft in the atmosphere of Mars, which is about what happened to the industry analysts' forecasts of handheld sales.
>>Michael, I think you're being slightly disingenuous here.
It's very gracious of you to give me an out, but actually I wasn't being even a tiny bit disingenuous. The one thing you can count on in my weblog is that I'll say what I think. I may get things wrong sometimes (like my NASA analogy above), but you're reading what I really believe.
If anything, I toned down this particular post a bit. The first draft was a lot harsher.
>>There are plenty of different categories of forecast, with a wide range of methodology and applicability.
Fair enough. I wasn't trying to be encyclopedic; I was thinking mostly of the high-profile forecasts from the major analysis firms. These forecasts are announced with press releases and get written up online and in newspapers.
I wasn't trying to pick on your work, Dean. I haven't ever seen a lot of your forecasts, so I can't judge.
>>Some just look at historic sales figures, or what the current vendors (and customers) predict will happen
Most of the forecasts I've seen appear to have been based on a straight-line extrapolation of last year's growth rate, massaged up or down based on how the analyst expected the overall economy to grow.
This is analogous to driving your car by looking in the rear-view mirror. Everything works fairly well until the road takes a sudden turn.
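Here's a minimal sketch of what that kind of rear-view-mirror forecast amounts to; the starting figures and growth rates are invented:

```python
# Caricature of a straight-line extrapolation forecast; all numbers invented.
last_year_units = 20_000_000     # last year's actual unit sales
last_year_growth = 0.15          # last year's observed growth rate
economy_adjustment = 0.02        # analyst's fudge factor for the overall economy

units = last_year_units
for year in range(1, 6):
    units *= 1 + last_year_growth + economy_adjustment
    print(f"Year {year}: {units:,.0f} units")
# The output looks precise to the last unit, but the method simply assumes
# the road ahead looks exactly like the stretch behind you.
```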
>>most headline numbers used in press releases do not include information on assumptions, sensitivities, methodologies, and so forth.
And why is that?
I believe it's because a press release saying, "we think the smartphone market might grow 100% next year, unless there's a major change in the market, or the economy tanks, or there's an unexpected new entrant, or the market turns out to saturate sooner than we expect" would not be picked up by the press, and a report headlined that way would not be bought by most companies because it'd sound too wimpy.
The press and the industry crave the illusion of certainty, and most of the forecasting companies serve that craving.
>>Many technology market forecasts cannot be "absolute" - and most are not given that way by analysts in their full reports.
Although the forecasts may indeed be nicely footnoted with lots of risk factors and other hedges, someplace in the report there's a numbers chart that gets extracted and used in a company's business plan, and all the hedging and nuancing gets left behind.
What I'm trying to do here is educate people not to do that. Once you fully take into account all the assumptions and dependencies and such, any forecast (especially one for longer than 12 months) becomes pretty darned vaporous.
I think a forecast can be a useful tool to educate a company about the range of possibilities in the future, and the factors that might drive a particular outcome. But I have almost never seen them used that way. If you're using your forecasts that way, then more power to you.
>>Unfortunately, unlike equity analysts, there are no published rankings of industry analysts.
Oh, man, wouldn't that be fun? I'd love to see all the past forecasts plotted against what actually happened in the market. Maybe there are indeed some companies that generally get their forecasts right. But I sure never found them.
Anyway, thanks for the comments, Dean. Although I disagreed with you on some points, I want to make clear that I respect your professionalism and sincerity.
Hi Michael and readers of the Mobile Opportunity blog
What a wonderful posting. Excellent, Michael. And don't be at all miffed about Dean's comments. We have a saying in Finland that it is the dog that yelps that got hit by the stick you threw (Se koira älähtää, johon kalikka kalahtaa). So Dean probably felt a pang of guilty conscience, ha-ha..
Speaking quite openly as one who is both guilty of being a forecaster and also of being a fellow analyst of reported actual statistics, I TOTALLY agree with you.
So first, yes, forecasts are always guesses into the future. Some forecasters may use methodologies that deliver less of an error than others, but yes, all forecasts are inherently wrong. If ever - EVER - a forecaster hit the numbers on the money, it was totally an accident. And any good forecaster will also tell you that.
What we can learn out of the process of forecasting, is to analyze trends, patterns, and thus try to learn about an industry or economy etc.
Still, yes, me too - whenever I look at any published numbers for mobile telecoms, I only focus on the reported actual numbers. I mostly don't even bother to record the weird and wonderful projections into the turn of the decade or wherever they happen to be aiming.
With all that being said, there are honest, professional forecasters who do come back to their forecasts and explain what went wrong when a number was too high (where is the MMS picture messaging traffic you promised us in 2001, Tomi?) or too low (how come SMS revenues are higher today than you forecasted in 2001?).
I am one of those that is stuck with a history I cannot escape. My second book, m-Profits, was the first business book for advanced mobile telecoms and it is full of stats - and my forecasts. I am happy that for the most part I am in the ball park, although for almost every forecast I made, I also have to come back with explanations and corrections today.
But your part about analyzing the reported "true" numbers is PRICELESS. It is so true. I used to run the 3G Business Consultancy Department for Nokia (mine was the unit that reported all ARPU forecasts and messaging, videocalling, subscriber penetration, churn etc forecasts formally for Nokia). And in that work of course I also got very familiar with the processes both of ours but also of the various other analysts in the mobile space.
You are totally right. We massage the numbers. We fuddle with the definitions. A reported number from one provider is not comparable with another. The Blackberry example is a perfect example.
VERY GOOD POSTING MICHAEL !!
I am one of the "stats police" at various mobile communities. I would welcome you and your readers to join us at Forum Oxford, the free expert community that was set up by the leading authors and bloggers in mobile. It is at www.forumoxford.com, and for your first-time sign-up you need an enrollment key, for which you use the word "forumoxford". We have a lot of very professional people over there who would love to have this dialogue with you as well.
Please join us there.
Thank you for posting about this.
Tomi Ahonen
4-time bestselling author on mobile
founding member Forum Oxford, Carnival of the Mobilists and Engagement Alliance
website www.tomiahonen.com
blogsite www.communities-dominate.blogs.com
Thanks for the comments, Tomi!
>>What we can learn out of the process of forecasting, is to analyze trends, patterns, and thus try to learn about an industry or economy etc.
An excellent point, and I should have said this -- the most useful thing you can do with a forecast is ask, "why is the author predicting this? What trends is he/she seeing?" Even if the numbers are wrong (and they will be) the discussion about the trends can be very enlightening.
The worst forecasts, in my opinion, are the ones that just deliver numbers and don't explain the basis for them. When I see those I always wonder if the authors are afraid to expose how superficial their thinking is.
>>And don't be at all miffed about Dean's comments.
No risk of that. Dean did some contracting for PalmSource, and he knows I respect him.
I wasn't even thinking of analysts like him when I wrote my post. I was referring to certain large analysis companies that make a business out of selling forecasts.
This is a rather late comment to your post, but anyway…
I always thought of industry analyst reports as self-fulfilling prophecies, especially the ones made by Gartner and Forrester. The content of the reports is not always accurate, as you pointed out, and sometimes there are flaws in the logic behind the arguments.
Still, I believe that these reports have a rather large impact on decisions made in companies.
Without mentioning any names, I have to admit that I have seen decision making in companies that rely partly on these reports and also taken part in such decision making for a short while.
In order to successfully advance a good idea (an innovation; e.g. a product evaluated by Gartner), several factors need to align. For simplicity, I am going to focus on ideas in the form of technological innovations. The model I am suggesting follows this flow of events:
Criticism of source / Technological understanding – Business model – Communication – Implementation
First, the idea needs to be good. This suggests that the person advancing the idea should be capable of understanding the idea and capable of evaluating the “goodness” of the idea. Goodness is a rather ambiguous concept, so for simplicity this constraint can be relaxed to application of criticism of source and/or technological understanding of the idea.
Second, the person advancing the idea should be capable of making a reasonable business model. In some cases it requires more effort to create a business model than it takes “just to do it”; my argument does not apply to those.
Third, the communication is crucial. The communication part could be split up into several sub-categories: quality of presentation (assuming it’s the way to advance an idea), personality of the presenter and mood of the audience. All of these need to reach some level of acceptability to deliver a message (an idea) successfully.
Fourth, the idea needs to be possible to implement and implemented.
At each of these steps an idea can be perceived as not worth pursuing. One of the problems is that the audience for the business model, usually senior management, often has quite different qualities than those needed to assess the reliability of the business plan presented. Don't get me wrong here; I'm not saying that senior managers are incompetent, I'm merely trying to point out that they usually have (and should have) very different knowledge and skills than those needed to understand, e.g., the technical efficiency of an idea (a technical innovation). This actually brings me to the point I'm trying to make.
In my experience, the first two steps are often blatantly skipped. In order to convince the sponsors (senior managers) that an idea is good and should be implemented, the first two steps are not essential. This is where industry analyst reports come into the picture. Sometimes these reports are even used as starting points for the creation of new strategies.
My experience is limited to SMEs, so I cannot say whether the same applies to large companies. Nevertheless, SMEs employ some 50-70% of all employees of private companies in OECD member countries (e.g. Finland 56.3%, the Netherlands 67.3%). SMEs make up a large part of the market, and their behavior significantly affects it.
Because industry analyst reports are used as described above, I believe that industry analyst reports actually play a significant role in the diffusion of technological innovations, regardless of the correctness of the content.
When the ideas have finally been implemented, who will be there to point fingers if they do not bring returns as high as were projected in the communication phase? This is the point where responsibilities blur and relative measures get used. If the implementation failed miserably, then there might be several other companies who made the same mistake (e.g. mortgages based on the assumption that the value of houses will always rise).