Style vs. Substance in Mobile Software

Although we’ve all been talking about mobile computing for years, the smartphone and tablet markets are still very young, and changing rapidly. Many app and web companies are struggling to figure out how mobile works and what makes it different from the more familiar world of websites and personal computers.

The depth of the confusion became clear to me recently when, as part of a research project, I had the chance to watch a huge archive of videos of users trying out mobile-specific apps and websites. The results were surprising. Many users struggled to figure out even basic tasks, and I saw the same design mistakes repeated over and over by different developers.

I’ve written a whitepaper on the findings, with many details and examples (you can download it here).* In this post, I want to highlight the biggest problem I saw in the tests, and what I think it means for all of us.

The most common problem I saw in the tests was users struggling with mobile apps and websites that prioritized beauty over usability. Too often, we as an industry equate an app that looks simple with an app that’s easy to use. Those are two entirely different things. Stripping all the text out of an app and hiding all of the buttons makes for a beautiful demo at TechCrunch, but a horrible user experience for people who are trying to get something done with an app.

We tell ourselves that this is OK, relabeling confusion as “intrigue.” How many times have you seen an expert online say something like this: “Users enjoy the process of discovering new functions in your app as they gradually explore its interface and learn its hidden features”? From watching real people use apps, I can tell you that’s lunacy. What delights most mobile users is getting things done. The only time they want to explore an app’s hidden nuances is if they’re playing. In a game or other entertainment app, cryptic Myst-like interfaces make for an engaging puzzle. In all other apps, puzzlement is a sign of bad design.

Here are three examples of the trouble we're creating for ourselves:

Low contrast. A trend in modern graphic design is the use of low-contrast graphics and text: light gray or blue text on a white background, or dark gray text on a black background. It looks sexy in print and on the web, but causes problems in mobile. Smartphones are often used outdoors, in situations where any screen image is hard to see. Low-contrast items can completely disappear in direct sunlight. Often companies don’t realize that this will be a problem because they test their apps indoors, or do design reviews by projecting screen images in a darkened room.

If you think this is just an isolated problem, check out the weather app in iOS 7. I love the look of that white text that Apple superimposed over a pale blue sky with puffy white clouds. But can you read it? How will it look in the sun?



Cryptic icons. There are a few icons that mean the same thing on all mobile platforms. For example, the magnifying glass means “search” everywhere. But in most cases, the mobile OS players have used icon designs as a point of differentiation. The table below shows some conflicting icon designs in Android and iOS:

The last two examples in the table show similar icons in iOS and Android that have different meanings.

Some developers respond to this diversity by creating separate versions of their mobile app for each OS, with different icons in each version. But users are not as easily segmented. In the tests, I saw cases in which iOS users assumed the Android icon definitions, and vice-versa. The situation is even worse for a mobile web developer, who must use the same UI on all platforms. Which icon set should they use?

When icon designs conflict, they cancel each other out and mean nothing. Many apps are studded with icons that the developers think make sense, but that actually are just tiny meaningless pictures in the eyes of many users.

Missing help. I used to think the ideal mobile app would be so simple that everyone could figure out how to use it intuitively. I now realize that’s a fantasy. The tiny screen and other restrictions of a mobile device make it almost certain that people will sometimes be confused by your app.

When mobile app users get confused, the first thing they do is search in the app for a help function. If help is available and properly structured, the user can usually resolve the problem and get back on task.  Unfortunately, in most mobile apps and websites, help is minimal or totally absent. I don’t know why that is. Maybe developers feel adding help would be an admission that their app is hard to use. But that’s like saying you shouldn’t put seat belts in a car because it implies the car might crash. Plan for trouble and your users will be happier.


What it means. The fixes to these specific problems are straightforward:

—Use high-contrast text (black on white, white on black, or pretty close to it). And test your mobile app or website outdoors, in bright sunlight.
—Label all buttons with text in addition to (or instead of) icons.
—Add context-sensitive help to every screen in your app (the help can be as simple as an overlay saying what you’re supposed to do on this screen and what the buttons do).
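The first fix doesn’t have to be an eyeball judgment, either. If you want to quantify “high contrast,” the WCAG 2.0 contrast-ratio formula gives a quick sanity check (the formula is my addition here, not something from the tests). A minimal sketch in Python:

```python
# Rough sanity check for the "use high-contrast text" rule, using the
# WCAG 2.0 contrast-ratio formula.

def _linearize(channel):
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.0 definition)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def _luminance(hex_color):
    """Relative luminance of a color like 'ffffff'."""
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two hex colors, from 1.0 to 21.0."""
    l1, l2 = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("000000", "ffffff"), 1))  # black on white: 21.0
print(round(contrast_ratio("999999", "ffffff"), 1))  # light gray on white: 2.8
```

WCAG recommends at least 4.5:1 for normal body text; light gray on white scores well below that, which matches what the outdoor tests showed.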

The harder part is dealing with the underlying design attitude that created these problems in the first place. I don’t know exactly when we went astray on design. Early websites were horribly cluttered, and in reaction to that we started to see a welcome move toward cleaner and simpler designs online (think of Mint.com, which took a complex subject like personal finance and made it feel accessible). The rise of the iPhone, with Apple’s strong emphasis on design elegance, reinforced this trend. But somewhere along the way, we lost track of the user’s needs. Instead of making things simple, we made them simplistic. We hid features for the sake of hiding them, rather than because the user didn’t need them. And we started designing software that would look beautiful to VCs and other designers, rather than being helpful to our users.

If we’re going to permanently solve the usability problems in mobile, we need to readjust our attitude toward mobile design. The most beautiful app is not the one that looks most striking; it’s the one people can actually use. You should design your app to be usable first, and then make it as pretty as you can.

The highest form of beauty is functionality.

__________

*Full disclosure: In addition to my startup role at zekira.com, I’m working on mobile strategy for UserTesting.com. They’re the leading “talk aloud” user testing service, and they gave me access to their test archive for the whitepaper. I controlled the content of the research and the conclusions. And the company had nothing to do with this blog post; I wrote it because I thought you’d be interested in the findings.

Announcing "Map the Future," a Better Way to Create Business Strategy

I wanted to let you know that my book on business strategy, Map the Future, is now available. It’s a how-to book that teaches you how to combine information about competitors, customers, and technology trends to spot future opportunities and problems before they’re obvious. That lets you grab opportunities before anyone else, and prepare for your competitors’ responses before they happen.

This is not a theory or case-study book. It’s a practical how-to manual, summarizing the things I learned in a couple of decades of doing this stuff in Silicon Valley. The book is designed to help anyone who works on strategy, from individual contributor to senior manager. That’s a broad audience, so different parts of the book will be relevant to different readers. To help you find what you need, I organized it like a cookbook. It starts with an overview that's written for everyone, and then dives into very detailed how-to instructions on strategy-related subjects, ranging from how to manage a competitive analysis team to how to assemble a long-term road map.

The central idea behind Map the Future is that most companies think about the future the wrong way. Visionary companies (like Apple) try to impose their will on the future, like a military drill sergeant; analytical companies (like General Electric) try to predict the future in detail, like a weather report. Both approaches fail when there are changes we didn't anticipate. The reality is that you can neither fully predict nor fully control the future, because it hasn’t happened yet. But you can anticipate what could happen. What you need is a realistic map of the possibilities, like a highway map for the future, so you can see where you can and can't go, and then nudge events toward the future you want to create. Map the Future teaches you how to create the building blocks of that future roadmap (using competitive, customer, and technology information), and how to bring them together to drive strategy.

Topics covered include:
—How to segment the market for a new product
—How to create and use technology forecasts
—How to analyze competitors and test competitive products
—How to use market growth forecasts
—How to recruit and manage competitive analysis and market research teams
—How to manage third party researchers and analysts
—How to do competitive analysis and market research if you’re in a small company with no budget
—How to influence strategy in a large company
—How to guide Agile product development through strategy
—How to read the adoption curve and tell when you’ve crossed the chasm

One comment I’ve received from early readers is that the book has a lot of information on what works and doesn't work in large companies. That’s true; steering strategy at a big company is an especially tough task because of the politics involved. But I did my best to also highlight information and techniques relevant to small companies and startups. The sections relevant to small companies are labeled and hyperlinked, so you can jump straight to them if you want to.

For more information on the book, and sample content, click here.

At this time, Map the Future is only available electronically, at the e-bookstores below and through my website. I didn’t want to wait nine months for a print publisher, and besides I’ve spent years preaching the benefits of electronic publishing, so it’s time to eat my own dog food.

PDF & ebook bundle $14.99

(Includes the .mobi file for Kindle; .epub file for Apple, Android, Nook, and most other e-readers; and PDF files formatted for 8.5 x 11-inch pages, 10-inch tablets, and 7-inch mini-tablets.)  

Buy the ebook for $9.99

(Includes .mobi file for Kindle and .epub file for Apple, Android, Nook, and most other e-readers. About 340 pages.)  

PDF version $9.99

(For those who prefer to read PDF files. Includes PDF files formatted for 8.5 x 11-inch pages, 10-inch tablets, and 7-inch mini-tablets.)  

Buy the ebook on Amazon:
Map the Future

Buy the ebook on the Apple iBookstore:
 

Click here to buy the ebook on Barnes & Noble (Nook)  

Click here to buy the ebook on Kobo  

If you have problems ordering, contact me at the e-mail address here.

If you have questions or comments on the book, feel free to contact me directly, or post them below. Meanwhile, here are a few comments from people who reviewed pre-release copies of the book:

“Even before finishing the book, I had written a stream of emails to my professional colleagues, making suggestions for new approaches in our projects, based on the examples I had just read.”
—David W. Wood, Technology Planning Lead, Accenture Mobility

 “I wish all the business guidebooks I’ve read were as good as this one. Hell, I wish ANY of them were.”
—Matt Bacon, Deputy Director, Device Strategy and Communication, Orange-France Telecom Group

“Map the Future will sit on my desk for years to come as an invaluable guide to help me make good decisions about the future.”
—Tom Powledge, VP and General Manager, Symantec Corporation

“Map the Future is a landmark guidebook for forward-thinking executives and strategy consultants.”
—Martin Geddes, Founder & Principal, Martin Geddes Consulting Ltd.

“Map the Future is your cookbook for developing a strong roadmap and strategy. I wish I'd had a guide like Map the Future when I started my career.”
—Gina Clark, Vice President & General Manager, Integrated Collaboration Group, Cisco Systems, Inc.

A big thank-you to the many Mobile Opportunity readers who offered advice and encouragement as I wrote the book. You helped a lot!

It’s Time to Reinvent the Personal Computer

“In chaos, there is opportunity.”
—Tony Curtis,
Operation Petticoat (and also Sun Tzu)

“Chaos” is a pretty good word to describe the personal computer market in 2013. Microsoft is trying to tweak Windows 8 to make it acceptable to PC users (link), its Surface computers continue to crawl off the shelf (link), PC licensees are reconsidering their OS plans and business models (link), and Apple’s Macintosh business continues a genteel slide into obscurity (link, link).

No wonder many people say the personal computer is obsolete, kaput, a fading figment of the past destined to become as irrelevant as the rotary telephone and steam-powered automobile (link).

I beg to differ. Although Windows and Macintosh are both showing their age, I think there is enormous opportunity for a renaissance in personal computing. (By personal computing, I mean the use of purpose-built computers for productivity tasks including the creation and management of information.) I’m not saying there will be a renaissance, because someone has to invest in making it happen. But there could be a renaissance, and I think it would be insanely great for everyone who depends on a computer for productivity.

In this post I’ll describe the next-generation personal computing opportunity, and what could make it happen.


What drives generational change in computing?

Let’s start with a bit of background. A generational change in computing is when something new comes along that makes the current computing paradigm obsolete. The capabilities of the new computers are so superior that previous generations of apps and hardware are instantly outdated and need replacement. Most people would agree that the transition from command line computers to graphical interface (Macintosh and Windows) was a great example of generational change.

What isn’t as well understood is what triggered the graphical interface transition. It wasn’t just the invention of a new user interface. The rise of the Mac and Windows was driven by a combination of factors, including:

—A new pointing device (the mouse) that made it easier to create high-quality graphics and documents on a computer.
—Bitmapped, full-color displays that made it easy for computers to display those graphics, pictures, and eventually video. Those displays also made it easier to manage file systems and launch apps visually.
—New printing technology (dot matrix and laser printers) that made it easy to share all of those wonderful new documents and illustrations we were creating.
—A new operating system built from the ground up to support these new capabilities.
—An open applications market that enabled developers to turn all of these capabilities into compelling new apps.

All of those pieces had been around for years, but it wasn’t until Apple integrated them all well, at an affordable price, that graphical computing took off. The new interface and new hardware, linked by a new or rebuilt OS, let us work with new types of data. That revolutionized old apps and created whole new categories of software.

Windows and Mac took off not because they were new, but because they let us do new things.

Although later innovations, such as the Internet, added even more power to personal computing, it’s amazing how little its fundamental features and capabilities have changed since the mid-1990s. Take a computer user from 1979 and show them a PC from 1995, and they’ll be completely lost in all the change. Take a computer user from 1995 and show them a PC from 2012, and they’ll admire the improved specs but otherwise feel very much at home.
   
Maybe this slowdown in qualitative change is a natural maturation of the market. After an early burst of innovation, automobiles settled down to a fairly standard design that has changed only incrementally in decades. Same thing for jetliners.

But I think it’s a mistake to look at personal computers that way. There are pending changes in interface, hardware, and software that could be just as revolutionary as graphical computing was in the 1980s. In my opinion, this would be a huge opportunity for a company that pulls them all together and makes them work.


Introducing the Sensory Computer

I call the new platform sensory computing because it makes much richer use of vision and gestures and 3D technology than anything we have today. Compared to a sensory computer, today’s PCs and even tablets look flat and uninteresting.

There are four big changes needed to implement sensory computing.

The first big change is 3D. Like desktop publishing in the 1980s, 3D computing requires a different sort of pointing device, new screen technology, and a new kind of printer. All of those components are available right now. Leap Motion is well into the development of gesture-based 3D control. 3D printers are gradually moving down to smaller sizes and more affordable price points. And 3D screens that don’t require glasses are practical, but have a limited market today because we keep trying to use them for televisions, a usage that doesn’t work with the screen’s narrow viewing angle.

But guess what sort of screen we all use with a very narrow viewing angle, our heads perched right in front of it at a fixed distance? The screen on a notebook computer.

Today we could easily create a computer that has 3D built in throughout, but we lack the OS and integrated hardware design that would glue those parts together into a solution that everyone can easily use.

You might ask what the average person would do with a 3D computer. Isn’t that just something for gamers and CAD engineers? The same sort of question was asked about desktop publishing in the 1980s. “Who needs all those fonts and fancy graphics?” many people said. “For the average person Courier is just fine, and if I need to send someone an image I’ll fax it to them.”

Like that skeptical computer user in the 1980s, we don’t know what we’ll do when everyone can use 3D. I don’t expect us to send a lot of 3D business letters, but it sure would be nice to be able to create and share 3D visualizations of business data and financial trends. I’d also like to be able to save and watch family photos and videos in 3D. How about 3D walkthroughs of hotels and tourist attractions on Trip Advisor? The camera technology for 3D photography exists; we just need an installed base of devices to edit and display those images. And although I don’t know what I’d create with a 3D printer, I’m pretty sure I’d cook up something interesting.

Every time we’ve added a major new data type to computing, we’ve found compelling mainstream uses for it. I’m confident that 3D would be the same.

The second big change is modernizing the UI. User interface is ultimately about increasing the information and command bandwidth between a person and a computer. The more easily you can get information in and out of the computer, the more work you can get done. The mouse-keyboard interface of PCs, and the touch-swipe interface of tablets, were great in their time, but dramatically constrain what we can do with computers. We can do much better.

The first step in next-generation UI is to fully integrate speech. This doesn’t mean having everything controlled by speech, but using speech technology where it’s most effective.

Think about it: What’s the fastest way to get information in and out of your head? For most of us, we can talk faster than we can type, and we can read faster than we can listen to a spoken conversation. So the most efficient UI would let us absorb information by reading text on the screen, but enter information into the computer by talking. Specifically, we should:
—Dictate text to the computer via speech, with an option to use a keyboard if you’re in public where talking out loud would be rude.
—Have the computer present information to us as printed text on screen, even if that information came over the network as something else. For example, the computer should convert incoming voice messages to text so you can sort through them faster.

We can do all of these things through today’s computers, of course, but the apps are piecemeal, bolted on, and forced through the funnel of an old-style interface. They’re as awkward as trying to do desktop publishing on a DOS computer (something that people did try to do for years, by the way).

Combine speech with 3D gestures and you’ll start to have a computer that you can control very richly by having a conversation with it, complete with head nods and waves of the hand. Next we’ll add the emerging science of eye tracking. I’m very impressed by how much progress computer scientists are making in this area. It’s now possible to build interfaces that respond to the things we look at, to facial expressions, and even to our emotional response to the things we see. This creates an incredibly rich (and slightly creepy) opportunity to build a computer that responds to your needs almost as soon as you realize them.

Once we have fully integrated speech, gesture recognition, and eye tracking, I’m not sure how much we’ll need other input technologies. But I’d still like to have the option to use a touchscreen or stylus when I need precision control or when a task is easier to do manually (for example, selecting a cell in a spreadsheet or drawing something). And as I mentioned, you’ll need a keyboard option for text entry in public places. But these are backups, and a goal of our design should be to make them options rather than a part of the daily usage experience.

The third change is a new paradigm for user interaction. In a word, it’s time to ship cyberspace. The desktop metaphor (files and folders) was driven by the capabilities of the mouse and bitmapped graphics. The icons and panels we use on tablets are an adaptation to the touchscreen. Once we have 3D and gesture recognition on a computer, we can rethink how we manage it. In the real world, we remember things spatially. For example, I remember that I put my keys on my desk, next to the sunglasses. We can tap into that mental skill by creating 3D information spaces that we move through, with recognizable landmarks that help to orient us. Those spaces can zoom or morph interactively depending on what we look at or how we gesture. Today’s interface mainstays such as start screens and desktops will be about as relevant as the flowered wallpaper in grandma’s dining room. Computer scientists and science fiction authors have played with these ideas for decades (link); now is the time to brush off the best concepts and make them real.

The fourth change is to modernize the computing ecosystem. The personal computer software ecosystem we have today combines 20-year-old operating system technology with a ten-year-old online store model created by Apple to sell music. There’s far more we could do to make software easy to develop, find, and manage. The key changes we need to make are:

—The operating system should seamlessly integrate local and networked resources. Dropbox has the right idea: you shouldn’t have to worry about where your information is, it should just be available all the time. But we should apply that idea to both storage and computer processing. We shouldn’t have web apps and native apps, we should just have apps that take advantage of both local computing power and the vast computational resources of the web. An app should be able to run some code locally and some on a server, with some data stored locally and some on the network, without the user even being aware of it. The OS should enable all of that as a matter of course.

In this sense, the advocates of the netbook have it all wrong. The future is not moving your computing onto the network; it’s melding the network and local computer to produce the best of both worlds.
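A real version of this blending would live deep in the OS, but the core idea — the caller shouldn’t know or care where the work runs — can be sketched in a few lines. Everything below is hypothetical: the names are made up, the size threshold is arbitrary, and the “remote” path is a local stub so the sketch stays runnable.

```python
# Toy sketch of "run some code locally and some on a server" without the
# caller caring which. A real system would ship the work over the network;
# here run_remotely() is a stub so the example is self-contained.

LOCAL_SIZE_LIMIT = 1_000  # pretend the device can only crunch small inputs

def run_remotely(func, data):
    """Stub for server-side execution; a real OS would RPC this to the cloud."""
    return ("remote", func(data))

def run_locally(func, data):
    """On-device execution."""
    return ("local", func(data))

def compute(func, data):
    """Dispatch transparently: small jobs stay on-device, big ones offload."""
    if len(data) <= LOCAL_SIZE_LIMIT:
        return run_locally(func, data)
    return run_remotely(func, data)

where, total = compute(sum, range(10))        # small job: stays on the device
print(where)                                  # local
where, total = compute(sum, range(100_000))   # big job: offloaded
print(where)                                  # remote
```

The point of the sketch is that `compute` is the only call site the app sees; where the work actually happens is the OS’s business, not the developer’s or the user’s.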

—Discovery needs work. App stores are great for making apps available, but it’s very hard to find the apps that are right for you. Our next generation app store should learn your interests and usage patterns and automatically suggest applications that might fit your needs. If we do this right, the whole concept of an app store becomes less important. Rather than you going to shop for apps, information about apps will come to you naturally. I think we’ll still have app stores in the future because people like to shop, but they should become much less important: a destination you can visit rather than a bottleneck you must pass through.
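The “learn your usage patterns and suggest apps” idea is a straightforward similarity problem at its core. Here’s an illustrative sketch, with made-up app names and tags; a real store would learn these features from behavior rather than a hand-written dictionary.

```python
# Illustrative sketch of usage-aware app discovery: recommend the catalog app
# whose feature tags best overlap with the apps the user already runs.
# All app names and tags below are hypothetical.

def jaccard(a, b):
    """Overlap between two tag sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

CATALOG = {
    "TaskMaster": {"text", "sync", "reminders"},
    "LensFun":    {"camera", "filters", "sharing"},
    "TuneBox":    {"music", "streaming"},
}

def recommend(installed):
    """Suggest the un-installed catalog app most similar to anything the user has."""
    candidates = {name: tags for name, tags in CATALOG.items() if name not in installed}
    return max(
        candidates,
        key=lambda name: max(jaccard(candidates[name], tags) for tags in installed.values()),
    )

user_apps = {"QuickNotes": {"text", "sync"}}  # what this user already runs
print(recommend(user_apps))                   # TaskMaster
```

A note-taker who syncs text gets pointed at a task manager, not a camera app — the suggestion comes to the user instead of the user digging through a storefront.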

—Security should be built in. The smartphone operating systems have this one right: each app should run in a separate virtual sandbox where malicious code can’t corrupt the system. No security system can be foolproof, but we can make personal computers far more secure than they are today.

—Payment should be built in as well. This is the other part of the software and content ecosystem that’s broken today. Although the app and content stores have made progress, we’re still limited to a small number of transaction types and fixed price bands. You can’t easily sell an app for half a cent per use. You can’t easily sell a software subscription with variable pricing based on usage. As an author, you can’t easily sell an ebook for more than $10 or less than 99 cents without giving up 70% of your revenue. And you can’t easily sell a subscription to your next ten short stories. Why? Because the store owners are manipulating their terms in order to micro-manage the market. They mean well, but the effect is like the worst dead-hand socialist market planning of the 1970s. The horrible irony is that it’s being practiced by tech companies that loudly preach the benefits of a free market.

It’s time for us to practice what we preach. The payment system should verify and pass through payments, period. Take a flat cut to cover your costs and otherwise get out of the way. The terms and conditions of the deal are between the buyer and the creator of the app or content. Apple or Google or Amazon has no more business controlling what you buy online than Visa has controlling what you buy in the grocery store. The free market system has been proven to produce the most efficiency and fastest innovation in the real world; let’s put it to work in the virtual world as well.

Adding it up. Let’s sum up all of these changes. Our next-generation computer now has:
—A 3D screen and 3D printing support built in, with APIs that make it easy for developers to take advantage of them.
—Speech recognition, gesture recognition, and eye tracking built in, with a new user interface that makes use of them.
—A modernized OS architecture that seamlessly blends your computer and the network, while making you more secure against malware.
—An app and content management system that makes it easy for you to find the things you like, and to pay for them in any way you and the developer agree to.

I think this adds up to a new paradigm for computing. It’s at least as revolutionary as the graphical computing revolution of the 1980s. We’ve opened up new usages for your computer, we’ve enabled developers to revolutionize today’s apps through a new interface paradigm, and we’ve made it much easier for you to find apps and content you like.

Why can’t you just do all this with a tablet? You could. Heck, you could do it with a smartphone or a television set. But by the time you finished adding all these new features and reworking the software to make full use of them, you would have completely rebuilt the whole device and operating system. You’ll no longer have a cost-efficient tablet, but you’ll still have all the flaws and limitations of the old system, jury-rigged and still adding cost and inefficiency. Windows 8, anyone?

It’ll be faster and cheaper just to design our new system from scratch.


When will we get a sensory computer?

If you agree that we’re overdue for a new computing paradigm, the next question is when it’ll arrive. Unfortunately, the answer is that it may not happen for decades. Major paradigm changes in technology tend to creep forward at a snail’s pace unless some company takes on the very substantial task of integrating and marketing them. Do you think ebooks would be taking off now if Amazon hadn’t done Kindle? Do you think the tablet market would be exploding now if Apple hadn’t done the iPad? I don’t think so, and the proof is that you could have built either product five years earlier, but no one did it.

So the real question is not when we’ll get it, but who might build it. And that’s where I get stuck.

Microsoft could do it, and probably should. But I doubt it will. Microsoft is so tangled up now in tablet computing and Windows 8 that I find it almost impossible to believe that it could take on another “replace the computer” initiative. I think there’s a very good argument that Microsoft should have done a sensory computer instead of Windows 8, but now that the decision’s made, I don’t think it can change course.

Google could do it, but I don’t think it will. Google is heavily invested in the Chrome netbook idea. It’s almost a religious issue for Google: as a web software company, the idea of a computer that executes apps on the web seems incredibly logical, and is emotionally attractive. Google also seems to be hypnotized by the idea that reducing the cost of a PC to $200 will somehow convert hundreds of millions of computer users to netbooks. I doubt it; PC users have been turning up their noses for decades at inexpensive computers that force them to compromise on features. The thing they will consider is something at the same price as a PC but much more capable. But I don’t think Google wants to build that.

One of the PC companies might take a stab at it. Several PC companies have tried from time to time to sell computers with 3D screens. Theoretically, one of those companies could put together all the features of a sensory computer. I think HP is the best candidate. It already plans to build the Leap Motion controller into some of its computers, and I can imagine a beautiful scenario in which HP creates a new ecosystem of sensory computers, low-cost home 3D printers that render in plastic, and service bureaus where you can get your kid’s science fair design converted to aluminum (titanium if you’re rich). It would be glorious.

But it’s not likely. To work right, a sensory computer requires a prodigious amount of new software, very careful integration of hardware and software features, and the courage to spend years kick-starting the ecosystem. I don’t think HP has the focus and patience to do it, not to mention the technical chops, alas. Same thing for the other PC companies.

Meg, please prove me wrong.

Apple is the company that ought to do it, but does it have the will? Apple has the expertise and the market presence to execute a paradigm change, and its history is studded with market-changing products. I love the idea of Apple putting a big 3D printer at the back of every Apple store. Maybe you could let Sensory Mac users sell their designs online, with pickup of the finished goods at any Apple store, and Apple (naturally) taking a 30% cut...

But I’m not sure if today’s Apple has the vision to carry out something like that. The company is heavily invested in smartphone and tablet computing, with an ongoing hobby around reinventing television. There’s not much executive bandwidth left for personal computing. The company’s evolution has taken it steadily toward mobile devices and entertainment, and away from productivity.

Think of it this way: If Apple were really focused on personal computing innovation, would it be letting HP take the lead in integrating the Leap Motion controller? Wouldn’t it have bought Leap Motion a year ago to keep it away from other companies?

I think personal computing is a legacy market to Apple, an aging cash cow it’ll gently milk but won’t lead. I hope I’m wrong.

We’re out of champions, unless...  At this point we’ve disposed of most of the companies that have the expertise and clout to drive sensory computing. I could make up scenarios in which an outlier like Amazon would lead, but they’re not very credible. I think the other realistic question is whether a startup could do it.

It’s hard. Conventional wisdom says that you need $50 million to fund a computer system startup, and that sort of capital is absolutely positively not available for a company making hardware. But I think the $50 million figure is outdated. The cost of hardware development has been dropping rapidly, driven by flexible manufacturing processes and the ability to rapidly make prototypes. You could theoretically create a sensory computer and build it in small lots, responding to demand as orders come in. This would help you avoid the inventory carrying costs that make hardware startups so deadly.

The other big barrier to hardware startups has been the need to get into the retail channel, which requires huge investments in marketing, sales support, and even more inventory. Here too the situation has been changing. People today are more willing to buy hardware online, without touching it in a store first. And crowdfunding is making it more possible for a company to build up a market before it ships, including taking orders. That business model actually works pretty well today for a $100 gizmo, but will it work for a $2,000 productivity tool?

Maybe. I’m hopeful that some way or another we’ll get a sensory computer built in this decade. At this point, the best chance is to talk up the idea until one or more companies take it on. Which is the point of this post.

[Thanks to Chris Dunphy for reviewing an early draft of this post. He fixed several glaring errors. Any errors that remain are my fault, not his.]


What do you think?  Is there an opening for a sensory computer? Would you buy one? What features would you want to see in it? Who do you think could build it? Please post a comment and share your thoughts.

The Dell Buyout: Storm Warning for the Tech Industry

Michael Dell is engaged in a lengthy struggle to take his company private, and if you’re focused on the smartphone and tablet markets, you probably don’t care. It’s hard to picture an old PC company like Dell pushing the envelope in tech, so from one perspective it doesn’t really matter who runs the company or whether it stays public or private. But I think Dell’s situation is important because it shows how the decline of Windows is changing the tech industry, and hints at much more dramatic changes that could affect all of us in the future.  In this post I’ll talk about what’s happening to Dell, why it matters, and what may happen next.


Why is Dell going private?


I should start with a quick recap of Dell’s situation: Michael Dell and tech investment firm Silver Lake Partners have proposed to take Dell private in a transaction funded in part by a $2 billion loan from Microsoft. The proposal has angered shareholders who believe the company is worth more than what was offered, and two competing proposals have emerged from Carl Icahn and Blackstone Group. Dell now apparently faces an extended period of limbo while the competing proposals are evaluated.

Given how messy this process could be, it’s reasonable to ask why Michael Dell started it in the first place. I’m surprised at how many conflicting explanations have surfaced:

—The deal is largely a tax avoidance scheme, according to Slate (link). Like many tech companies, Dell has accumulated a large pool of profit overseas which it can’t bring back into the United States without paying 35% income tax on it. If Dell takes itself private, it can use that money to pay off the interest from the buyout without paying tax on it.

—It’s a financial shell game according to some financial analysts, including Richard Windsor, formerly of Nomura. His scenario is that after Dell takes the company private, it will sell or spin out the PC half of the company to pay off the buyout. That will leave Michael Dell and his partners owning Dell’s IT services business at low cost (link).

—It’s a way for Michael Dell to get some peace. In this scenario, Michael Dell is a sensitive man who’s grown tired of taking criticism from investors. The buyout is a way to get away from them. This explanation showed up in a large number of press reports immediately after the proposal. For example, here’s PC World: “Michael Dell apparently grew tired of running his company to the whims of a stock market that often favors immediate return over long-term investment.” (link)

—It’s a necessary prelude to broad organizational changes at Dell. The Economist put it this way: “Making the kind of wrenching operational changes Silver Lake typically prescribes would be tricky for a public company anxious not to panic shareholders.” (link)

—Michael Dell did it to save his job. According to BusinessWeek, Michael Dell was afraid that an activist shareholder might take over the company and force him out as CEO. So he proposed the deal as a pre-emptive strike. (link).

The problem with analyzing a company’s motivations is that you tend to assume there’s a logical explanation for the things it did. Often there’s not. Company managers are frequently fearful or misinformed, and sometimes they just make dumb mistakes. It’s possible that’s happening with Dell. But if we assume a basic level of rationality, then we can probably discount some of the proposed explanations. For example, I personally doubt Dell can pay off the deal by selling the PC business, because I don’t think anyone would buy it. It’s not like there’s another Lenovo out there hungry to get into PCs, and Google already bought one floundering hardware company; I doubt it has the appetite for another.

I’m also skeptical that after a lifetime in business Michael Dell is so thin-skinned that he can’t stand shareholder criticism. If you have the ego and drive to build up a company from scratch to the size of Dell, you usually don’t care much about complaints from puny mundane humans.

And I find it hard to believe that Dell had to take the company private in order to reorganize it. If Dell took a machete to the PC business, I think most investors would cheer rather than panicking.

The explanation I lean toward is that Michael Dell was afraid he wouldn’t be left in charge long enough to finish transforming the company. You can make a case that as 15% owner and with a base of investors focused on long-term gains, his position was secure from takeover threats. But after I looked in more detail at the company’s finances, and some market trends, I started to suspect that he felt a lot less secure than you’d expect. There are big storm clouds on the horizon for Dell, and they’re darkening rapidly. Those trends also threaten the rest of the PC industry.


A storm’s a-brewin’

Dell’s problems have been developing for years. The company’s power probably hit its peak in about 2005, when it was the world’s #1 PC vendor with about 17% of the market. Dell was the upstart beast that had dethroned the PC powers like Compaq, HP, and IBM. But after 2005, the PC industry adopted many of the flexible manufacturing practices that had made Dell so powerful. PC sales also shifted toward notebooks, which are much less customizable than the desktop computers that made Dell successful. The company’s market share started to erode. Dell tried for several years to turn around the PC business through innovation and new product categories, to no effect. Then in late 2008 it changed strategy and started evolving itself into an IT services company (like IBM, but supposedly aimed more at small and medium businesses). Starting with Perot Systems, Dell made a long series of IT services acquisitions, a process that has continued to this day.

Throughout this process, Dell gradually lost PC share, dropping to 12% by 2011. But because the PC market was growing, Dell’s actual PC shipments were more or less flat, giving the company a financial cushion to fund its transition to services.

Then in 2012, the situation changed. For the first time in years, overall PC unit sales shrank. What’s more, Lenovo (the new upstart beast in the PC market) was taking share from the other leaders. The combination of a shrinking market and a growing Lenovo caused a big drop in Dell’s PC sales.

Worldwide PC (desktop and notebook) unit sales

This chart shows worldwide PC unit sales for calendar 2006-12. Until 2012, PC sales were growing fairly steadily, and I'm sure the management of Microsoft and the big PC companies found that reassuring. But in 2012, total PC unit sales dropped while Lenovo (the green wedge) continued to grow. This combination put huge pressure on sales of the other PC leaders, including Dell. (Source: Gartner Group)

Dell revenue (fiscal years)
This chart shows what that did to Dell’s revenue. The new parts of the company -- storage, services, and software -- were flat to slightly up last year. Servers grew as well. But they couldn’t grow quickly enough to offset the major declines in desktop and notebook computers. Dell’s total revenue dropped substantially. (The chart shows Dell’s financial years, which are about a year ahead of the calendar. So FY 2013 in this chart is roughly calendar 2012. Note that Dell did not break out its revenue by product line in FY 2010.) (Source: Dell financial reports)

I think the most disturbing thing for Dell about this revenue drop is that it happened in the face of the launch of Windows 8. New Windows launches have traditionally produced a nice uptick in PC sales as customers buy new hardware to go with the new software. Even the unpopular Windows Vista didn’t reduce PC sales. I’m sure Dell was expecting some sort of Windows 8 bounce, or at least a flattening in any decline. Instead, as we learned from the latest PC shipment reports, PC shipments dropped after the launch of Windows 8 (link). That indicates that the channel was probably stuffed with new Windows 8 PCs that have not yet sold through.

People who live in the world of smartphones and tablets are probably saying “so what?” But I doubt that was the reaction at Dell.

If you haven’t worked at a PC company, you’ll have trouble understanding how profoundly disturbing the current sales situation is for Windows licensees. The PC companies married themselves to the Microsoft-Intel growth engine years ago. In exchange for riding the Wintel wave, they long ago gave up on independent innovation and market-building. In many ways, they outsourced their product development brains to Microsoft so they could focus on operations and cost control. They trusted Microsoft to grow the market. Microsoft is now failing to deliver on its side of the bargain. Unless there's a stunning turnaround in Windows 8 demand, I think it’s now looking increasingly likely that we’ll see a sustained year over year drop in PC sales for at least several more quarters.

This is an existential shock for the PC companies. It’s like discovering that your house was built over a vast, crumbling sinkhole.

Prior to the PC sales decline, I think Michael Dell probably assumed that his PC business could continue to fund its growth in services for the foreseeable future. He has probably now reconsidered that assumption. If Lenovo continues to grow and the market continues to shrink, Dell’s revenue will drop further, and the company could be in a world of financial trouble a year from now. It’s the sort of trouble that can get a CEO fired even if he does own 15% of the company.

So here’s the sequence of events: By fall of last year, the troubles with Windows 8 were already becoming clear to the PC companies (remember, the Windows licensees have much better information on customer purchase plans than we get from the analysts). Michael Dell must have realized that he was headed for a significant decline in revenue. At the same time, we now know, one of the company’s major shareholders approached Michael Dell to float the idea of a buyout. That was apparently the trigger that started the whole buyout process.

Put yourself in Michael Dell’s shoes: the shareholders are getting restless already, and you know the situation is likely to get worse in the next year. Proposing a buyout now would be a pre-emptive strike to keep control over the company you founded. That’s what I think happened.

What happens next? After more confusion, someone will eventually win the bidding for Dell. All of the bidders seem to agree that Dell should continue to invest in services, so the real debate is over what happens to the PC business. Michael Dell says if he wins, Dell will re-engage with the PC market (link):

“While Dell's strategy in the PC business has been to maximize gross margins, following the transaction, we expect to focus instead on maximizing revenue and cash flow growth.”

In other words, Dell will cut its PC prices.

It seems strange that Dell would want to refocus on PCs after treating them like a cash cow for years. If the business was unattractive when PC sales were growing, why would it be attractive now? Maybe Dell decided that it needs strong PC sales to get its foot in the door to sell services. That seems like a reasonable idea. But shouldn’t the company have known that years ago?

Or maybe Dell feels that the interest and principal payments on its buyout will be smaller than the profits required of a public company. That might allow Dell to compete more aggressively in PCs while it still invests in services.

Maybe that’s the purpose of Microsoft’s $2 billion loan, to let Dell stay in PCs while it also grows services. It says something sad (and alarming) about Microsoft’s business if it now needs to pay companies to stay in the PC market.


What it means to the rest of us

I think the Dell deal is just the beginning of the Windows 8 fallout. There are several other, bigger shoes waiting to drop.

What will the other major PC licensees do? If you’re working at a company like HP or Acer, everything about this situation feels ugly. Your faith in Windows has been broken, you’re losing share to Lenovo, and now Microsoft is subsidizing one of your biggest competitors. I’d be tempted to fly out to Redmond and demand my own handout. And I’d also be willing to look at more radical options. There are several possibilities:

—Exit the PC market. HP considered this in 2011, but backed away after a change in CEO. I wonder if the company will think about it again. Meg Whitman says no, that the PC business is important to HP’s other businesses, such as servers, because they buy many of the same parts. Exit PCs and your costs will go up because you won’t have the same purchase volumes. That’s a pretty backhanded endorsement of the PC business, but I guess it’s possible.

Acer doesn’t really have the option of dumping PCs. They make up most of its business, so it has to stay in computing hardware, one way or another.

—Find a new plough horse. In this option, you replace Windows with a platform that has better growth prospects. That lets you continue to use your clone vendor skills, but in a market that’s growing. Acer and HP are both dabbling in Chrome netbooks (link) and Android tablets. I wouldn’t be surprised to see many more experiments along these lines. But it’s not clear how much market momentum Google can generate for its tablets and netbooks. HP and Acer could easily spend a lot of money for very few sales, and in the meantime create a rift with Microsoft that would be hard to repair if Windows 8 does eventually take off.

—Reinvest in creating differentiated devices. This is the other option: get off the clone treadmill and be more like Apple, a device innovator. The trouble with this is that many years ago, the PC licensees laid off the people who knew how to build new markets and new categories of computing device. Recovering those skills is like trying to grow a new brain – very slow, and hard to do when your head is stuffed with other things. You need to be incredibly patient during the learning process, and accept that there will be failures along the way. It’s hard for public companies to show that sort of patience.

So maybe you buy a company that knows how to make new-category devices. For example, you could have bought Palm. As time goes on, HP’s handling of that transaction looks more and more like a business Waterloo.

There aren’t many other hardware innovators that you could buy. RIM, maybe? Or HTC? But then you’re in a meatgrinder smartphone market dominated by Samsung and Apple. The PC market, even if it’s shrinking, might look more inviting.

Personally, I’d look at buying Nook. Not necessarily because I want to be in the ebook business, but to get a team that knows how to design good mobile devices and is familiar with working on a forked version of Android.


I don’t think any of these three options look very attractive, but the slower the takeoff for Windows 8, the more desperate the Windows licensees will get, and the more likely that they’ll try one or more radical “strategic initiatives” in the next year.


What if Microsoft gave a party and nobody came?  The situation for Microsoft is becoming more and more complicated. Windows is not dead. It has an enormous installed base of users who are hooked on Windows applications and won’t go away in the near future. However, Microsoft faces some huge short-term and long-term challenges, and many of its possible responses could make the situation worse rather than better.

I think it’s pretty clear that we’ve entered a period of extended decline in Windows usage, as customers use tablets to replace notebooks in some situations and for some tasks. The tablet erosion may be self-limiting; I don’t think you can use today’s tablets to replace everything a PC does. If that’s the case, Windows sales may eventually stabilize and even resume growing once the tablet devices have taken their pound of flesh.

On the other hand, it’s equally possible that tablets and netbooks will continue to improve, gradually consuming more and more of the Windows market. That’s certainly what Google is hoping to do with Chrome. What would happen if Apple made a netbook and did it right?

Microsoft had hoped to head off all these problems with Windows 8. By combining the best of PCs and tablets, Windows 8 was supposed to stop the tablet cannibalization and also set off a lucrative Windows upgrade cycle. Unfortunately, at least for the moment, Windows 8 is looking like the worst of both worlds – not a good enough tablet to displace the iPad, but different enough to scare away many Windows users.

This puts Microsoft in a nasty dilemma. If it believes that Windows 8 sales will eventually rebound, then Microsoft should invest heavily in keeping its PC partners engaged. In that context, the $2 billion loan to Dell is a reasonable stopgap to prevent the loss of a major licensee.

On the other hand, if Windows sales are entering a long-term period of gradual decline, Microsoft should be doing the exact opposite. Rather than spending money to keep licensees, it should be allowing one or more of them to leave the business, so the vendors that remain will still be profitable and willing to invest. It’s better for Microsoft to have seven licensees who are making money than ten licensees who all want to leave and are investing heavily in Chrome or Android or other crazy schemes.

Microsoft also faces a difficult challenge with Lenovo. Even if Windows sales turn up, Lenovo has been taking share so fast that it will be hard for other Windows licensees to grow. At current course and speed, Lenovo is likely to end up the largest Windows licensee. In the past, Microsoft didn’t care if one licensee replaced another; they were interchangeable. But Lenovo has close ties to the Chinese government, which has repeatedly shown that it’s willing to lean on foreign tech companies. That has to make Microsoft uncomfortable.

In that case, the $2 billion investment in Dell starts to look like a defensive measure to get someone to compete against Lenovo on price. But if Microsoft subsidizes a price war in PCs, that might give the other licensees more reason to disinvest, enabling Lenovo to gain share even faster.

This is the true ugliness of Microsoft’s situation. It is in danger of falling into a series of self-defeating actions:
—To combat tablets, it creates a version of Windows that accelerates the Windows sales decline.
—To keep its licensees loyal, it keeps too many of them in the market, which leaves Windows overdistributed and increases licensees’ incentive to leave.

The situation is becoming more and more fragile. As I said above, I don’t expect Windows to collapse instantly. But many companies are reconsidering their investments in it, a process that is likely to eventually give customers second thoughts as well. We could end up with an unexpected series of events that combine to break the loyalty of Windows users and start a migration away from it that Microsoft couldn’t stop.

The key question is whether Google, Apple, or some other vendor can give Windows customers and licensees an attractive place to run away to. So far they haven’t, but the year is still young. I’ll talk more about the possibilities next time.

The Real Story Behind Google’s Street View Program

This morning Google signed a consent decree with the US Federal Trade Commission to avoid a lawsuit over privacy concerns caused by Google’s Street View program, which covertly collected electronic data including the location of unsecured WiFi access points and the contents of users’ web access logs. As part of the settlement, the FTC released a huge collection of sworn depositions that were taken from Google employees during its investigation. I’ve been wading through the depositions, and they give tantalizing hints of a deeper data collection plan by Google. Here’s what I found...

–We shouldn’t be surprised that there was an Android angle to the data collection. Most smartphones contain motion sensors, which, when tuned to eliminate background noise, can detect the pulse of the user. That data was being combined with the phone's location and time data, and automatically reported to a centralized Google server.

Using this data, Google could map the excitement of crowds of people at any place and time. This was intended for use in an automated competitor to Yelp. By measuring biometric arousal of people at various locations, Google could automatically identify the most interesting restaurants, movies, and sporting events worldwide. The system was put into secret testing in Kansas City, but problems arose when it had trouble differentiating between the causes of arousal in crowds. This resulted in several unfortunate incidents in which the Google system routed adventurous diners into 24-hour fitness gyms and knife fights at biker bars. According to the papers I saw, Google is now planning to kill the project, a process that will involve announcing it worldwide with a big wave of publicity and then terminating it nine months later.

–TechCrunch reported about six months ago that Google was renting time on NASA’s network of earth-observation satellites. This was assumed to be a way to increase the accuracy of Google Maps. What wasn’t reported at the time was that Google was also renting time on the National Security Agency’s high resolution photography satellites, the ones that can read a newspaper from low Earth orbit. Apparently the NSA needed money from Google to overcome the federal sequester, and Google wanted a boost for Google+ in its endless battle with Facebook.

Google’s apparent plan was to automate the drudgery of creating status posts for Google+ users. Instead of using your cameraphone to photograph your lunch or something cute you saw on the street, Google would track your smartphone’s location and use the spy satellites to automatically capture and post photographs of any plate-shaped object in front of you, and any dog, cat, or squirrel that passed within ten feet of you. An additional feature would enable your friends to automatically reply “looks yummy” or “awww so cute.” (An advanced option would also insert random comments about Taylor Swift.) Google estimated that automating these functions would add an extra hour and 23 minutes to the average user’s work day, increasing world GDP by three points if everyone switched from Facebook to Google+.

–The other big news to me was the project’s tie-in to Google Glass, the company’s intelligent glasses. Glass doesn’t just monitor everything the user looks at and says; a sensor in Glass also measures pupil dilation, which can be correlated to determine the user’s emotional response to everything around them. This has obvious value to advertisers, who can automatically track brand affinity and reactions to advertisements. What isn’t widely known is that Glass can also feed ideas and emotions into the user’s brain. By carefully modulating the signals from Glass’s wireless transceiver, Google can directly stimulate targeted parts of the brainstem. This can be used to, for example, make you feel a wave of love when you see a Buick, or to feel a wave of nausea when you look at the wrong brand of beer.

This can sometimes cause cognitive problems. For example, during early tests Google found that force-fitting the concepts of “love” and “Buick” caused potentially fatal neurological damage to people under age 40. The papers said Google was working on age filters to overcome this problem.

Although today’s Glass products can only crudely affect emotions, the depositions gave vague hints that Google plans to upgrade the interface to enable full two-way communication with the minds of Glass users. (This explains Google's acquisition of the startup Spitr in 2010, which had been puzzling me.) The Glass-based thought transfer system could enable people to telepathically control Google’s planned fleet of moon-exploring robots. It may also be used to incorporate Glass users into the Singularity overmind when it emerges from Google’s server farms, which is apparently scheduled for sometime in March of 2017.

Posted April 1, 2013

The ghosts of April Firsts past: 
2012: Twitter at Gettysburg
2011:  The microwave hairdryer, and four other colossal tech failures you've never heard of
2010:  The Yahoo-New York Times merger
2009:  The US government's tech industry bailout
2008:  Survey: 27% of early iPhone adopters wear it attached to a body piercing
2007:  Twitter + telepathy = Spitr, the ultimate social network
2006:  Google buys Sprint