The mobile data apocalypse, and what it means to you

The mobile industry is now completing a huge shift in its attitude toward mobile data. Until pretty recently, the prevailing attitude among mobile operators was that data was a disappointment. It had been hyped for a decade, and although there were some successes, it had never lived up to the huge growth expectations that were set at the start of the decade. Most operators viewed it as a nice incremental add-on rather than the driver of their businesses.

But in the last year or so, the attitude has shifted dramatically from "no one is using mobile data" to "oh my God, there's so much demand for mobile data that it'll destroy the network." A lot of this attitude shift was caused by the iPhone, which has indeed overloaded some mobile networks. But there's also a general uptick in data usage from various sources, and the rate of growth seems to be accelerating.

Extrapolating the trend, most telecom analyst firms are now producing mobile data traffic forecasts that look something like this:

[chart: a hockey-stick curve of rapidly accelerating mobile data traffic]

The forecasts are driven by a couple of simple observations:

--Smartphones produce much more data traffic than traditional mobile phones. Cisco estimates that a single smartphone produces as much data traffic as 40 traditional feature phones. So converting 10 million people from feature phones to smartphones is like adding 390 million new feature phone users (400 million phone-equivalents of new smartphone traffic, minus the 10 million feature phones they replace), in terms of impact on the data network. The more popular smartphones get, the busier the network becomes.

--A notebook PC generates far more traffic than a smartphone. According to Cisco, a single notebook computer generates the same data traffic as 450 feature phones. As notebook users switch to 3G-enabled netbooks or add 3G dongles to their existing computers, they dramatically increase the data traffic load on the network.

You can read Cisco's analysis here.

This becomes especially interesting when you look at the forecasts for growth of 3G-equipped netbooks and notebooks. Mobile operators in many countries have started subsidizing sales of those devices if you pay for a data service plan. It's an attractive deal for many people. Say your son or daughter is going off to college. Do you buy them a regular notebook computer and also pay for the DSL service to their apartment, or do you buy them a 3G data plan for about the same price as DSL and get the netbook for free?

The forecasting firm In-Stat recently predicted that by 2013, 30% of all notebook computers will be sold through mobile operators and bundled with 3G data plans (link). Notebook computer sales worldwide are about 150 million units a year, so that's 45 million new 3G notebooks a year -- or the data equivalent of adding 20 billion more feature phones to the network every year.
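Both of those projections are just multiplication, so a quick back-of-the-envelope script makes the arithmetic explicit. The 40x and 450x multipliers are Cisco's estimates from above; everything else is my own arithmetic, not anyone's actual forecasting model:

```python
# Back-of-the-envelope check of the traffic-equivalence figures above.
# The multipliers are Cisco's estimates; the rest is plain arithmetic.
SMARTPHONE_X = 40    # one smartphone ~= 40 feature phones of data traffic
NOTEBOOK_X = 450     # one 3G notebook ~= 450 feature phones of data traffic

# 10 million feature-phone users convert to smartphones; each adds 40x the
# traffic, but stops generating their old feature-phone traffic:
converts = 10_000_000
net_added = converts * SMARTPHONE_X - converts
print(net_added)  # 390000000 -- the "390 million" figure

# In-Stat: 30% of ~150 million annual notebook sales bundled with 3G by 2013:
notebooks_3g = 150_000_000 * 30 // 100        # 45 million units a year
print(notebooks_3g * NOTEBOOK_X)              # 20250000000, i.e. ~20 billion
```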

Jeepers.

These forecasts are producing a behind-the-scenes panic among mobile network operators. The consensus is that there's no way their networks can grow quickly enough to support all that data traffic. There are several reasons:

--They can't afford to build that much infrastructure.

--Even if they could afford the buildout, they won't have enough bandwidth available to carry all that data, even with 4G.

--Traffic-shaping techniques like tiered pricing and usage caps can't restrain usage growth enough to save them, because...

--...fear of losing customers to a competitor will force them to continue to subsidize sales of 3G dongles and offer relatively generous caps in their data plans.

There are a number of projections showing the operators losing money on wireless data a few years from now, as costs continue to increase faster than revenue. The danger isn't so much that they'll all go broke as that they'll turn into zero-profit utilities.

Many operators now seem to be counting on WiFi as their ultimate savior. The theory is that if they can offload enough of the data traffic from their networks to WiFi base stations connected to wired networks, then maybe other measures like 4G, usage caps, and aggressive improvements to the network will let them squeak through.

It's an ironic situation. For a long time the mobile operators thought of themselves as the future lords of data communication. All devices would have 3G connections, the thinking went, and the fixed-line data carriers such as Comcast and BT would fade away just like the fixed-line voice companies are doing.

Instead, the new consensus is that we're moving to a world where the fixed-line vendors will be expected to carry most consumer data traffic for the foreseeable future. They'll provide your wireless connectivity at home and work, while the mobile network will fill in the gaps when you're on the move. The area of disagreement, of course, is who will get the majority of the access revenue. We'll let the fixed-line and mobile operators argue over that one; I want to talk about some of the other impacts of this weird new hybrid wireless world that we're heading into.

(I touched on some of this in my post on net neutrality a couple of weeks ago (link), but I want to go into more detail here.)


The brave new world of scarce mobile bandwidth

Built-in WiFi is now a good thing. For a long time, many mobile operators resisted selling smartphones with WiFi built in. They viewed WiFi networks as competitors for customer control, and wanted to discourage their use. Now that they see WiFi as their savior, the operators are suddenly encouraging its inclusion in phones. Don't be surprised if in the near future it becomes impossible to get a subsidized price for any smartphone that doesn't have WiFi built in.


Traffic shaping is a fact of life, and a likely source of irritation. Many mobile operators are already limiting the performance of the applications that consume the most data bandwidth (today that's mostly video and file sharing). In most cases the operators won't even tell you they're doing it unless the government requires them to; certain apps will just communicate more slowly, or fail altogether, when the network gets busy.

There are a couple of exceptions where operators have been more public about their traffic shaping activity. The 3 network in the UK recently announced restrictions (link). And O2 in the UK has given details on exactly which applications it restricts in its home wireless data service (link).

Current traffic shaping hasn't generated a firestorm of complaints from the average customer (as distinct from net neutrality advocates), in part because it is very hard for users to tell why a website runs slowly on a particular day. But as mobile traffic continues to increase, operators are going to find that it's cheaper to ratchet up the restrictions bit by bit rather than pay for more capacity. Eventually people will notice, and I worry that we'll end up in a situation in which the operators carefully balance out how much they can piss off their customers without creating an outright revolt. It's a lot like the way the US airline industry operates today, and it's a miserable experience for everyone involved.

What to do. There are better ways to shape traffic. I think operators should give customers more information on how much data they're using at any given time, so they can manage it themselves. Then let them make an informed decision about which apps they'll use their bandwidth on. It would be relatively simple to create an on-screen widget showing how much data is being transferred at any time, just like the signal strength and battery life indicators on today's phones.
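To make the idea concrete, here's a minimal sketch of such a meter. The function name, bar rendering, and example numbers are all invented for illustration; they're not any phone platform's actual API:

```python
# Sketch of an on-screen data meter, analogous to a signal-strength indicator.
# Everything here (names, rendering, thresholds) is illustrative, not a real API.
def usage_meter(bytes_used: int, cap_bytes: int, width: int = 10) -> str:
    """Render a text bar showing how much of the data cap has been consumed."""
    frac = min(bytes_used / cap_bytes, 1.0)   # never show more than 100%
    filled = round(frac * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {frac:.0%} of cap"

# A user who has consumed 3 GB of a 5 GB monthly allocation:
print(usage_meter(bytes_used=3 * 1024**3, cap_bytes=5 * 1024**3))
# [######----] 60% of cap
```

A phone could refresh a widget like this continuously, which would let users decide for themselves which apps are worth their remaining bandwidth.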

It's also possible to create some APIs that would tell a website how much bandwidth is available to it, so the developer could adjust its features accordingly. This idea is being tossed around between web companies and operators, but I don't know how much is actually being done about it.

Combine those changes with usage-based pricing (my next point) and customers will shape their own traffic. Then there won't be any need for covert manipulation of the network.


Say hello to capped data plans. Completely unlimited wireless data plans are not sustainable long term; the economics of them just don't work. And in fact, virtually no data plans today are completely uncapped; there is almost always some fine print about the maximum amount of traffic allowed before surcharges kick in or the user is tossed off the network.

Some people are saying that the operators should go back to charging by the byte, and in some parts of the world (particularly Asia), there is a long history of per-byte pricing. But the experience in most of the world has been that per-byte pricing makes users so nervous about their expenses that they won't use data services at all.

(DoCoMo in Japan has an interesting hybrid approach (link) in which it charges per-packet until the user hits a maximum charge of about $70 per month. Additional usage beyond that cap is free. So that's capped pricing rather than capped usage. This reduces customer fear of accidentally running up a gigantic bill, but I wonder how DoCoMo prevents power users from flooding the network with traffic. Maybe there's a second, hidden cap on total usage.)

What to do. I think the right answer in most of the world is going to be flat-rate data plans in which there's a clearly-communicated cap, with tiered charges beyond that. The cap will need to be set at a level that moderate users won't ever reach, so they don't become gun-shy about data. To alleviate the fear of accidentally running up a huge bill, there will also need to be an on-device meter showing how much of the user's monthly data allocation has been used (just telling them to go look at a website is not enough; it should be on-screen). I'm told that on-screen meters like this are already being offered on netbooks by some European operators.
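To sketch the difference between the two approaches discussed above -- a capped-usage plan with tiered overage charges, versus DoCoMo-style capped pricing -- here are both billing rules side by side, with invented prices:

```python
# Two ways to cap a data plan, with illustrative (invented) prices.
def capped_usage_bill(gb_used, base=30.0, cap_gb=5.0, per_gb_over=10.0):
    """Flat rate up to a clearly-communicated usage cap, tiered charges beyond it."""
    overage = max(gb_used - cap_gb, 0.0)
    return base + overage * per_gb_over

def capped_price_bill(gb_used, per_gb=20.0, max_charge=70.0):
    """DoCoMo-style: pay per unit of traffic, but the bill never exceeds a ceiling."""
    return min(gb_used * per_gb, max_charge)

print(capped_usage_bill(7.0))  # 30 + 2 * 10 = 50.0
print(capped_price_bill(7.0))  # hits the 70.0 ceiling
```

Note the asymmetry: the first model charges the power user more, while the second caps everyone's bill no matter how much they use -- which is exactly why the question arises above of how DoCoMo restrains its power users.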

Today most operators are pretty up-front about communicating the data limits when a computer is connected to a mobile network. But many of them are still deceptive toward smartphone customers. AT&T's Smartphone Personal service, for example, promises the following for $35 a month:

Included Data: Unlimited; Additional data: $0 per MB

Sounds pretty straightforward. No asterisks, no fine print. But if you click on the terms of service (link), you'll find a long list of banned application types, followed by this general provision:

"AT&T reserves the right to (i) deny, disconnect, modify and/or terminate Service, without notice, to anyone...whose usage adversely impacts its wireless network or service levels or hinders access to its wireless network... and (ii) otherwise protect its wireless network from harm, compromised capacity or degradation in performance."

In other words, if the network is getting slow, they can do anything to your service, at any time, without notice.

There is also a hidden 5 GB per month maximum:

"If you are on a data plan that does not include a monthly MB/GB allowance and additional data usage rates, you agree that AT&T has the right to impose additional charges if you use more than 5 GB in a month."

This is not just an American problem. Orange in the UK calls its iPhone data service "unlimited," but there's a footnote saying that "unlimited" actually means 750 megabytes a month, a surprisingly low cap compared to AT&T's.

If we're ever going to collectively manage mobile network overload, we'll all need to be much more up-front about the way it operates and what a particular service plan will and won't do.


Is residential 3G really a good idea? Especially in Europe, it's common for operators to tell people that they should ditch their DSL or cable modem at home and replace it with a 3G modem. That works out well only when the network has excess capacity. As soon as the networks start to get congested, the operators will need to offload traffic to residential WiFi routers connected to DSL or cable. If those residential fixed lines have been removed, the operators can't offload.

What to do. I think this one is going to be self-limiting. Once 3G bandwidth gets scarce, the operators will realize that they can get a lot more revenue feeding data to smartphones than to PCs. The math works like this: With a given amount of bandwidth, you could support a single notebook computer and charge about $50 a month, or support 11 smartphones at $30 a month each. Hmm, $330 a month versus $50, seems like a pretty easy decision.
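Spelled out, that's just the ratio of the Cisco multipliers quoted earlier, combined with the example prices above:

```python
# Why smartphones beat notebooks per unit of bandwidth: the Cisco multipliers
# quoted earlier, with the example monthly prices from the paragraph above.
NOTEBOOK_X, SMARTPHONE_X = 450, 40
smartphones_per_notebook = NOTEBOOK_X // SMARTPHONE_X  # 11 smartphones fit in
                                                       # one notebook's bandwidth
notebook_revenue = 50                                  # $50/month for one notebook
smartphone_revenue = smartphones_per_notebook * 30     # $30/month per smartphone
print(smartphones_per_notebook, notebook_revenue, smartphone_revenue)  # 11 50 330
```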

But there are two circumstances in which it would make sense for the operators to keep subsidizing PC sales:

1. If smartphone sales plateau. If this happens, eventually the network will catch up with demand and then there will be excess capacity for PCs; or

2. If operators can route most of the actual data traffic from PCs through WiFi connected to landlines. In this case they could sell you data plans knowing that you won't affect their networks much. That brings us to the next point...


Operators have a huge vested interest in unlocking WiFi access points. Most WiFi access points today are encrypted and inaccessible to other devices in the area. I think there's a strong financial incentive for mobile operators to work with fixed-line access companies to get those access points unlocked. The benefit for the wireless companies is clear -- the more WiFi points they can talk to, the fewer cell towers they need to build. But the benefits for the fixed-line operators are much less clear. Why should they help the mobile operators with their bandwidth crunch?

What to do. The ideal situation would be a revenue-sharing deal in which the mobile operators pay the fixed-line companies to open up access to their networks. In this scenario, your DSL or cable provider would give you a WiFi router pre-configured to automatically and securely share excess bandwidth with mobile devices in the area. Your own traffic would get priority, but any extra capacity could be shared automatically. The benefit for you as a consumer would be a free router, and/or a lower broadband bill as your provider passes along some of the revenue it gets from the mobile operators.

The effectiveness of this sort of approach is going to depend on the relative cost for an operator of subsidizing a set of WiFi base stations in an area, versus the cost of installing more wireless capacity. I wonder about weird scenarios like a DSL provider auctioning off excess WiFi capacity to wireless operators in a particularly congested area.
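That comparison reduces to a trivial break-even test. Every number below is invented; real subsidy and capacity costs would vary widely by market:

```python
# Toy break-even test: is subsidizing WiFi access points in a congested area
# cheaper than installing more cellular capacity there? All costs are invented.
def wifi_offload_cheaper(subsidy_per_ap, num_aps, cell_capacity_cost):
    """True if paying fixed-line partners for WiFi beats building out the cell network."""
    return subsidy_per_ap * num_aps < cell_capacity_cost

# e.g. 200 subsidized access points at $5/month vs. $3,000/month of new capacity:
print(wifi_offload_cheaper(5.0, 200, 3000.0))  # True: $1,000 < $3,000
```

An auction like the one imagined above would effectively let the market discover `subsidy_per_ap` neighborhood by neighborhood.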


Femtocells for the rest of us. Another very logical step for the operators is to start pushing femtocells aggressively. (Femtocells are radios that work like a short-range cell tower, but are the size of a WiFi router. You connect one to your DSL or cable line, and it offloads traffic from the wireless network; link.)

What to do. Today femtocells are generally sold as signal boosters in areas with marginal wireless coverage. But in the future I think it may make sense for operators to give away femtocells, or at least subsidize them, for customers who live in areas where the data network is congested.


What it all means: Fixed-mobile convergence with a twist

If you step back from the details, the big picture is that we really need a single integrated data network that encompasses mobile and fixed connections, and switches between them seamlessly. People have been talking about this sort of thing for years (check out the Wikipedia article on fixed-mobile convergence here), but the focus has generally been on handing voice calls between WiFi and cellular. That's hard to do technologically (because you can't interrupt a voice conversation during the handover for more than a fraction of a second). Besides, it doesn't solve a significant customer problem -- the voice network isn't the thing that's overloaded.

The place where we could really, really use fixed-mobile convergence is in data. I'm worried, though, that the intense competition between the wireless and wired worlds will make it difficult and slow to achieve the coordination needed. This might be a useful place for government to put its attention. Not in terms of regulating the integrated network into existence (that would be the kiss of death), but to grease the skids for cooperation between the mobile and fixed-line worlds.


Just one more thing...

Everything above is based on the assumption that those Cisco and analyst forecasts are correct. But Cisco has a vested interest in hyping fear of the data apocalypse (Emergency! Buy more routers now!!), and my general rule about tech analysts is that every time they all agree on something you should bet against them.

There is a genuine crunch in mobile data capacity going on at the moment; you can read about network outages caused by the iPhone even today. And I can assure you that for every network failure you read about, there are dozens of other failures and near-failures that don't get reported. Many wireless data networks are very stressed.

And the situation will get worse.

But there's no such thing as infinite demand. At some point the growth of mobile data will slow down, and it's very important to try to estimate how and when that'll happen, so we as an industry do not overshoot too badly. The question isn't whether the growth forecasts are wrong, it's when they will be wrong.

I'll write about that next week...

The OS is always greener...

In a report from a developer meeting, Nokia officials said they're moving to Maemo Linux as the OS for their high-end smartphones. That resulted in an entertaining little obituary in the Register by Andrew Orlowski (link). But then later in the day Nokia clarified that "we remain firmly committed to Symbian as our smartphone platform of choice" (link). That in turn led to a lively online debate about what Nokia actually said, and the challenges that Finnish people face when speaking English (check the comments here).

It's just one more chapter in the long and exquisitely awkward saga of Nokia and Symbian. From the outside I can't tell exactly what's going on at Nokia, and it's possible that Nokia itself doesn't know. It's a very large company, and various groups there can have conflicting agendas.

But I can't believe that there would be all of these repeated reports, leaks, and artfully-worded partial denials unless Nokia were de-emphasizing Symbian in the long run. The most prominent theory, which I believe based on things I hear through back channels, is that Nokia does indeed intend to move to Maemo at the high end. And, as we all know, in computing whatever's at the high end eventually ends up in the mainstream.

I'm sure Nokia has valid technical reasons for moving to another OS. Nokia has said that there are some things it wants to do with its smartphones that Symbian OS can't support. But still the change worries me. Nokia's biggest problem in the smartphone market isn't the OS it uses, it's the user experience and services layer in its smartphones. Moving to a new OS does almost nothing to fix that. It does force a lot of engineers to work on writing a lot of low-level infrastructure code that won't create visible value for users. It also forces Nokia to maintain two separate code bases, which will chew up even more engineers.

All of that investment could have gone into crafting some great solutions, the things that are the only way to pull customers away from Apple and RIM. At a minimum, it's a terrible shame that Nokia spent so much time and money on an OS that couldn't take it into the future.

(By the way, this focus on the OS doesn't apply only to Nokia. I hear a lot of buzz from operators and handset companies who believe that if they just pick the right OS they'll automatically end up with great smartphones. Android is the latest white knight for most of them, but of course Nokia's not going to depend on a technology from Google.)

There's an old joke in the tech industry about rearranging deck chairs on the Titanic. I don't think that applies to Nokia because they haven't hit an iceberg by any means. But I do have a mental picture of a sweet old lady who spends all her time every day cleaning the bathroom while the food is spoiling in the refrigerator.

Which mobile apps are making good money?

At a conference the other day, several industry executives were on a panel discussing mobile application stores. There were representatives from Yahoo, Qualcomm, Motorola, and an independent application store. Someone from the audience asked a simple question: "Other than entertainment apps, name three mobile applications that are monetizing well." (In other words, apps that have a good business model and are making good money.)

The interesting thing was that none of the panelists had a very satisfying answer. The Qualcomm person cited navigation apps and something called City ID, and had no third app. The app store guy cited search-funded apps (Opera) and apps that are extensions of PC applications (Skype). The Motorola person, who used to work at Palm, cited two cool old Palm OS developers (SplashData and WorldMate, the latter not even available for Motorola's Android phones). And the Yahoo guy talked about Yahoo-enabled websites.

None of them had the sort of answer that the room was looking for -- what categories of smartphone apps are making it, and what are their business models, so other developers will know what to emulate? I started to laugh at the panelists' obvious discomfort, but then I realized that if I'd been on the panel and had been asked the same question, I would have blown it too. I know of a lot of mobile app companies that aren't making steady money, because they send me e-mails asking for ideas, but I don't seem to hear from the raging successes. Also, because I try to focus on what needs to be fixed in the industry, I'm probably guilty of skewing my posts toward what's not working.

So I did some thinking and a bit of research, and here are my three nominations for categories of non-entertainment mobile apps that are making it, and why. Then I'll open it up to your comments -- I have a feeling you'll have much better answers than me.


1. Vertical-market business applications. This was a good category for PDAs ten years ago, and it's a good category for smartphones today. There are dozens of business verticals where information overload or an excess of paper forms hinders productivity. Find a way to manage that information electronically, and your application quickly pays for itself in increased productivity.

One example is ePocrates, which gives doctors drug reference, dosing, and interaction information. ePocrates has a beautiful business model in which the drug companies pay to get access to the doctors who use it. That helps the company give away its base product. I have to believe there are other verticals where you could create apps that would act as a middleman between suppliers and users.

Another interesting example, which I ran into at a conference recently, is Corrigo. They do work-order management (stuff like managing a mobile workforce and dispatching them to work sites on the fly). I like Corrigo because it makes good use of mobile technology, and scales nicely to multiple vertical markets.

Note that neither Corrigo nor ePocrates is a purely mobile application -- they are business solutions that leverage mobile. That's very typical of the business mobile market. It's not about being mobile for its own sake, it's about solving a business problem and using mobile technology to help do it.

One other cool thing about these businesses is that you can ignore the whole app store hassle and market them directly to the companies. You control your customer relationships, and you can keep 100% of your revenue.

2. PC compatibility applications. Inevitably some people will need to do on a mobile device the same things that they do on a PC -- edit a document, for example, or query a database. There's a solid market for applications to let the user do that. The market isn't enormous (not everyone is crazy enough to edit a spreadsheet on a screen the size of a Post-It note), but the people who need to do that are usually willing to pay for the apps. Or to make their employers pay for the apps. Documents to Go was probably the most successful application on Palm OS, and based on the stats posted by Apple I think it is probably one of the most lucrative non-entertainment apps on iPhone.

Unfortunately, Docs to Go is also a very well-entrenched application, so good luck displacing it. Maybe you can find another category of PC app that needs a mobile counterpart.

3. Brand extenders. There seems to be a steady market for mobile apps that help a major brand interact with its loyal users. A few recent examples:
--The Gucci app lets a customer get special offers, play with music, and find travel attractions endorsed by Gucci. The company calls it a "luxury lifestyle application."

--There are four different Nike iPhone apps: a shoe designer, a women's training guide, a football (soccer) training guide, and an Italian soccer league tracker.

--The Target store search app lets you find stores and search for items within them (it'll tell you which aisle to look in). (For those of you outside the US, Target is a large chain of discount department stores.)

--Magic Coke Bottle is a Coke version of the Magic 8-Ball. It's one of three Coke-branded apps.
The iPhone is the most popular platform for these apps today, although I expect they'll spread to other smartphone platforms over time.

The business model for this one is simple -- you get hired by the brand (or its marketing agency), they pay you to develop the app, and then they give it away. The more popular smartphones become, the more companies feel obligated to create mobile apps, so this is a growing market for now. (Beware, though -- having an iPhone app is kind of a corporate status symbol right now, like creating a corporate podcast was a couple of years ago. Development activity could drop off when businesses find the next trendy tech fad.)

To create this sort of app, you need to be very skilled at visual design, and you need to be comfortable managing custom development projects. Some developers don't have these sorts of project- and client-management skills, and you can get yourself into a lot of trouble if you sign a contract without understanding what'll really be required to execute on it.

Also, you don't get to change the world creating a shopping app for Brand X. But in the right situation this can be a good way to make money while you work on your own killer app on the side. And if you're not into changing the world, there are companies that have built solid ongoing businesses on custom mobile development.


Other possibilities

There are a few other categories of apps that I think could be candidates for inclusion, but in my opinion the jury is still out on them. I'm interested in what you think:

Location. Right now there are several location and direction apps selling well for iPhone, but with Google making directions free on Android, I fear the third party apps are at risk. However, the direction-finding business is a lot trickier than you'd expect (I learned that as a beta-tester for the Dash navigation system, which sometimes tried to get me to make a right turn by telling me to make three consecutive left turns). So we need to wait and see how good Google's directions are. But in the meantime I don't feel comfortable pointing to this as a viable category in the long term. What do you think?

Travel apps. There was once a very nice business in city guides for PDAs, but I get the sense that like many other categories of mobile apps, this one is being sucked into the free app vortex. But I suspect that there may still be a paid market for specialized tools like translation programs, and software that helps executives manage trips. WorldMate is an interesting example -- the base product is free, but if you pay you get special services.

Upgradeware. Speaking of free base products, I think this is the most intriguing possibility in the whole mobile app business today. In the PC world, there are a lot of app companies that manage to build sustainable businesses by giving away a free base product and then charging you for the advanced version (this is how most of Europe gets its antivirus software today, for example). In mobile this model worked well on Palm, but was not available on iPhone because Apple's terms and conditions prohibited a free application from offering in-app conversion to a paid upgrade. Apple just changed those terms.

Rob at Hobbyist Software asked the other day what I thought about the change. I think it's very long overdue, and I'm intensely interested in hearing from developers who have moved to that model. How's it working out for you?


Okay, so that's my list. If you're scheduled to appear on a panel somewhere, you're welcome to quote from it as needed. But now I'd like to throw the discussion open to you. Please post a comment -- What do you think of my list? And what non-entertainment mobile app categories, and business models, are making good money today, and why?

A web guy and a telecom guy talk about net neutrality

It was a nondescript bar in the American Midwest, the sort of place where working men drop in at the end of the day to unwind before they head home. You wouldn't expect to find two senior business executives there, and as I sat in the empty bar at midday I wondered if maybe my contact had given me a bad lead. But then the door opened and a general manager from one of the leading web companies walked in, followed by a senior VP from one of the US's biggest mobile network operators. I hunched down in the shadows of a corner booth and typed notes quietly as they settled in at the bar.

Bartender: What'll you have?

Telecom executive: Michelob Light.

Web executive: I'll have a Sierra Nevada Kellerweis.

Bartender: Keller-what?

Web executive: Um, Michelob Light.

Telecom executive: Thanks for coming. Did you have any trouble finding the place?

Web executive: All I can say is thank God for GPS. I've never even been on the ground before between Denver and New York.

Telecom executive: I wanted to find someplace nondescript, so we wouldn't be seen together. The pressure from the FCC is bad enough already, without someone accusing us of colluding.

Web executive: No worries, my staff thinks I'm paragliding in Mexico this weekend. What's your cover story?

Telecom executive: Sailboat off Montauk.

Web executive: Sweet. So, you wanted to talk about this data capacity problem you have on your network...

Telecom executive: No, it's a data capacity problem we all have. Your websites are flooding our network with trivia. The world's wireless infrastructure is on the verge of collapse because your users have nothing better to do all day than watch videos of a drunk guy buying beer.

Web executive: Welcome to the Internet. The people rule. If you didn't want to play, you shouldn't have run the ads. Remember the promises you made? "Instantly download files. Browse the Web just like at home. Stream HD videos. Laugh at an online video or movie trailer while travelling in the family car."

Telecom executive: That was our marketing guys. They don't always talk to the capacity planners. Besides, who could have known that the marketing campaign would actually work?

Web executive: Don't look at me. I've never done a marketing campaign in my life. I think you should just blame it on A--

Telecom executive: You promised, no using the A-word.

Web executive: Sorry. But I still don't see why this is a problem. Just add some more towers and servers and stuff.

Telecom executive: It's not that simple. The network isn't designed to handle this sort of data, and especially not at these volumes. Right now our biggest problem is backhaul capacity -- the traffic coming from the cell towers to our central servers. But when we fix that, the cell towers themselves will get saturated. Fix the towers and the servers will fall over somewhere. It's like squeezing a balloon. We have to rebuild the whole network. It's incredibly expensive.

Web executive: So? That's what your users pay you for.

Telecom executive: But most of them are on fixed-rate data plans. So when we add capacity, we don't necessarily get additional revenue. It's all expense and no profit. At some point in the not-too-distant future, we'll end up losing money on mobile data.

Web executive: Bummer.

Telecom executive: More like mortal threat. Fortunately, we've figured out how to solve the problem. The top five percent of our users produce about 50% of the network's total traffic. So we're just going to cap their accounts and charge more when they go over.

Web executive: Whoa! Hold on, those are our most important customers you're talking about. You can't just shut them down.

Telecom executive: The hell we can't. They're leeches using up the network capacity that everyone else needs.

Web executive: Consumers will never let you impose caps. You told them they had unlimited data plans, that's the expectation you set. You can't go back now and tell them that their plans are limited. They won't understand -- and they won't forgive you.

Telecom executive: First of all, the plans were never really unlimited in the first place. There's always been fine print.

Web executive: Which no one read.

Telecom executive: Off the record, you may have a point. On the record, the fact is that you can retrain users. Look, you grew up in California, right?

Web executive: What does that have to do with anything?

Telecom executive: Once upon a time, there weren't any water meters in California. Now most of the major cities have them, and they'll be required everywhere in a couple of years. Something that was once unlimited became limited, and people learned to conserve.

Web executive: The difference is, I can read my water meter. You make a ton of money when people exceed their minutes or message limits, and you don't warn them before they do it. If you play the same game with Internet traffic, it'll scare people away from using the mobile web -- or, worse yet, you'll invite in the government. Look what happened with roaming charges in Europe.

Telecom executive: Jeez, don't even think about that. Okay, so we'll need to add some sort of traffic meter so people will know how much data they're using when they load a page.

Web executive: Great, that'll discourage people from using Yahoo.

Telecom executive: Huh?

Web executive: Oops, did I say that out loud?

Telecom executive: Then there's the issue of dealing with websites and apps that misuse the network.

Web executive: Not this again.

Telecom executive: I'm not talking about completely blocking anything, just prioritizing the traffic a little. Surely you agree that 911 calls should get top priority on the network, right?

Web executive: Of course.

Telecom executive: And that voice calls should take priority over data?

Web executive: I don't know about that.

Telecom executive: Oh come on, what good is a telecom network if you can't make calls on it?

Web executive: (sighs) Yeah, okay.

Telecom executive: So then what's wrong with us prioritizing, say, e-mail delivery over video?

Web executive: Because when you start arbitrarily throttling traffic, I can't manage the user experience. My website will work great on Vodafone's network but not on yours, or my site will work fine on some days and not on others. How do you think the customers will feel about that?

Telecom executive: Not as angry as they will be if the entire network falls over. Listen, we're already installing the software to prioritize different sorts of data packets. We could be throttling traffic today and you wouldn't even know it.

Web executive: But people will eventually figure it out. They'll compare notes on which networks work best and they'll migrate to the ones that don't mess with their applications. Heck, we'll help them figure it out. And if that's not enough, there's always the regulatory option. The Republicans are out of office. They can't protect you on net neutrality any more.

Telecom executive: You think you're better at lobbying the government than we are? We've been doing it for 100 years, pal. Besides, we have a right to protect our network.

Web executive: You mean to protect your own services from competition!

Telecom executive: Parasite!

Web executive: Monopolist!

Telecom executive: That's it! It's go time!

They both stood. The telecom guy grabbed a beer bottle and broke it against the bar, while the web guy raised a bar stool over his head. Then the bartender pulled out a shotgun and pointed it at both of them.

Bartender: Enough! I'm sick of listening to you two. Telecom guy, you're crazy if you think people will put up with someone telling them what they can and can't do on the Internet. The Chinese government can't make that stick, and unlike them you have competitors.

Web executive: See? I told you!

Bartender: Shut up, web guy! You keep pretending that the wireless network is infinite when you know it isn't. If you really think user experience is important, you need to start taking the capabilities of the network into account when you design your apps.

Web executive: Hey, he started it.

Telecom executive: I did not!

Bartender: I don't care who started it! Telecom guy, you need to expose some APIs that will let a website know how much capacity is available at a particular moment, so they can adjust their products. And web guy, you need to participate in those standards and use them. Plus you both need to agree on ways to communicate to a user how much bandwidth they're using, so they can make their own decisions on which apps they want to use. That plus tiered pricing will solve your whole problem.

Telecom executive: Signaling capacity too. Don't forget signaling.

Bartender: That's exactly the sort of detail you shouldn't confuse users with. Work it out between yourselves and figure out a simple way to communicate it to users. Okay?

Web executive: I guess.

Telecom executive: Yeah, okay.

Bartender: Good. Now sit down and start over by talking about something you can cooperate on.

Telecom executive: All right. Hey, what's that guy doing in the corner? Is that a netbook?

Web executive: He's a blogger!

Bartender: There's no blogging allowed in here!

Telecom executive and web executive: Get him!

I ran. Fortunately, the bar had a back door. Even more fortunately, the web guy and the telecom guy got into an argument over who would go through the door first, and I was able to make my escape.

So I don't know how the conversation ended. But I do know that I wish that bartender was running the FCC.

Is Apple too powerful?

The new iPod nano is a tour de force, the Swiss Army Knife of mobile entertainment. I'm sure there's some obscure gadget from Japan that packs more features per cubic millimeter, but I've never heard of it, and chances are neither have you. This one's a major consumer product, just in time to stimulate the economy this holiday season. Speaking as a technophile, I want one of the new nanos for the same reason I want a Dremel with 300 different bits: just because.

I'm also impressed by the new price point on the iPod Touch. Apple frequently overhypes its announcements, but the $199 price point in the US truly is a milestone that should lead to much higher sales. The improvements to iTunes and the App Store look promising as well, and I'm especially intrigued by Apple's effort to make paid apps more prominent. More on that in a future post.

But the thing that surprised me the most about Apple's announcement wasn't the features of the new products, or the absence of a tablet or an iPhone Lite. It was something Steve Jobs said when he talked about the video camera in the nano:

"We've seen video explode in the last few years," he said, showing a picture of a Flip video camera. "Here's one, a very popular one, four gigabytes of memory, $149, and this market has really exploded, and we want to get in on this."

Think about that for a minute. "There's a big new market, and we want in." Not, "we're creating something new" or "we can vastly improve this category." Just, "we want a cut."

It sounds like something Don Corleone would say. Or Steve Ballmer. But it's not what I expected from Apple.

Now, it's logical for Apple to put video cameras into iPods. A friend of mine worked at one of the companies producing cameras-on-a-chip, and he's passionate about the potential for building vision into every consumer product. It's not just an imaging issue; when the device can see the user, you can create all sorts of interesting gesture-based controls that don't require you to ever even touch the device. Instead of point and click, the interface is just...point.

So it's been inevitable that video cameras would eventually be built into things like the nano. For Pure Digital, the makers of the Flip, this ought to be a tough but normal competitive challenge. The first step is to make sure your camera works better than theirs (check). Next, since music players are becoming cameras, you might want to build a camera that can also play music.

But that's where the situation becomes abnormal. Because even though Pure Digital was recently purchased by Cisco, giving it almost limitless financial resources, it's more or less impossible for its products to become equivalent to the iPods as music players. Not because they can't play music, but because they aren't allowed to seamlessly sync with the iTunes music application.

The issue of access to iTunes has already been simmering in the background between Apple and Palm, with Palm engineering the Pre to access the full functionality of iTunes, Apple blocking that access, and Palm breaking back in. To date I've viewed it as kind of an amusing sideshow, and I didn't really care who won. I figured the folks at Palm had plenty of time in the past to build their own music management ecosystem, but they (myself included) didn't bother, so there wasn't any particular moral reason why they should have access to Apple's system.


Apple the predator

The situation with Pure Digital is vastly different, in my opinion. Pure Digital pioneered the market for simple video cameras. It identified an opportunity no one else had seen, and built that market from scratch. In a declining economy, it created new jobs and new wealth, and made millions of consumers happy. It's incredibly difficult to get a new hardware startup funded in Silicon Valley, let alone make it successful. For the good of the economy, we ought to be encouraging more companies like Pure Digital to exist.

But there's no way for a small startup like that to also create a whole music ecosystem equivalent to iTunes. Yes, third party products can access iTunes music. But not as seamlessly as Apple's own products, and as we've seen over and over in the mobile market, small differences in usability can make a big difference in sales. So Apple gets a unique advantage in the video camera market not because it makes a better camera, but because it can connect its camera more easily to a proprietary music ecosystem.

In other words, iTunes is no longer just a tool for Apple to defend its iPod sales; it's now a tool to help Apple take over new markets.

In the legal system they call this sort of thing "tying," and it is sometimes illegal. For decades, Apple complained that Microsoft competed unfairly by tying its products together -- Office works best with Windows, Microsoft's file formats are often proprietary so you can't easily create a substitute for their apps, and so on. I was heavily involved in the Apple-Microsoft lawsuits when I worked at Apple in the 1990s, so I know how passionately we believed that Microsoft's tactics were not just unethical, but also harmful to computer users and the overall economy.

So it's very disappointing to see Apple using tactics it once bitterly denounced, and declaring that it's decided to take over a market because "we want to get in." If Apple can use iTunes as a weapon against Pure Digital and Palm, what's to stop it from rolling up every new category of mobile entertainment product? Where's the incentive for other companies to invest?

I saw first-hand the stifling effect that Microsoft and Intel's duopoly control had on personal computer innovation. PC hardware companies learned not to bother with new features, because Microsoft and Intel would insist that anything new they created be made available to every other cloner. And software investments were restrained by the belief that Microsoft would use its leverage to take over any new application category that was developed.


Good fences make good neighbors

There's a danger that Apple's behavior will have the same chilling effect in mobile electronics. So I believe Apple should allow any device to sync with iTunes content, the same as an iPod. But not because it's morally right or even because it's legally required, but because it's the best thing to do for Apple. Here's why:

The two biggest threats to a very successful company are complacency and consistency. Complacency is more common -- a company that's very successful starts to relax and loses the hunger and drive that made it a winner. I think we can safely assume that won't happen to Apple as long as Steve is around. But the second risk, consistency, is more insidious -- behavior that's appropriate and accepted for a spunky startup gets punished when a big company does it.

This is what tripped up Microsoft. The same aggressiveness that served it well against IBM got it a series of lawsuits and intense government scrutiny a decade later. Even though Microsoft eventually won those suits, its execs were distracted for years, and it was forced to dramatically change its behavior. It has never been the same company since. I think Microsoft would have been much better off had it proactively adjusted its own behavior just enough to pre-empt legal action.

That's where Apple is today. It has to realize that it's no longer the underdog. It's the dominant company in mobile entertainment, and the fastest-growing major firm in mobile phones. It's already under a lot of legal scrutiny for the way it manages the iPhone App Store. If it also leverages iTunes to take out small competitors, and especially if it's dumb enough to say things like "we want in," it will guarantee unfriendly attention from government regulators -- a group of people who actually have more power to hurt Apple than do most of its competitors.

The Obama administration in the US is making noises about enforcing competition law more vigorously, and look at how the EU is picking on details in the Oracle-Sun merger, allegedly to protect local companies (link). If they'll do all that to help SAP and Bull, what will they do to protect Nokia?

Apple, you don't need the special connection with iTunes to keep on winning. You've already proven that you're much better at systems design than almost any other company on Earth. The huge iPhone apps base is exclusive to you, and that won't change. By opening up iTunes, you take away an easy excuse for regulators to pick apart your business, a process that would be distracting, expensive, and could result in much more dramatic restrictions on your actions.

Ease up a little on the gas pedal, Steve. It's the best way to keep moving fast.

Four questions about the Microsoft-Nokia alliance

The Microsoft-Nokia alliance turned out to be a lot more interesting than the pre-announcement rumors made it out to be. Rather than just a bundling deal for mobile Office, the press release says they'll also be co-developing "a range of new user experiences" for Nokia phones, aimed at enterprises. Those will include mobile Office, enterprise IM and conferencing, access to portals built on SharePoint, and device management.

Of those items, the IM and conferencing ideas sound the most promising to me. Office, as I explained in my last post, is not much of a purchase-driver on mobile phones. And I think Microsoft would have needed to provide Nokia compatibility in its mobile portal and device management products anyway.

I understand the logic behind the alliance. Nokia has never been able to get much traction for its e-series business phones, and Microsoft hasn't been able to kick RIM out of enterprise. So if they get together, maybe they can make progress. But it's easy to make a sweeping corporate alliance announcement, and very hard to make it actually work, especially when the partners are as big and high-ego as Microsoft and Nokia. This alliance will live or die based on execution, and on a lot of details that we don't know about yet.

Here are four questions I'd love to see answered:


What specifically are those "new user experiences"?

If Nokia and Microsoft can come up with some truly useful functionality that RIM can't copy, they might be able to win share. But the emphasis in the press release on enterprise mobility worries me. The core users for RIM are communication-hungry professionals. If you want to eat away at RIM's base, you need to excite those communicator users, and I'm not sure if either company has the right ideas to do that. As Microsoft has already proven, pleasing IT managers won't drive a ton of mobile phone purchases.


Will Microsoft really follow through?

Microsoft has been hinting for the last decade that it was willing to decouple mobile Office from the operating system, but it never had the courage to follow through. Now it has announced something that sounds pretty definitive, but the real test will be whether it puts its best engineers on the Nokia products. If Microsoft assigns its C players to the alliance, or tries to make its Nokia products inferior to the Windows Mobile versions, the alliance won't go anywhere interesting.


What does this do to Microsoft's relationships with other handset companies?

Imagine for a moment that you are the CEO of Samsung. Actually, imagine that for several moments. You aren't exclusive with Microsoft, but you've done a lot of phones with Windows Mobile on them. Now all of a sudden Microsoft makes a deal with a company that you think of as the Antichrist.

How do you feel about that?

I can tell you that Samsung is not the most trusting and nurturing company to do business with even in the best of times. So I think you make two phone calls. The first is to Steve Ballmer, asking very pointedly if you can get the same software as Nokia, on the same terms, at the same time. If you don't like the answer to that question, your next call is to Google, regarding increasing your range of Android phones.

Maybe the reality is that Microsoft has given up on Windows Mobile and doesn't care what Samsung does. But that itself would be interesting news.

I would love to know how those phone calls went today.


What does RIM do about this?

It has been putting a lot of effort into Apple-competitive features like multimedia and a software store. Does it have enough bandwidth to also fight Nokia-Microsoft? What happens to its core business if Microsoft and Nokia do come up with some cool functions that RIM doesn't have? Are there any partners that could be a counterweight to Microsoft and Nokia? If I'm working at RIM, I start to think about alliances with companies like Oracle and SAP. And I wonder if Google is interested in doing some enterprise work together.

Nokia and Microsoft, sittin' in a tree...

Multiple sources are reporting that Nokia is hedging its bets on mobile phone software:

--The New York Times says Microsoft and Nokia will announce Wednesday that Microsoft is porting Office to Nokia's Symbian S60 phones (link).
--TechCrunch, quoting the Financial Times in Germany, claims Nokia is planning to dump Symbian in favor of its Maemo Linux operating system (link).
--Om Malik says he asked Nokia about it, and the company denied plans to dump Symbian. But the company also said, "recognizing that the value we bring to the consumer is increasingly represented through software, there is logically not just one software environment that fits all consumer and market needs." In other words, we have an open marriage with Symbian (link).


In one sense, this is absolutely not news for Nokia. It has been playing the field for years, trying to prevent any single company from gaining control over mobile software (and thereby imposing a standard on Nokia). The change is that in the past, most of that energy was aimed against Microsoft.

Microsoft too seems to be bending its standards. With the exception of the Mac, Microsoft has been extremely reluctant to license Office for other operating systems. In the past, if Nokia wanted Office, it would have been expected to license Windows Mobile.

But now both companies feel threatened by Apple and Google, and all of a sudden that ugly person across the dance floor looks a lot cuter.


The real question that no one seems to be asking is whether most customers will care about any of this stuff. Most Nokia smartphone users are blissfully unaware that their phones have an operating system, let alone whether it's Symbian or Maemo. They just want the phone to work well.

And speaking as a former Palm guy who dealt with the mobile market for years, putting Microsoft Office on a smartphone is like putting wings on a giraffe -- it may get you some attention, but it's not very practical.

Don't get me wrong, I like and admire QuickOffice, which is probably the leading Office-equivalent app in the mobile space today. It's a cool product, but for most people the screens of smartphones are too small for serious spreadsheet and word processing activity. It works, but it's awkward and produces eyestrain. Most people who have a serious need for Office on the go will just carry a netbook.

So Nokia and Microsoft will both get some nice publicity, but the announcements mean very little to the average user. What both Microsoft and Nokia need to do is create compelling new mobile functionality that's better than the stuff being produced by Apple and RIM. Until they do that, all the strategic alliances in the world won't make a significant difference.

=====

Update: The announcement this morning was more subtle and perhaps far-reaching than what was reported yesterday. I think the strategic situation is still the same as what I described above, but there might be more value for users than I expected. More thoughts after I have a chance to digest the announcement.

Google Chrome OS: Opening a vein in Redmond

I need to study it some more, but here's my first take on Google's Chrome OS announcement (link). I think what they're really saying is:

"We want to bleed Microsoft to death, and we've decided that the best way to do that is give away equivalents to their products. By creating a free OS for netbooks (the only part of the PC market that's really growing) we hope to force Microsoft into a Clayton Christensen-style dilemma. It can either cut the price of Windows in order to compete with us, or it can gradually surrender OS share.

"By using Chrome to set a standard for web applications, we also help to make the Windows APIs less relevant. So even if Microsoft manages to hold share in PCs, its OS franchise becomes less and less meaningful over time."


That helps to explain why Google would push both Chrome OS and Android at the same time. If you were serious about running a coherent OS program in its own right, you'd rationalize those two initiatives. But if your top priority is to commoditize Microsoft, then you don't mind pushing out a couple of overlapping initiatives. The more free options, the more pain caused.

The next question we should all ask is whether Chrome-based netbooks will take off. I'm skeptical, especially in the near term. Most people buy netbooks to run PC applications. Linux already failed in the netbook market because it can't run PC apps, and Chrome OS won't run PC applications either.

But in the meantime, Google can put more price pressure on Microsoft, and maybe that's the real point.

Two videos for mobile app developers

Just a quick note to let you know about a couple of informational resources for mobile developers.

--Motorola is starting the online publicity for its upcoming Android-based smartphones. They did a brief interview with me, asking how mobile app developers can distribute their software (link).

--Elia Freedman of Infinity Softworks did a great presentation on his experiences selling through the iPhone App Store, and the lessons he has learned. It's well worth watching the video here.

It's best to watch both of these, and think about them, before you develop your mobile app.

Symbian: Evolving toward open

It's fascinating to watch the evolution as Symbian remakes itself from a traditional OS company into an open-source foundation. They've made enormous organizational changes (most of the management team is new), but the biggest change of all seems to be in mindset. A nonprofit foundation has a very different set of motivations and priorities than an OS corporation does. I get the feeling that the Symbian folks are still figuring out what that means. It's an interesting case study, but also a good example for companies looking to work with open source.

Symbian recently held a dinner with developers and bloggers in Silicon Valley, and I got to see some of those differences in action.

The first difference was the dinner itself. About six months ago, Symbian and Nokia held a conference and blogger dinner in San Francisco (link). It was interesting but pretty standard -- a day of presentations, followed by dinner at a large, long table at which Symbian and Nokia employees talked to us about what they're doing and how excited they are. The emphasis was on them informing us.

The recent dinner was structured very differently. The attendees were mostly developers rather than bloggers, and we were seated at smaller, circular tables that made conversation easier. They talked about their plans at the start, but most of the evening was devoted to asking our opinions, and they had a note-taker at each table. This had the effect of not just collecting feedback from us, but forcing us to notice that they were listening. That's important to any company, but it is critical to a nonprofit foundation that relies on others to do its OS programming. And it's essential for a company like Symbian, which has been ignored by most Silicon Valley developers.

So that's the first lesson about open source. The task of marketing is no longer to convince people how smart you are, it's to convince people how wonderful you are to work with. Instead of you as a performer and developers as the audience, the situation is flipped -- the developers are the center of attention and you're their most ardent fan.

It's an interesting contrast to Apple's relationship with developers, isn't it? It'll be fun to see how this evolves over time.

Here are my notes on the subjects Symbian discussed with us, along with some comments from me:


It takes time

Symbian said its goal is to have a lot of developers on the platform and making money, but that can't be achieved in three months. "In three years time," is what I wrote in my notes. That is simultaneously very honest and a little scary. It's honest because a foundation with its limited resources, working through phone companies with 24-month release cycles, simply can't make anything happen quickly. It's scary because competitors like Apple and RIM have so much momentum, and can act quickly. Still, in the current overused catchphrase of sports broadcasting, it is what it is. An open-source company, based on trust, simply cannot afford to risk that trust by hyping or overpromising.

Speaking of Apple and RIM, Symbian made clear that it considers its adversaries to be single-company ecosystems like Apple, RIM, and Microsoft. I didn't think to ask if Nokia's Ovi fits in that category, but that probably wouldn't have been a polite question anyway. Symbian also took some swipes at Google, citing the "lock in" deals they have supposedly made with some operators.

You get the feeling that Symbian is intensely annoyed by Google. It's one thing for a mobile phone newcomer like Apple to create a successful device; it's quite another for an Internet company to step into the OS business and take away Motorola as a Symbian licensee. I think one of Symbian's arguments against Android is going to be that Symbian is more properly and thoroughly open.

The question is whether anyone cares about that. Although the details of open source governance are intensely important to the community of free software advocates, I think that for most developers and handset companies the only "open" that they care about translates as, "open to me making a lot of money without someone else getting in the way." Thus the success of the App Store, even though Apple is one of the most proprietary companies in computing. Symbian's measure of success with developers will be whether it can help them get rich -- and I think the company knows that.


Licensees and devices

One step in helping developers make money is to get more devices with Symbian OS on them. Symbian said phones are coming from Chinese network equipment conglomerates Huawei and ZTE. They also said non-phone devices are in the works.

Licensees will be especially important if Nokia, as rumored, creates a line of phones based on its Maemo Linux platform. Lately some industry people I trust have talked about those phones as a sure thing rather than speculation, and analyst Richard Windsor is predicting big challenges for Symbian as a result:

"It seems that the clock is ticking for Symbian as technological limitations could lead to it being replaced in some high-end devices.... I suspect that the reality is that Symbian is not good enough for some of the functionality Nokia has planned over the medium term leaving Nokia with no choice but to move on."
Source: Richard Windsor, Industry Specialist, Nomura Securities

David Wood at Symbian responded that people should view Maemo as just Nokia's insurance in case something goes wrong with Symbian (link). But the point remains that Nokia is Symbian's main backer today. That is a strength, but also a big vulnerability. If Symbian wants developers to invest in it, I think it needs to demonstrate the ability to attract a more diverse set of strong supporters.


App Store envy

Another way to help developers is to, well, help them directly. Symbian said it's planning something tentatively called "Symbian Arena," in which it will select 100 Symbian applications to be featured in the application stores on Symbian phones. Symbian will promote the applications and perform other functions equivalent to a book publisher, including possibly giving the app author an advance on royalties.

The first five applications will be chosen by July, and featured on at least three Symbian smartphones (the Nokia N97, and phones from Samsung and Sony Ericsson).

The most interesting aspect of the program is that Symbian said its goal is to take no cut at all from app revenue for its services. Obviously that means the program can't scale to thousands of applications -- Symbian can't afford it. They said they'd like to evolve it into a much broader program in which they would provide publishing services for thousands of apps at cost. My guess is they could push the revenue cut down to well under 10% in that case, compared to the 30% Apple takes today.
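To make the revenue-cut difference concrete, here's a quick back-of-the-envelope comparison. The app price and unit count are made up for illustration; only the 30% and 10% store cuts come from the discussion above:

```python
def developer_take(price, units, store_cut):
    """Developer proceeds after the store keeps its revenue share."""
    return price * units * (1 - store_cut)

# Hypothetical app: $2.99, 10,000 units sold
apple_style = developer_take(2.99, 10_000, 0.30)    # 30% store cut
symbian_style = developer_take(2.99, 10_000, 0.10)  # 10% store cut

print(f"30% cut: ${apple_style:,.2f}")    # roughly $20,930
print(f"10% cut: ${symbian_style:,.2f}")  # roughly $26,910
```

On these made-up numbers, the lower cut is worth almost $6,000 more to the developer on the same sales, which is why an at-cost publishing program would be a meaningful competitive lever.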

It isn't clear to me if Symbian will produce the applications store itself, or work through others, or both. If it works through other stores, those stores might take a revenue cut of their own. But still, from a developer point of view it's nice to see an OS vendor trying to lower the cost of business for creating apps.

It's been interesting to see how many of the Palm Pre reviews this week have said that the iPhone application base is the main reason to prefer an iPhone over a Pre. I'm not sure how much purchase influence apps actually have -- at Palm, we had ten times the applications of Pocket PC, but they didn't seem to do anything for our sales. (On the other hand, Palm never had the wisdom and courage to advertise its apps base the way Apple has.)

--"Compared to the iPhone, the real missing pieces are those thousands of applications available on the App Store." Wired
--"Developer courting still seems like an area where Palm needs work. They've got a great OS to work with, but they have yet to really extend a hand to a wide selection of developers or help explain how working in webOS will be beneficial to their business. The platform is nothing without the support of creative and active partners." Engadget
--"The Pre's biggest disadvantage is its app store, the App Catalog. At launch, it has only about a dozen apps, compared with over 40,000 for the iPhone, and thousands each for the G1 and the modern BlackBerry models....It is thoughtfully designed, works well and could give the iPhone and BlackBerry strong competition -- but only if it fixes its app store and can attract third-party developers." Walt Mossberg

Anyway, if applications are the new competitive frontier between smartphones, mobile OS vendors should be competing to see who can do the most to improve life for developers. This is another area where Symbian's motives, as a foundation, differ from a traditional OS company's. If you're trying to make money from an OS, harvesting some revenue from developers makes sense. But as a nonprofit foundation, draining the revenue streams of your competitors is one of your best competitive weapons. Symbian has little reason to try to make a profit from developers, and a lot of reasons not to.


Driving Web standards

That idea came up again when we talked about web applications for mobile. As I've said before, I think the most valuable thing that could happen for mobile developers would be the creation of a universal runtime layer for mobile web apps -- software that would let them write an app once, host it online, and run it unmodified on any mobile OS. No commercial OS companies want to support that because it would commoditize their businesses and drain their revenues. But if Symbian's primary weapon is to remove revenue from other OS companies, a universal Web runtime might be the best way to do it. I asked them about this, and they said they're planning to use web standards in the OS "like Pre," and said they're interested in supporting universal web runtimes.

I'm intensely interested in seeing how the runtime situation develops. I think Symbian and Google are the only major mobile players with an interest in making it work, and Google so far hasn't been an effective leader in that space. I think Symbian might be able to pull it off, and become a major player in the rise of the metaplatform. But it'll take an active effort by them, such as choosing a runtime, building it into every copy of Symbian OS, and making it available for other platforms. Passive endorsement of something is not enough to make a difference.


Other tidbits

Symbian said it's going to "radically simplify" the Symbian Signed app certification program, which may be very welcome news to developers, depending on the details. Many developers today complain bitterly about the cost and inconvenience of the signing program, and unless that's fixed, the pain of signing will outweigh any of the benefits from Symbian Arena.

The Qt software layer that Nokia bought as part of its Trolltech acquisition will be built into Symbian OS in the second half of 2010. I had been wondering if it would be an option or a standard part of the OS; apparently it'll be a standard.

Symbian plans to bring its developer conference to San Francisco in 2010, after which it will rotate to various locations around the world. This is part of an effort to increase Symbian's visibility in the US market. The company is creating a large office here, including two members of its exec staff. That makes sense for recruiting web developers, but it will be hard for the company to have a big impact in the US unless it gets a licensee who can market effectively here. In that vein, it must have been frustrating for everyone involved when Nokia announced the shipment of the N97 and it came in a distant third in coverage in the US (after the Palm Pre and the iPhone rumors).


What it all means

There are a lot of things that could kill the Symbian experiment:
--Nokia could decommit from the OS (or just waver long enough that developers lose faith).
--Symbian licensees could fail to produce interesting devices that keep pace with the Apples, RIMs, and Palms of the world.
--Android could eat up all the attention of open source developers, leaving Symbian to wither technologically.
--The market might evolve faster than a foundation yoked to handset companies can adjust.

But still, the Symbian Foundation is worth watching. It has a different set of goals than every other mobile OS company out there, goals that potentially can align more closely with the interests of third party developers. It's still up to Symbian to deliver on that potential, but the company has an opportunity to challenge the mobile market in ways that it couldn't as a traditional company.

-----

Prof. Joel West of San Jose State was also at the Symbian meeting and posted some interesting comments about it. You can read them here.

Full disclosure: My employer, Rubicon Consulting, did a consulting project for Symbian a year ago. None of the analysis conducted in that project was used in this post. We currently have no ongoing, or planned, business relationship with Symbian.

A quick history of software platforms: How we got here, and where we're going

Intuit and Stanford recently asked me to give talks on computer platforms and what makes them successful. (By platforms I mean software with APIs that third party developers can write apps on top of; Windows and Macintosh are both platforms, as is Java.) Platforms are a hot topic in Silicon Valley these days. The success of the iPhone app store in mobile, and Facebook on the web, have forcefully reminded people that you can grow a tech business more quickly if you get third party developers to help you. Almost every tech company I work with is trying to expose some sort of API or platform offering in its products.

To explain how software platforms work today, I thought it'd be good to start with their history. But I wasn't sure about many of the details myself, so I ended up doing some research. The information was surprisingly hard to find, and also pretty controversial -- for every person who claims to be the first to have done something in computing, there's someone else who begs to differ. I did my best to sort through all the claims. The picture that developed makes an interesting story, but also has some very important lessons about where the industry might go next.

Fair warning: this is a long post. But I hope you'll feel that the destination is worth the trip.

Here's what I found:


Hardware memory, software amnesia

The computer industry is often criticized for its failure to remember its own history. Supposedly we're so focused on the new thing that we forget what's come before.

In reality, though, we're actually fairly good at remembering a lot of our hardware history (for example, Apple fans are celebrating the 25th anniversary of the Macintosh this year). There's passionate controversy over what was the first computer -- was it Konrad Zuse's Z1 (link), Tommy Flowers' Colossus (link), or one of several other candidates? The answer depends in part on your definition of the word "computer." But it's a well-documented disagreement, and you can find a lot of information about it online, including a cool timeline at the Computer History Museum (link).

The machine most commonly cited as the first fully programmable general-purpose electronic computer was ENIAC, the Electronic Numerical Integrator And Computer. It was completed in 1946 (link).


Here's ENIAC (well, part of it, anyway)

You can find lots of histories of ENIAC online (link). There are multiple simulators of it on the web (link), and the engineering school at the University of Pennsylvania even has an ENIAC museum online (link).

But when it comes to software, our memories are much hazier. For example, I doubt there will be a 25th anniversary celebration in 2010 for Aldus PageMaker, the program that did more than any other to make Macintosh successful. And about a day after I post this article -- May 12, 2009 -- will be the 30th anniversary of the introduction of VisiCalc, the first spreadsheet program. Anyone planning a parade?

Today we take it for granted that you can use a computer for a variety of business or personal tasks, but it didn't always work that way. ENIAC and Colossus were government-funded tools for solving military and scientific problems. The US Army funded ENIAC, and in addition to calculating artillery tables, it was also used for tasks like weather prediction, wind tunnel design, and atomic energy calculations.


These nice ladies are programming ENIAC, by moving cables around.

How did we end up using computers for other purposes? The UPenn site says only, "it is recalled that no electronic computers were being applied to commercial problems until about 1951."

Yeah, "it is recalled." This is where I had to start digging. Once again there are disputes (link), but you can make a very good case that business computing started in the UK, and it involved something called a Swiss roll.


The first business computer

I had never heard of Joseph Lyons & Company, but in the 1950s they ran a chain of tea shops in the UK. I have to pause here for a second and explain what the term "tea shop" means. It's not a shop where you can buy bags of tea (which is what I assumed). Instead, it is what Americans call a coffee shop -- a fixed-menu restaurant that people would come to when they wanted to have a quick meal, snack, or meeting. The closest equivalent in the US these days is probably Denny's.

In the 1950s, Lyons had the biggest network of tea shops in the UK. It employed 30,000 people and served 150 million meals a year. The company sold 36 miles of Swiss roll a day (link).

(In case you're wondering, Swiss roll is a flat sponge cake rolled around a filling. Americans call it jelly roll. In India, it's called jam roll. In Sweden, rulltårta. In Japan, "roll cake." But in Spain, for some reason it's called brazo de gitano (gypsy's arm). Don't ask me why. [link] )


A Swiss roll made and photographed by Musical Linguist on 25 June 2006

Like every other company of its day, everything at Lyons was run on paper -- tallying 150 million receipts, calculating payroll, managing taxes, and even figuring out how many miles of Swiss roll you need to make for tomorrow's customers. All of that by hand with adding machines. It was an incredibly expensive and error-prone way of running a business, but it was the best anyone could do at the time.

When the people at Lyons first heard about these new computer thingies, they wanted one immediately to help run the business. But there wasn't any way to buy one. So they donated $5,000 (about $50k today) to Cambridge University to create a modified version of a computer that Cambridge had been working on.

The result was called LEO (Lyons Electronic Computer), and when it started regular operations on November 17, 1951, it was the world's first business computer. It occupied 5,000 square feet of floor space (about 500 square meters), and its 4k memory unit weighed half a ton because it was full of mercury. LEO's lead programmer was David Caminer, who is generally credited as either the world's first business software programmer or the first systems analyst. LEO's software let it handle -- guess what -- the same sorts of tasks we handle on business computers today: payroll, inventory, financials, and so on. It cut the time to calculate one employee's wages from eight minutes to 1.5 seconds (link).


David Caminer

Pause for a moment and think about the courage and vision it took for Lyons -- a catering company -- to build its own computer. There was no guarantee the project would succeed, and indeed it took two years, with plenty of setbacks along the way.

But LEO was eventually a big success, and Lyons spun it out as a separate computing subsidiary. Caminer went on to have a distinguished career in computing. He died in 2008, unfortunately, so we just missed our opportunity to say thanks to him. If you want to read more about LEO, Caminer co-wrote a book about it (link). Naturally, it's out of print, and the cheapest used copy when I looked it up was $75.


What is software, anyway?

One interesting aspect of LEO is that although Caminer and his team wrote software for it, that software was not available separately from the computer. That's the way the computing industry worked throughout the 1950s. For example, if you bought an IBM computer there was a set of standard IBM programs that ran on it.

In fact, the term "software" didn't even exist until it was popularized by John Tukey in 1958, more than ten years after ENIAC began operation (link). He wrote:

Today the "software" comprising the carefully planned interpretive routines, compilers, and other aspects of automative programming are at least as important to the modern electronic calculator as its "hardware" of tubes, transistors, wires, tapes and the like.

So the whole idea of software as a separate entity, a concept that we take for granted today, did not exist at the beginning of computing. The concept of making computers reprogrammable came along quite early, but it took a couple of decades for software to fully separate itself from hardware as its own distinct discipline.


John Tukey

(Naturally, there's some dispute about whether Tukey was the first to use the term "software." You can read about it here.)

Tukey was an interesting guy. He also created the term "bit," helped design the U-2 spy plane, and did a lot of other fascinating things (link).

If you want to read more about the history of software technologies, there's an essay here. And the best (and just about only) book on the history of the software industry is here.


Software as a business

Once we got the idea of software into our heads as a separate discipline, the next milestone in platform history was the creation of the first independent computer program, the first one you could buy separately from the hardware. As far as I can tell, that idea didn't just spring into being all at once; it emerged as a slow-motion avalanche over a period of 15 years.

Computer Usage Corporation, founded in 1955, is often cited as the first computer software company. It focused on custom programming services (link). Another very early custom programming company was CEIR, founded in 1954 (link). After them, a number of other custom programming firms sprang up. Sometime between 1962 and 1965, California Analysis Center, Inc. started selling a proprietary version of the Simscript programming language as a standalone product (the Computer History Museum says it was 1962 here, but CACI's own website says 1965 here). The 1962 date is the earliest I can find for any sort of independent software product. To my amazement, CACI is still selling Simscript today (link).

Several other programming languages and compilers came to market in the early 1960s, but there's disagreement over how much they actually sold, or whether they were really managed as independent products (link). A file management program called Mark IV, by Informatics, is credited as the first independent software product to generate more than a million dollars in revenue. It was published in 1967 (link). That year also saw the first publication of the International Computer Programs Quarterly, the first commercial software catalog, which helped small software companies get to market at low cost (link). Think of it as a paper version of the iPhone App Store.

But if you want to find the first snowball that started the commercial software avalanche, I think it was tossed in 1964 when a contract programming company called Applied Data Research was jerked around on a business deal by RCA.


The first commercial software product

In the mid-1960s, a cottage industry of contract programming firms did custom software development. When a new mainframe was in the works, its manufacturer would sometimes hire these firms to create software to offer with it. Computer owners could also hire those development houses to create custom software for them. The idea of off-the-shelf software didn't exist; you got it for free with your computer, had it written for you, or developed it yourself.

RCA, which at the time was a promising mainframe company, approached ADR asking them to create a program to draw flow charts of computer programs (the flow charts were used for documentation and debugging). That may not sound like a big deal today, but in the early days of computing the industry didn't have the sort of automated debugging tools it has today. A flowchart was very useful to help maintain and document a custom software program after the project was finished.

So ADR created a proposal and submitted it to RCA. Fortunately for the computer industry, RCA turned it down, as did every other mainframe company. But ADR believed in its concept, so it decided on its own to develop the product anyway. It spent over $5,000 (about $35k in today's money) and half a man-year on the project.

But RCA was not impressed. Once again they said no.

Now ADR had a sunk cost. In business school they teach you to walk away from those, but in real life companies hate to admit they made a mistake. So ADR decided to try marketing the software on its own. They named it Autoflow, and wrote a letter to all 100 RCA mainframe owners offering them the program for $2,400 on a three-year lease. It was three milestones in one: the first commercial software program, the first subscription software, and the first junk mail urging you to buy a software program.

ADR sold two licenses.

That may not sound like much, but somebody at ADR did the math -- if we sold two copies to 100 RCA customers, what would happen if we offered our software to IBM's much larger installed base? So ADR ported Autoflow to IBM mainframes. In the second half of the 1960s it sold more than a thousand licenses of Autoflow, and created a portfolio of other independent software programs for IBM systems.
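ADR's back-of-the-envelope reasoning is easy to reconstruct. The 2-out-of-100 hit rate comes from the post; the IBM installed-base figure below is purely illustrative, since the post doesn't give one:

```python
# Back-of-the-envelope version of ADR's reasoning: 2 sales out of 100
# RCA mainframe owners is a 2% hit rate. The IBM installed-base number
# is a made-up illustration, not a historical figure.

rca_owners = 100
rca_sales = 2
hit_rate = rca_sales / rca_owners  # 0.02

ibm_installed_base = 10_000  # hypothetical, for illustration only

projected_sales = hit_rate * ibm_installed_base
print(projected_sales)  # 200.0 at the same hit rate
```

As it turned out, ADR's thousand-plus licenses beat even this kind of naive extrapolation, which is the whole point: the size of the addressable base, not the hit rate, was the lever.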

IBM was not pleased. Nobody was supposed to mess with the IBM customer base; that might weaken IBM's control over its customers. The company created its own flow charting software, which it gave away for free to its customers, and started to copy ADR's other programs as well. This became a huge competitive problem for ADR -- even if its software worked better than IBM's, it was hard to compete with free. IBM was also able to freeze the market for ADR by promising that it would in the future offer a free version of something ADR was currently selling. Customers would delay ADR purchases until they could evaluate the IBM product.

ADR and other fledgling software companies complained to the US government. In 1969, the Justice Department, ADR, and several others filed antitrust suits against IBM. ADR collected $2 million in penalties, and IBM agreed to stop bundling free software with its computers.

And thus the independent software industry was born.



Martin Goetz (above) was the product manager of Autoflow. I wrote to him and asked for his take on which was the first software product. Here's his reply:

Autoflow was recognized as the first software product to be commercially marketed. Starting in 1964, ADR licensed its products nationally and through ads in all the major computer publications, started investing in the development of other products and became known as a software products company.

I think that's the right way to look at it: Autoflow was the first software product to be commercially marketed, which is why I call it the snowball that started the avalanche. Informatics' Mark IV also played an important role because its financial success validated the market -- reportedly it was the top-selling software product for the next 15 years (link).

Goetz says Mike Guzik was the lead programmer on Autoflow (link), and he cites ADR President Dick Jones as a strong supporter of the idea (link). I think we should credit Goetz and Guzik as the creators of the first commercial software application, although neither of them has an entry in Wikipedia.

Incidentally, Goetz also holds the first software patent:


Computerworld, June 1968

That has to be one of the most visionary headlines in the history of the computer press: "Full Implications Are Not Yet Known." Here we are 41 years later, and it's still accurate.

Goetz was named the "Father of Third-Party Software" by mainframezone.com (link) and there's a very interesting interview with him here. You can find a much longer interview here and his memoirs are here.

Advocates of open source software will probably view Goetz as a bad guy, since he helped make software a for-profit industry. But he has some pretty strong opinions about the poor quality and slow pace of innovation in software when it was only free. In particular, he says that a completely free software industry was not responsive to the needs of users (link).

An amusing anecdote complaining about Goetz, apparently written by a former ADR employee, is here. I can't verify the anecdote, but if nothing else it shows that ADR was also a pioneer in the practice of engineers making catty comments about product managers.

(I should add that there are some different interpretations of the effect of IBM's unbundling decision. One is in a very interesting interview with the creator of the ICP catalog here.)


The rise of the third party application platform

The next evolutionary step was for computer companies to see their products as development platforms -- for them to actively encourage software developers rather than viewing them as a nuisance. I haven't been able to figure out when in the 1970s this change in perspective happened (please post a comment if you know the history). It may have happened in the era of minicomputers, or it may have been a PC thing. Dan Bricklin and Bob Frankston's VisiCalc, the world's first spreadsheet program, certainly played a role when it came to market for the Apple II in 1979. It was so revolutionary that reviewers at the time didn't know how to describe it. They just said it was a way to make the computer do things you want it to do, without writing your own program. VisiCalc established the idea of the "killer app," a software program so popular that it drove demand for the underlying hardware.

"Visicalc could some day become the software tail that wags (and sells) the personal computer dog."
--Ben Rosen, co-founder of Compaq, reviewing VisiCalc when he was still an analyst with Morgan Stanley. Nice call, Ben. (Link)

By the early 1980s, software developers were being actively courted by computer manufacturers. Apple had a developer recruitment team for the Macintosh, and apparently coined the term "software evangelism." That's where Guy Kawasaki cut his eyeteeth, although he wasn't the first evangelist. As he puts it:

"Mike Boich started evangelism and hired me, and Alain Rossman worked with me as a software evangelist. Essentially, Mike started evangelism, Alain did the work, and I took the credit." (Link)

I happen to know that Guy did a bit of the work too.

The other critical change in the 1980s was the separation of the OS from the underlying hardware. Most of the new PC software platforms had been tied to hardware, just like traditional computers. For example, you had to buy a Macintosh in order to run Mac software, or an Amiga in order to use Amiga apps. But then IBM created the PC, and through a series of business blunders allowed Microsoft to separately sell the DOS operating system used on its hardware. IBM's brand and marketing power established the PC as a standard, but the company enabled Microsoft and Intel to create a "clone" hardware market, and eventually drive IBM out of the PC business.

So now there were three layers in the industry -- the application was independent of the OS, and the leading OS was independent of the hardware.


The network strikes back

That's where the situation sat until the late 1990s, when Java and web browsers threatened to create another layer in the architecture by separating software applications from the OS. The theory was that instead of writing programs that depended on Windows, programmers could create code that worked on Java, or on the Netscape browser.

Microsoft fought back very aggressively, killing Netscape by giving away Internet Explorer, and crippling Java on the PC. Looking back, it was an impressive use of business muscle, worthy of Microsoft's tutor IBM.

But it was also a Pyrrhic victory. Microsoft's actions in the 1990s forced software innovation completely off the PC platform, because investors were afraid that new software apps would just get cannibalized by Microsoft. Instead software innovation moved onto the web, where Microsoft had virtually no control. That's one of several reasons why the next generation of software is being written as web apps.

And that's where we are today.


Where we go next

As I said at the start of the post, I think all of this history is fun in its own right. I also wanted to take this opportunity to thank some of the people who built the tech industry into the fun place it is today.

But understanding computing history is also very important because, if you look across the sweep of it from the 1940s to today, it's much easier to see where we might go next.

Here's what I think that long perspective shows us: The history of software is a history of disaggregation. First the application software gets separated from the hardware, then the OS gets separated from the hardware, and so on.

I think disaggregation is a natural outcome of the maturation of the industry, because multiple companies can move faster than a single one. At the start you need everything coordinated together to make sure the whole thing will work. But over time, no single company can pursue all of the innovation possibilities, so you get a backlog of potential creativity that can happen only if control over the architecture is broken into pieces.

For example, most of the interesting innovation in applications happened only after they were separated from the hardware.

But as the industry continues to grow, each of the pieces becomes its own stodgy monolith, and eventually another subdivision happens.

The fastest growth and the easiest innovation has generally happened at the leading edge of disaggregation, because each change creates new business opportunities.



That doesn't mean that old school companies are dead. IBM still sells mainframes, and Apple still makes PCs bundled with an OS. But to succeed in an old paradigm you have to execute extremely well, and it's much harder to grow explosively. The easiest progress is made at the leading edge.

A common thread among the people working at the leading edge of disaggregation is their excitement as they recognize the opportunities created by the change:

"There was a tremendous euphoria of success. You couldn't lose. All you needed was a group of highly technical people who could create a software product and that was it. And to some degree there was some truth to that. Because you didn't have to be good sales people. You didn't have to worry about the competition. For years I used the aphorism that we were like little boys on the beach each with our sand piles. There was plenty of sand to put in our buckets. We didn't have to edge out the other little boy to get all the sand we needed. We were limited by the size of our pail and our little shovels but not by the amount of the beach that was there or the fact that there was another little boy there with his pail."

That's Walter Bauer, cofounder of Informatics, talking about the birth of the independent software industry in the 1960s (link). But you could find similar sentiments from the people who built the first computers, or the first Mac programmers, or the first web app developers. The leading edge of disaggregation is where the action is; it's where the fun happens.

So, if you're looking to succeed in the software industry, it's extremely important to figure out what's going to get disaggregated next. Which brings us to the point of this article.


Say hello to the metaplatform

Sun's rallying cry in the 1990s was, "the network is the computer" (link). It was an excellent insight that pointed to the emerging importance of the Internet, but most of the industry misread what it meant. We looked at the architecture of the thing we knew best, the PC, and tried to map it directly to the network. So servers would replace the PC hardware, and software on those servers would replace Windows. The PC itself would be reduced to a light client, a screen connected to a wire.


What we expected

But instead of a new OS on the network replacing the OS on the PC, what we're seeing is the breakdown of the OS into component parts that live everywhere, on both the client and the server.

In other words, the OS is the next thing that gets disaggregated.


What's actually happening

People have been talking about elements of this change for years, but like the proverbial blind men feeling bits of the elephant, we've talked about individual pieces of it, with each of us assuming that the piece in front of us was the most important. So people producing software layers like Java and Flash say that they are separating the APIs on the device from the underlying OS. And the advocates of cloud computing say they're creating a software services architecture that runs on servers. But in reality we're doing both of those things, and a lot more. The OS is dissolving into a soup of resources distributed across both the network and the local device, with the application in the middle calling on both as appropriate. We need to get off the idea that the network or the client will be dominant; they're both supporting elements in something larger.

You can see this process operating in the evolution of web applications. The first web app companies tried to make applications that were entirely light client, but they didn't work particularly well -- they were slow, and their user interfaces were too limited. Web apps took off only when they adopted an approach in which the platform was split between the PC and the network -- the user interface ran locally through the browser, while back-end calculation and data storage was done on the network.
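The split described above can be sketched in a few lines. This is a toy illustration, with both halves simulated in one process; in a real web app the client half would run in the browser and the server half behind an HTTP endpoint, and all the names here are invented:

```python
# Toy sketch of the hybrid web-app split: the "client" only formats and
# displays, while computation and data storage live on the "server" side.
# In reality the call between them would be an HTTP request.

def server_compute_total(order_ids, price_table):
    """Server side: data lookup and aggregation (the heavy lifting)."""
    return sum(price_table[oid] for oid in order_ids)

def client_render(total_cents):
    """Client side: presentation only -- format the server's answer."""
    return f"Total: ${total_cents / 100:.2f}"

prices = {"a1": 1250, "b2": 399}  # stored on the server, in cents

response = server_compute_total(["a1", "b2"], prices)  # the "network call"
print(client_render(response))  # Total: $16.49
```

The design point is that neither half is dominant: the client keeps the UI responsive, the server keeps the data and the computation, and the application is the thing in the middle.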

Mobile computing reinforces the need for this sort of hybrid architecture. Wireless broadband has important limitations that make pure light client computing extremely problematic. Wireless networks are relatively slow compared to wired networks, there's high latency on them, coverage is inconsistent, heavy communication drains device batteries rapidly, bandwidth is expensive, and most importantly, total wireless bandwidth is limited. The most effective mobile applications are, and will continue to be, hybrids of local and network resources, like RIM's e-mail solution.
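The mobile hybrid pattern looks something like this in miniature. To be clear, this is an illustrative sketch of the general local-cache-plus-sync idea, not RIM's actual design, and every name in it is made up:

```python
# Illustrative local-cache-plus-sync pattern for a mobile hybrid app:
# reads are served instantly from a local cache, and writes queue up
# until the (slow, intermittent) wireless network is available.

class HybridMailbox:
    def __init__(self):
        self.local_cache = []   # readable offline, instantly
        self.pending_out = []   # queued until the network is reachable

    def receive_push(self, message):
        """Server pushed new mail; store it locally for offline reading."""
        self.local_cache.append(message)

    def send(self, message, network_up):
        """Send immediately if possible, otherwise queue for a later sync."""
        if network_up:
            return f"sent:{message}"
        self.pending_out.append(message)
        return "queued"

    def sync(self):
        """Flush the outbound queue once coverage returns."""
        sent, self.pending_out = self.pending_out, []
        return sent

box = HybridMailbox()
box.receive_push("hello")                             # arrives via push
assert box.send("reply", network_up=False) == "queued"  # no coverage
print(box.sync())  # ['reply'] -- flushed when coverage returns
```

The user never waits on the radio: the device works from local state, and the network is treated as an unreliable resource to be used opportunistically.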

Companies entering the mobile market often ask me which mobile operating systems are going to win long term. I think that's the wrong question. What we're seeing is the gradual evolution of a super-OS that includes both the network and the device.

Like software developers before the word "software" was invented, we don't have a name for this new thing, and so we have trouble talking about it. It's not just the Network or the Cloud, because those terms are usually understood not to include the software on the client computer. And it's certainly not just the local APIs on the client device.

I'm calling it the "metaplatform" because it subsumes all other platforms. No single company controls the metaplatform. Google obviously contributes a lot to it, as does Amazon Web Services, as does Microsoft. But they're only fragments of the picture. There are thousands of other contributors to the metaplatform, in areas ranging from mapping to graphics to identity.

There's still a lot of work that needs to be done on the metaplatform, especially in the mobile space. But already it's evolving faster than any single company could move it, because the work is divided across so many companies, and because there's competition driving innovation at almost every point in the architecture. Although the metaplatform isn't necessarily elegant (because it's poorly coordinated), what it lacks in beauty it more than makes up for in rate of change and versatility.


New opportunities

The metaplatform helps to solve some computing problems, but creates others. For example, a recurring problem for software in the OS era has been compatibility. Old data files, even when perfectly preserved, can become unreadable if the hardware and software that created them are no longer available. A lot of software is very dependent not just on the hardware, but on the particular version of the OS it's running on. (If you want to see that effect in action, try running a ten-year-old Windows game on a new PC. It may work, it may refuse to run at all -- or it may freeze right when you're about to defeat the boss bad guy.)

The metaplatform is helping to resolve some compatibility problems, through emulators available online. But more importantly, web apps on a PC are less vulnerable to PC-style compatibility breakdowns because PC browsers are relatively standardized, and much of the OS code the web app relies on lives on the same server as the app itself, so they are less likely to get out of sync.

But metaplatform-based software is uniquely vulnerable to a new set of problems. When a user's data is stored on a web app company's server 3,000 miles away, what happens if that company goes out of business or just decides to stop maintaining the product?

Another problem, experienced by any website that incorporates external components, is component breakage. If you've built external web services into your site, the site will break if any of those services stops working. This can happen without warning. On my own weblog, the load time suddenly became ridiculously long. It took me weeks to realize that a user-tracking service I'd once signed up for had gone out of business without telling anyone. My site stalled while it tried helplessly to connect to a tracking server that no longer existed.
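One defensive pattern against exactly this failure -- a sketch under my own naming, not a prescription -- is to race every third-party call against a deadline, so that a vanished service degrades the page gracefully instead of hanging it:

```typescript
// Hypothetical sketch: wrap a call to an external service so a dead or
// vanished service cannot stall the whole page. Promise.race settles
// with whichever finishes first: the real call or the fallback timer.
function withTimeout<T>(call: Promise<T>, ms: number, fallback: T): Promise<T> {
  const timer = new Promise<T>((resolve) =>
    setTimeout(() => resolve(fallback), ms)
  );
  return Promise.race([call, timer]);
}

// Usage: if the tracker never answers, the page moves on after 200 ms.
// withTimeout(loadTrackingScript(), 200, null);
```

Had my weblog wrapped its tracking call this way, the dead service would have cost two hundred milliseconds instead of weeks of mysteriously slow page loads.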

An old software application from the OS era has some hope of revival if you have a copy of the CD, because all the code that made up the app is together in one place. But an old, broken web app will be almost irretrievably dead, because huge chunks of its code will be missing.

Problems like these are just starting to emerge, but as the metaplatform grows and ages they'll become much more prominent. We don't have any systematic ways to deal with problems like these today -- which means they're a business opportunity for the next crop of software entrepreneurs.


What the metaplatform means to you

Much of the discussion in this post is pretty theoretical. But I think it has important practical implications. Here are a few specifics to think about:

If you're a computer user (and if you're reading this, you must be), keep in mind that the most interesting new software innovations are likely to come from companies that consciously work the metaplatform. If you want to be at the leading edge of software innovation, you should keep yourself open to experimenting with new web applications and plug-ins, and make sure your browser doesn't artificially cut you off from some technologies. This is especially true for mobile devices. The iPhone today gives (in my opinion) the best overall mobile browsing and app discovery experience, but you pay a serious price for it -- you're cut off from some web technologies (Flash, Java), and your choice of applications is limited by the Apple app police. That price is worth paying today, but in the future I hope there will be mobile devices that are as satisfying as the iPhone but less controlled. I'm sure that will happen over time -- but "over time" can sometimes mean a long time. You can help the process along with what you buy and the feedback you give to device manufacturers.

Are you working at an OS company? If so, you probably measure success by the number of devices your software controls. You need to rethink that viewpoint. The OS is going to be less and less of a technology control point in the future. It will become commodity plumbing underneath the metaplatform, limiting your ability to charge a lot of money for it. So at a minimum, you need to plan for cost control.

But you should also be asking if plumbing is the right place for your company's creativity in the long term. There will be much more profit opportunity in contributing to the metaplatform by creating APIs and developer functionality that can be used across different operating systems. OS companies have many of the assets needed to build those components of the metaplatform. A successful OS can be a great launching point for technologies that run across platforms, because you already have a big installed base that you can use to jump-start the technology's adoption.

Are you at an application company? Many successful app vendors are trying to create APIs that will enable other developers to extend their products. This is the right idea, but the implementation is often off-target. Many of the app companies I talk to are trying to make their APIs into the business equivalent of an operating system, with developers coming to them and living entirely within their private ecosystem. A warning sign is when a company uses a phrase like, "(insert company name) developer network" to describe its offering.

The wave of the future is not turning an application inward into its own little walled garden; it's opening the application outward so it can be mixed and matched with other functionality in the metaplatform. If you have the best drawing program in the industry, you should be asking how you can also become the best drawing module in the metaplatform. Get used to being a component in addition to a standalone product. You lose some identity in the process, but gain greater opportunities to grow.

And besides, if you don't do it, you'll be vulnerable to someone else doing it and taking your place.

If you're a computing student, or a computing veteran looking to create a new product, think about what role you can play in the metaplatform, and what customer problems you can solve with this new tool. There will be big market openings both in products for users and companies, and in infrastructure for other developers in the ecosystem (billing, rights management, security, etc.).

As in previous generations of software, the answers are not immediately obvious, and the people who figure them out first will have huge opportunities to do something impactful. Like Caminer, Goetz, Bauer, Bricklin, and Frankston, you're on an enormous beach with a trowel and bucket, and you have a chance to shape the next generation of computing.

Have fun.

====

I'd like to thank Eugene Miya of NASA Ames and Martin Goetz for helping with the research that contributed to this article. They're not responsible for any errors I made, but they definitely corrected some.

I'm sure there are folks out there who have additional information on the history I wrote about here. If you have anything to add (or correct) please post a comment.