"Software as a service" misses the point

At the end of October, Microsoft's Ray Ozzie and Bill Gates wrote internal memos announcing that Microsoft must pursue software services. The memos were leaked to the public, I believe intentionally. They drove enormous press coverage of Microsoft's plans, and of the services business model in general.

Most of the coverage focused on two aspects of software as services: downloading software on demand rather than pre-installing it; and paying for it through advertising rather than retail purchase.

Here are two examples of the coverage. The first is from The Economist:

"At heart, said Mr Ozzie, Web 2.0 is about 'services' (ranging from today's web-based e-mail to tomorrow's web-based word processor) delivered over the web without the need for users to install complicated software on their own computers. With a respectful nod to Google, the world's most popular search engine and Microsoft's arch-rival, Mr Ozzie reminded his colleagues that such services will tend to be free—ie, financed by targeted online advertising as opposed to traditional software-licence fees."

Meanwhile, the New York Times wrote, "if Microsoft shrewdly devises, for example, online versions of its Office products, supported by advertising or subscription fees, it may be a big winner in Internet Round 2."

I respect the Times and love the Economist, but in this case I think they have missed the point, as have most of the other media commenting on the situation. The advertising business model is important to the Microsoft vs. Google story because ads are where Google makes a lot of revenue, and Microsoft wants that money. But the really transformative thing happening in software right now isn't the move to a services business model, it's the move to an atomized development model. The challenge extends far beyond Microsoft. I think most of today's software companies could survive a move to advertising, but the change in development model threatens to make almost everything obsolete, the same way the graphical interface wiped out most of the DOS software leaders.


The Old Dream is reborn

The idea of component software has been around for a long time. I was first seduced by it in the mid-1990s, when I was at Apple. One of the more interesting projects under development there at the time was something called OpenDoc. In typical Apple fashion, different people had differing visions of what OpenDoc was supposed to become. Some saw it as primarily a compound document architecture -- a better way to mix multiple types of content in a single document. Other folks, including me, wanted it to grow into a more generalized model for creating component software -- breaking big software programs down into a series of modules that could be mixed and matched, like Lego blocks.

For example, if you didn't like the spell-checker built into your word processor, you could buy a new one and plug it in. Don't like the way the program handles footnotes? Plug in a new module. And so on.
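
To make that concrete, here's a rough sketch (in modern Python, purely illustrative -- OpenDoc was a compound-document technology and its real API looked nothing like this) of the kind of contract the component dream implied: the host application depends only on a small interface, and any module that honors it can be plugged in or swapped out.

```python
# Purely illustrative sketch -- hypothetical names, nothing like OpenDoc's real API.

class BasicSpellChecker:
    """One interchangeable module: anything with a check(text) method will do."""
    def __init__(self, dictionary):
        self.dictionary = dictionary

    def check(self, text):
        """Return the words the checker doesn't recognize."""
        return [w for w in text.split() if w.lower() not in self.dictionary]


class WordProcessor:
    """The host application. It relies only on the check(text) contract,
    so a different spell-checker module can be plugged in without changing it."""
    def __init__(self, spell_checker):
        self.spell_checker = spell_checker

    def proofread(self, document):
        return self.spell_checker.check(document)


# Swap in any other module that honors the same contract -- the host stays untouched.
editor = WordProcessor(BasicSpellChecker({"the", "quick", "brown", "fox"}))
print(editor.proofread("the quikc brown fox"))  # ['quikc']
```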

The benefit was supposed to be much faster innovation (because pieces of an app could be revised independently), and a market structure that encouraged small developers to build on each other's work. Unfortunately, like many other things Apple did in the 1990s, OpenDoc was never fully implemented and it faded away.

But the dream of components as a better way to build software has remained. Microsoft implemented part of it in its .NET architecture -- companies can develop software using modules that are mixed and matched to create applications rapidly. But the second part of the component dream, an open marketplace for mixing and matching software modules on the fly, never happened on the desktop. So the big burst in software innovation that we wanted to drive never happened either. Until recently.

The Internet is finally bringing the old component software dream to fruition. Many of the best new Internet applications and services look like integrated products, but are actually built up of components. For example, Google Maps consists of a front end that Google created, running on top of a mapping database created by a third party and exposed over the Internet as a service. Google is in turn enabling companies to build more specialized services on top of its mapping engine.
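
Here's a rough sketch of that kind of layering (hypothetical URLs, parameters, and field names -- this is not Google's actual API). A "more specialized service" can be little more than a thin layer that calls someone else's service over HTTP and adds its own logic on top:

```python
# Hypothetical endpoints and field names -- for illustration only.
import json
import urllib.parse
import urllib.request

MAPPING_SERVICE = "https://maps.example.com/geocode"  # someone else's service


def geocode(address):
    """Call the third-party mapping service and return its JSON response."""
    url = MAPPING_SERVICE + "?" + urllib.parse.urlencode({"q": address})
    with urllib.request.urlopen(url) as response:
        return json.load(response)


def coffee_shops_near(address):
    """A more specialized service layered on top of the mapping engine:
    geocode the address with someone else's service, then add our own logic."""
    location = geocode(address)
    # ...here we'd query our own data around location["lat"] / location["lng"]...
    return [{"name": "Example Cafe", "lat": location["lat"], "lng": location["lng"]}]
```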

WordPress is my favorite blogging tool in part because of the huge array of third party plug-ins and templates for it. Worried about comment spam? There are several nice plug-ins to fight it. Want a different look for your blog? Just download a new template.

You have to be a bit technical to make it all work, but the learning curve's not steep (hey, I did it). For people who are technical, the explosion of mix and match software and tools on the web is an incredible productivity multiplier. I had lunch recently with a friend who switched from mobile software development to web services in part because he could get so much more done in the web world. To create a new service, he could download an open source version of a baseline service, make a few quick changes to it, add some modules from other developers, and have a prototype product up and running within a couple of weeks. That same sort of development in the traditional software world would have taken a large team of people and months of work.

This feeling of empowerment is enough to make a good programmer giddy. I think that accounts for some of the inflated rhetoric you see around Web 2.0 -- it's the spill-over from a lot of bright people starting to realize just how powerful their new tools really are. I think it's also why less technical analysts have a hard time understanding all the fuss over Web 2.0. The programmers are like carpenters with a shop full of shiny new drills and saws. They're picturing all the cool furniture they can build. But the rest of us say, "so what, it's a drill." We won't get it until more of the furniture is built.

The other critical factor in the rise of this new software paradigm is open source. When I was scheming about OpenDoc, I tried to figure out an elaborate financial model in which developers could be paid a few dollars a copy for each of their modules, with Apple or somebody else acting as an intermediary. It was baroque and probably impractical, but I thought it was essential because I never imagined that people might actually develop software modules for free.

Open source gets us past the whole component payment bottleneck. Instead of getting paid for each module, developers share a pool of basic tools that they can use to assemble their own projects quickly, and they focus on just getting paid for those projects. For the people who know how to work this way, the benefits far outweigh the cost of sharing some of their work.



The Rise of the Mammals

Last week I talked with Carl Zetie, a senior analyst at Forrester Research. Carl is one of the brightest analysts I know (I'd point you to his blog rather than his bio, but unfortunately Forrester doesn't let its employees blog publicly).

Carl watches the software industry very closely, and he has a great way of describing the change in paradigm. He sees the world of software development breaking into two camps:

The old paradigm: Large standard applications. This group focuses on the familiar world of APIs and operating systems, and creates standalone, integrated, feature-complete applications.

The new paradigm: Solutions that are built up out of small atomized software modules. APIs don't matter very much here because the modules communicate through metadata. This group changes standards promiscuously (standards can be swapped in and out because the changes are buffered by the metadata). Carl cited the development tool Eclipse as a great example of this world; the tool itself can be modified and adapted ad hoc.
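
Here's a toy illustration of what "communicating through metadata" buys you (my own sketch, not Carl's example and not how Eclipse actually works): if modules exchange self-describing documents instead of binding to each other's APIs, a consumer reads the fields it understands and ignores the rest, so a producer can be swapped or upgraded without rebuilding anything else.

```python
# Sketch of loose coupling via self-describing metadata -- not any real product's design.
import json


def legacy_weather_module():
    """One producer: emits a JSON document describing its output."""
    return json.dumps({"type": "weather", "version": 1, "temp_c": 21})


def newer_weather_module():
    """A drop-in replacement: richer document, same self-describing format."""
    return json.dumps({"type": "weather", "version": 2, "temp_c": 19,
                       "humidity_pct": 64, "source": "another vendor"})


def render_dashboard(document):
    """Consumer: reads the fields it understands and ignores everything else.
    It never links against either producer's API, so either can be swapped."""
    data = json.loads(document)
    if data.get("type") != "weather":
        return "unsupported document"
    return f"Temperature: {data['temp_c']} C"


print(render_dashboard(legacy_weather_module()))  # Temperature: 21 C
print(render_dashboard(newer_weather_module()))   # Temperature: 19 C
```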

I think the second group is going to displace the first group, because the second group can innovate so much faster. It'll take years to play itself out, but it's like the mice vs. the dinosaurs, only this time the mice don't need an asteroid to give them a head start.

This situation is very threatening for the established software companies. Almost all of the big ones are based on old-style development, using large teams of programmers to create ponderous software programs with every feature you could imagine. The scale of their products alone has been a huge barrier to entry -- you'd have to duplicate all the features of a PowerPoint or an Illustrator before you could even begin to attack it. Few companies can afford that sort of up-front investment.

But the component paradigm, combined with open source, turns that whole situation on its head. The heavy features of a big software program become a liability -- because the program's so complex, you have to do an incredible amount of testing anytime you add a new feature. The bigger the program becomes, the more testing you have to do. Innovation gets slower and slower. Meanwhile, the component guys can sprint ahead. Their first versions are usually buggy and incomplete, but they improve steadily over time. Because their software is more loosely coupled, they can swap modules without rewriting everything else. If one module turns out to be bad, they just toss it out and use something else.

There are drawbacks to the component approach, of course. For mission-critical applications that require absolute reliability, something composed of modules from various vendors is scary. Support can also be a problem -- when an application breaks, how do you determine which component is at fault? And it's hard (almost laughable at this point) to picture a major desktop application replaced by today's generation of online modules and services. The ones I've seen are far too primitive to displace a mainstream desktop app today.

But I think the potential is there. The online component crowd is systematically working through the problems, and if you project out their progress for five years or so, I think there will come a time when their more rapid innovation will outweigh the integration advantages of traditional monolithic software. Components are already winning in online consumer services (that's where most of the Web 2.0 crowd is feeding today), and there are some important enterprise products. Over time I think the component products will eat their way up into enterprise and desktop productivity apps.

In this context, the fuss about software you can download on the fly, and support through advertising, is a sideshow. For many classes of apps it will be faster to use locally cached software for a long time to come, and I don't know if advertising in a productivity application will ever make much sense. But I'm certain that the change in development methodology will reshape the software industry. The real game to watch isn't ad-supported services vs. packaged software, it's atomized development vs. monolithic development.


What does it all mean?

I think this has several very important implications for the industry:

The big established software companies are at risk. The new development paradigm is a horrible challenge for them, because it requires a total change in the way they create and manage their products. Historically, most computing companies haven't survived a transition of this magnitude, and much of the software industry doesn't even seem to be fully aware of what's coming. For example, I recently saw a short note in a very prominent software newsletter, regarding Ruby on Rails (an open source web application framework, one of the darlings of the Web 2.0 crowd). "It's yet another Internet scripting language," the newsletter wrote. "We don't know if it's important. Here are some links so you can decide for yourself."

I guess I have to congratulate them for writing anything, but what they did was kind of like saying, "here's a derringer pistol, Mr. Lincoln. Don't know if it's important or not, but you might want to read about it."

Some software companies are trying to react. I believe the wrenching re-organization that Adobe's putting itself through right now is in part a reaction to this change in the market. The re-org hasn't gotten much coverage -- in part because Adobe hasn't released many details, and in part because the press is obsessed with Google vs. Microsoft. But Adobe has now put Macromedia general managers in charge of most of its business units, displacing a lot of long-time Adobe veterans who are very bitter about being ousted two weeks before Christmas, despite turning in good profits. I've been getting messages from friends at Adobe who have been laid off recently, and all of them say they were pushed aside for Macromedia employees. "It's a reverse acquisition," one friend told me.

I personally think what Adobe's doing is grafting Macromedia's Internet knowledge and reflexes into a company that has been very focused on its successful packaged software franchises. It's going to be a painful integration process, but the fact that Adobe's willing to put itself through this tells you how important the change is. Better to go through agonizing change now than to lose the whole company in five years.

What does Microsoft do? In the old days, Microsoft's own extreme profitability made it straightforward for the company to win a particular market. Microsoft could spend money to bleed the competitor (for example, give away the browser vs. Netscape), while it worked behind the scenes to duplicate and then surpass the competitor's product. But the component software crowd doesn't want to take over Microsoft's revenue stream; it wants to destroy about 90% of it, and it can be very successful living off the remaining 10% or so. To co-opt their tactics, Microsoft would have to destroy most of its own revenue.

Here's a simplified example of what I mean: some of the component companies are developing competitors to the Office apps. A couple of examples are Writely and JotSpot's Tracker. Microsoft could fight them by trimming down Word and Excel into lightweight frameworks and inviting developers to extend them. The trouble is that you can't charge a traditional Word or Excel price for a basic framework; if you do, competing frameworks will beat you on price. And if you enable third parties to make the extensions, then they'll get any revenue that comes from extensions. I don't see how Microsoft could sell enough advertising on a Word service to make up the couple of hundred dollars in gross margin per user that it gets today from Office (and that it gets to collect again every time it upgrades Office).

The Ozzie memo seems to suggest that Microsoft will try to integrate across its products, to make them complete and interoperable in ways that will be very hard for the component crowd to copy. But that adds even more complexity to Microsoft's development process, which is already becoming famous for slowness. If you gather all the dinosaurs together into a herd, that doesn't stop the mice from eating their eggs.

I wonder if senior management at Microsoft sees the scenario this starkly. If so, a logical approach might be to make an all-out push to displace the Google search engine and take over all of Google's advertising business, to offset the coming loss of applications revenue. Will Microsoft suddenly offer to install free WiFi in every city in the world? Don't laugh; historically, when Microsoft felt truly threatened it was willing to take radical action. Years ago the standard assumption was that Gates and Ballmer utterly controlled Microsoft -- they held enough of the company's stock that they could ignore the other shareholders if they had to. I'm not sure if that's still true. Together the stock holdings of Gates and Ballmer have dropped to about 13% of the company. Microsoft execs hold another 14%, and Paul Allen has about 1%. Taken together, that's 28%. Is that enough to let company management make radical moves, even at the expense of short-term profits? I don't know. But I wouldn't bet against it.

The rebirth of IT? The other interesting potential impact was pointed out to me by a co-worker at Rubicon Consulting, Bruce La Fetra. Just as new software companies can become more efficient by working in the component world, companies can gain competitive advantage by aggressively using the new online services and open source components for their own in-house development. But doing so requires careful integration and support on the part of the IT staff. In other words, the atomization of software makes having a great IT department a competitive advantage.

How about that, maybe IT does matter after all.

6 comments:

Bill Lee said...

Hi Michael,
Just found your blog by reference through Chris Dunphy's.

I just wanted to make a couple of quick comments (that should be elaborated much more extensively, but I am very tired at the moment).

I have yet to see open source be the panacea that a lot of people have been touting. For lots of reasons I don't ever think it will be. I do believe that it has been and will continue to be a positive influence on the innovation we see in software, but its biggest successes have been, and will continue to be, due to individuals with a high degree of personal dedication to seeing a project through its entire lifecycle. This is rare. It is rare enough in closed development environments, let alone open source.

Component software construction is old news. What changes are the methodologies and technologies used to describe and implement them and the application architectures that contain them.

There are no substantial modern applications that are not componentized. What appears to be monolithic is more a matter of perspective... it is a matter of whether the interface definitions of the components are available.

Component models of the past and of the future share the same shortcomings. One of the biggest is the designers' lack of the skill, experience, or foresight to define generally useful interfaces. Theoretical discussions on this topic always focus on narrow (and not so narrow) scenarios in which a small set of components interact. In the real world, a variety of contentious demands make large complex systems stray far away from the ivory tower ideal.

Usually, discussions of all of these issues lack a solid understanding of software developers--their thought processes and motivations, especially in combination with the motivations of those they work for (business, technical, or otherwise). This applies to both closed and open source environments.

Let's have lunch and I can talk more in depth about why this matters so much. ;-)

Bill...

Anonymous said...

Bill Lee is quite right on two very subtle points here: that open-source projects often depend on the passion and commitment of a couple of insiders (see Ruby on Rails' David Hansson), and that the biggest key to recent development trends is understanding the motivations and preferences of developers.

I'm a lurker in the RoR world, but it's been remarkable to view it as a piece of "user-centered design" with developers as the users. UCD, as it's typically practiced, looks to understand the goals and motivations of users, and then to design products and feature behavior around them. This is rarely done with development tools or environments, although nearly every developer has a set of hand-crafted tools that serve his or her methods and needs.

RoR really did look to address a certain kind of developer's needs and motivations. (And this is another key with UCD: you build for a specific person, not for a demographic, not for a market type. You build for a specific person.)

Sam said...

Hi,

Noticed the Writely mention, came to read about us. :)

I agree wholeheartedly. This is very well put, and I linked back to you from the Writely blog.

I'll repeat myself instead of linking - I think one additional thing that's interesting is that, on the internet, the mode of integration is looser and more platform independent than it ever was on the desktop. Think of COM or OpenDoc integration versus what's being done with Flickr or Google Maps. It's much easier to do something that just... works... across all kinds of localized, Mac/Win/Linux systems. Like it or not, XML, JavaScript, JSON, AJAX and the rest are really, actually, finally, systems that can be used to reach a really broad number of users with minimal hassle.

I think this is what's really amazing about the current situation, and it's what reminds me of "Web 1.0", when it was suddenly apparent that the combination of HTML and TCP/IP was a really, really powerful global publishing and communication mechanism. As a salty old desktop developer, it's thrilling.

Michael Mace said...

>>on the internet, the mode of integration is looser and more platform independent than it ever was on the desktop

Sam, thanks for the comment, and I think you nailed something important. The granularity of the components in Web 2.0 is very different from what I've seen attempted in component software systems in the PC world. That seems to make a huge difference in speed of adoption.

The PC component systems I saw were very carefully architected, and allowed you to break things down into extremely small modules. The Be developers working at PalmSource were all over this, and knew how to design very sophisticated systems of hierarchy and inheritance so that a small code change could ripple through the whole OS. It was incredibly powerful, but also hard to create and document, and hard for a developer to learn.

It's difficult for me to picture a component system like that emerging from the open source world without the sort of intense leadership Bill Lee wrote about in his comment.

On the other hand, most of the components I've seen in the Web 2.0 world are a lot more self-sufficient; many of them are almost mini-applications. But you can still string them together to do interesting things. This sort of component architecture is probably less elegant than the desktop component systems I've seen, but far easier to implement and evolve rapidly. And I think speed of evolution is the factor that'll challenge the old-style software companies.

Mike

Walt French said...

“I don't see how Microsoft could sell enough advertising on a Word service to make up the couple of hundred dollars in gross margin per user that it gets today from Office…”

No, not for desktop software. But why not for WP7? They could be quite happy to give away the basic inter-module communication APIs, knowing that every install was on a Microsoft-licensed piece of hardware.

And of course, your former employer must have a skunkworks project investigating a resurrection of OpenDoc in some new fashion, one that'd be based on its own iLife / iWork packages, but with user-extensions.

Which, of course, is almost what the app-focused mobile OSes are all about. So far, Apple has let Android do more of it (alternate keyboards, skins, etc.), but it'd seem they don't need to do TOO much more in the way of inter-process communications before this could be the runaway hit you imply.

Michael Mace said...

>>a resurrection of OpenDoc in some new fashion

Don't toy with me, Walt. I spent too much time at Apple dreaming about what OpenDoc ought to grow into.

Next you'll have me fantasizing about the rebirth of HyperCard.