Web 3.0

Or, why Web 2.0 doesn't cut it for mobile devices

One of the hottest conversations among the Silicon Valley insider crowd is Web 2.0. A number of big companies are pushing Web 2.0-related tools, and there’s a big crop of Web 2.0 startups. You can also find a lot of talk of “Bubble 2.0” among the more cautious observers.

It’s hard to get a clear definition of what Web 2.0 actually is. Much of the discussion has centered on the social aspirations of some of the people promoting it, a topic that I’ll come back to in a future post. But when you look at Web 2.0 architecturally, in terms of what’s different about the technology, a lot of it boils down to a simple idea: thicker clients.

A traditional web service is a very thin client -- the browser displays pages sent by the server, and every significant user action goes back to the server for processing. The result, even on a high-speed connection, is online applications that suck bigtime when you start to do any significant amount of user interaction. Most of us have probably had the experience of using a Java-enabled website to do some content editing or other task. The experience often reminds me of using GEM in 1987, only GEM was a lot more responsive.

The experience isn’t just unpleasant -- it’s so bad that non-geeks are unlikely to tolerate it for long. It’s a big barrier to use of more sophisticated Web applications.

Enter Web 2.0, whose basic technical idea is to put a user interaction software layer on the client, so the user gets quick response to basic clicks and data entry. The storage and retrieval of data is conducted asynchronously in the background, so the user doesn’t have to wait for the network.

In other words, a thicker client. That makes sense to me -- for a PC.
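The thicker-client idea can be sketched in a few lines. This is a hypothetical illustration, not any particular framework's API: the `AsyncSaver` class and the injected `save_to_server` function are invented names, standing in for whatever the client's real network call would be. The point is that the user's action returns immediately, and the slow network work happens on a background thread.

```python
import threading
import queue

class AsyncSaver:
    """Accepts edits immediately; pushes them to the server in the background."""

    def __init__(self, save_to_server):
        self._save = save_to_server          # the network call, injected
        self._pending = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def submit(self, edit):
        # Returns at once -- the user never waits on the network.
        self._pending.put(edit)

    def flush(self):
        # Block until every queued edit has reached the server.
        self._pending.join()

    def _drain(self):
        while True:
            edit = self._pending.get()
            self._save(edit)                 # slow network call, off the UI path
            self._pending.task_done()
```

The user-facing `submit` call costs almost nothing; only `_drain`, running on its own thread, ever touches the network.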

Where Web 2.0 doesn’t make sense is for mobile devices, because the network doesn’t work the same way. For a PC, connectivity is an assumed thing. It may be slow sometimes (which is why you need Web 2.0), but it’s always there.

Mobile devices can’t assume that a connection will always be available. People go in and out of coverage unpredictably, and the amount of bandwidth available can surge for a moment and then dry up (try using a public WiFi hotspot in San Francisco if you want to get a feel for that). The same sort of thing can happen on cellular networks (the data throughput quoted for 2.5G and 3G networks almost always depends on standing under a cell tower, with no one else using data on that cell).

The more people start to depend on their web applications, the more unacceptable these outages will be. That’s why I think mobile web applications need a different architecture -- they need both a local client and a local cache of the client data, so the app can be fully functional even when the user is out of coverage. Call it Web 3.0.

That’s the way RIM works -- it keeps a local copy of your e-mail inbox, so you can work on it at any time. When you send a message, it looks to you as if you’ve sent it to the network, but actually it just goes to an internal cache in the device, where the message sits until a network connection is available. Same thing with incoming e-mail -- it sits in a cache on a server somewhere until your device is ready to receive.*

The system looks instantaneous to the user, but actually that’s just the local cache giving the illusion of always-on connectivity.
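The store-and-forward architecture described above -- a local cache for reads, an outbox that parks outgoing messages until coverage returns -- might be sketched like this. All the names here (`MailClient`, `is_connected`, `deliver`) are invented for illustration; RIM's actual implementation is of course not public in this form.

```python
class MailClient:
    """Store-and-forward sketch: local cache for reads, outbox for sends."""

    def __init__(self, network):
        self.network = network      # object with .is_connected() and .deliver(msg)
        self.inbox_cache = []       # local copy of the inbox, readable offline
        self.outbox = []            # messages parked until coverage returns

    def send(self, msg):
        # Looks instantaneous to the user; actually just queued locally.
        self.outbox.append(msg)
        self.sync()

    def receive(self, msgs):
        # Called when the network pushes mail; cached for offline reading.
        self.inbox_cache.extend(msgs)

    def read_inbox(self):
        return list(self.inbox_cache)   # works with or without coverage

    def sync(self):
        # Flush the outbox whenever a connection happens to be available.
        while self.outbox and self.network.is_connected():
            self.network.deliver(self.outbox.pop(0))
```

Out of coverage, `send` and `read_inbox` still work; `sync` quietly drains the outbox the moment the network comes back, which is exactly the illusion of always-on connectivity.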

This is the way all mobile apps should work. For example, a mobile browser should keep a constant cache of all your favorite web pages (for starters, how about all the ones you’ve bookmarked?) so you can look at them anytime. We couldn’t have done this sort of trick on a mobile device five years ago, but with the advent of micro hard drives and higher-speed USB connectors, there’s no excuse for not doing it.
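The bookmark-caching browser suggested above might work along these lines. This is a sketch under stated assumptions: `CachingBrowser` and the injected `fetch_page` function are hypothetical stand-ins for a real browser and a real HTTP fetch, and `refresh` would presumably run on a timer whenever connectivity is available.

```python
class CachingBrowser:
    """Prefetches bookmarked pages so they stay readable out of coverage."""

    def __init__(self, fetch_page):
        self._fetch = fetch_page    # network fetch, injected
        self.bookmarks = set()
        self.cache = {}             # url -> page content, stored locally

    def bookmark(self, url):
        self.bookmarks.add(url)

    def refresh(self):
        # Run whenever connectivity is available (e.g. on a timer).
        for url in self.bookmarks:
            try:
                self.cache[url] = self._fetch(url)
            except OSError:
                pass                # coverage dropped mid-refresh; keep old copy

    def open(self, url):
        # Offline (or between refreshes) we serve the last cached copy.
        return self.cache.get(url)
```

The design choice is the same as in the e-mail case: the network is treated as an opportunistic background resource, never something the user waits on.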

Of course, once we’ve put the application logic on the device and created a local cache of the data, what we’ve really done is create a completely new operating system for the device. That's another subject I'll come back to in a future post.

_______________

*This is an aside, but I tried to figure out one time exactly where an incoming message gets parked when it’s waiting to be delivered to your RIM device. Is it on a central RIM server in Canada somewhere, or does it get passed to a carrier server where it waits for delivery? I never was able to figure it out; please post a reply if you have the answer. The reason I wondered was because I wanted to compare the RIM architecture to what Microsoft’s doing with mobile Exchange. In Microsoft’s case, the message sits on your company’s Exchange server. If the server knows your device is online and knows the address for it, it forwards the message right away. Otherwise, it waits for the device to check in and announce where it is. So Microsoft’s system is a mix of push and pull. I don’t know if that’s a significant competitive difference from the way RIM works.

17 comments:

Mike Rohde said...

I think another issue that's going to start bubbling up is where the data resides and in what format.

Many Web 2.0 ventures seem to rely on network storage of information, and many of these are small startups that, for all we know, could disappear tomorrow. Where is my data? Where is a backup of my del.icio.us links should that project go dark tomorrow?

I think this relates to the topic you're discussing with 3.0, since you're addressing in some sense a local cache of data. So why not have features that let me regularly store my data somewhere, in an open format, so that if or when my favorite service blinks out, I at least still have the data?

And if one of those "places" that the data could be sent is my phone or PDA, why not send it there and allow the browser to access that data as well?

Maybe it's just that I'm from the old-school camp of liking my data on my machine that makes this an issue for me — it could be that the generation coming up has no problem trusting services with email, PIM info, links or whatever.

I always come back to the adage:

"there are only 2 types of computer users in the world: those who have lost data and those who will."

I've already lost stuff and I suspect this is going to be a pretty big issue as more and more folks depend on web services for critical data. Hopefully someone is thinking about these issues in Web 2.0 and 3.0.

Michael Mace said...

Good point, Mike. Maybe you and I are both old school -- or maybe we're just old enough to have seen people get jerked around in the past.

When Gemstar closed down the SoftBook and Rocket e-book businesses in 2003, they promised to keep the servers going for three years (through mid-2006). After that, I'm not sure what will happen. The e-book devices were designed to sync to an online bookshelf and can't necessarily hold all your books at once. If the server is shut down, your extra books disappear.

Scary stuff.

Mike

Frank said...

All email goes through RIM's servers, which is why the threat of RIM being shut down is so real. If the RIM servers are forced to shut down, mail won't be delivered to the devices.

RIM is reportedly working on a workaround, and I suspect it is closer to Microsoft's model than what they currently use.

Michael Mace said...

Thanks for the comment, Frank. I agree the fact that all the mail goes through RIM's servers is an issue if the courts rule against RIM. I've been wanting to do a post on the RIM case, just because it feels so important, but I can't figure out anything original to say. This is one of the few times when I wish I were a lawyer -- I'd have a better estimate of how this is likely to play out.

The thing I still don't understand about RIM's system is where physically the message gets parked while waiting for final delivery. Is it on that central RIM server in Canada, or does it get passed along to a server somewhere in the carrier's infrastructure? I suspect that the answer may differ from carrier to carrier...

jian wu said...

What you described is actually called a "smart client architecture" for intermittent connections. Microsoft has a smart client definition and an Update Application Block pattern to deal with the problem from a .NET perspective.

I happened to work on an R&D project using RIM for about three months early this year and had some good and bad experiences with RIM devices and tools. I think the key issue here is whether the success of RIM's e-mail solution can be replicated in other application areas, whether web applications or enterprise applications.

Thomas Landspurg said...

I totally agree, and I've written something about this (synchronisation or browsing) some time ago...

I do believe that mobile will provide an opportunity for a better solution to the problems currently solved by Web 2.0 applications....

Michael Mace said...

Hi, Thomas.

Thanks for your comment. I'd love to read your post, but unfortunately the link to it is broken. Could you post the correct one?

Thanks.

tomsoft said...

Sorry michael, second attempt with a correct link: http://blog.landspurg.net/synchronisation-or-browsing-mobile-20-is-not-mobile-web-20

NotesTracker said...

Definitely so!

A few observations: I have been with Vodafone since their early days down here in Australia (1993, or maybe it was 1994). Vodafone was the first major telco to offer digital mobile (GSM) in Australia. But even though they were first out of the starting blocks, I'm disappointed that their signal coverage and signal strength/reliability, after more than a decade, still are not always dependable. They don't have GSM coverage in a lot of outback Australia -- nor, to my considerable annoyance and to their shame, a network-sharing arrangement with Telstra or another telco -- in places like Coober Pedy (the popular opal-mining town in central South Australia, where I have a client).

As far as new mobile technologies go, I personally wouldn't touch 2G or 3G with the proverbial "forty-foot pole" for exactly the reasons you mention in your post.

I suppose you might call blogging a Web 2.0 technology (or close to it). Like you, I use Google's blogger.com for my weblogs. Unfortunately I've been intensely frustrated at times when blogger.com's performance during the creation/editing of posts has been abysmally slow (minutes per interaction), or it has even "bombed" completely. Yet in distinct contrast, at many if not most other times blogger.com's reliability and responsiveness have been outstanding (a few seconds per interaction, dependably).

Just consider that this is all at broadband speeds, using the vast server farms and networking infrastructure that Google is famous for. So it seems to me, when you can't get utterly consistent responsiveness from a major player like Google, what can you expect from all those smaller players with far fewer resources? There's a vast amount of work to be done before the bulk of these new services become dependable for anything much more serious than watching short video clips or interacting with maps (as amusing or useful as such apps might be)!

The hype is all a bit amusing really. See my tongue-in-cheek post at 'Web 2.0' and 'Web Pi' -- Reject Reality and Substitute Your Own!

Michael Mace said...

Notestracker wrote:

>>I've been intensely frustrated at times when blogger.com's performance during the creation/editing of posts has been abysmally slow (minutes per interaction) or it has even "bombed" completely.

Thanks for the interesting comment, and please don't get me started on the subject of Blogger.

My other weblog is hosted using WordPress, and the contrast between the two blogs is striking. WordPress has its quirks, and I'd never recommend it to a novice. But it has been incredibly reliable for me, and I love the power it gives me.

In contrast, I often feel like I'm fighting against Blogger to get it to do things. In addition to slow performance, I've had late night formatting problems that caused me to lose hours of sleep (literally). And sometimes it has eaten posts.

You'd think Google, the World's Most Important Internet Company, could do better than a noncommercial open source project. The fact that Google can't (or won't) speaks volumes about how the software world is changing. But that's a subject for a future post...

Tim Hay said...

RIM's relay servers play an absolutely critical role in the BlackBerry service. They are responsible for managing the transport layer over the radio network out to the device - the result of RIM's heritage of more than 20 years in mobile packet data. These servers are managed by RIM, and there isn't any other carrier infrastructure - not at my carrier, anyway. Microsoft treats the mobile network as just another IP bearer, and I think that will continue to be a weakness for them. Interestingly, when you use a WiFi-enabled BlackBerry in a campus environment, it can talk directly to the BlackBerry Enterprise Server on your LAN without going through the relay - and this model is also used for RIM's VoIP integration in the Ascendent mobile PBX product they acquired earlier in the year.

I agree that business applications must be able to handle some form of on-line/off-line, but for consumer applications I tend to concur with one of the other posters that the mobile browser could be the crucial app. I'd love YouTube on my mobile browser.

And surely one killer mobile app has to be immediate photo/video uploading/blogging - and it would be great for this to be controlled via the browser so as to avoid a little app for every different mobile device.

Cheers,
Tim

Tim Hay said...

And two more comments on RIM's model while I'm at it.

1. The RIM relays also handle device management - knowing when devices are in/out of coverage and on/off.

2. The problem with the RIM model is that it will have trouble scaling. When everything was paging-style email traffic, there was no problem. Now they also route web browsing and, with the Pearl, photos as email attachments (200KB an image!). And they are thinking about supporting rich HTML email - and then imagine multimedia browsers - their back end must be starting to creak...

pulse dude said...

Hi Mike,

Great Blog.

"Web 2.0" or "3.0" is possible on Mobile but is far more complicated to implement than a php/ajax website. There are two elements I believe are key; caching (like you state above) and purpose developed rich clients.

The client is needed to present the service in a user-friendly manner. It will include all the hooks available on the web, adapted to a mobile device. Users will be able to configure preferences, and then the device will use a data connection when it can (not user-initiated) to pull the content they want from the Internet. When the user wants info, 90% of the time it will already be on their device -- e.g., your top 10 friends' posts on MySpace, or the latest five videos in your favorite category on YouTube.

Vodafone seem to be launching custom clients along these lines already so we'll see how they work.

Cheers,
James Reynolds

Michael Mace said...

Thanks, James.

I think you make a good point about the client -- if you just take a PC-adapted web app and plop it onto a mobile, the differences in UI, pointing technology, and screen size will likely make for a bad experience.

I think it's better for a mobile app designer to ask which Web-based data the user needs in a mobile setting, and adapt the client to deliver just that.

annerose said...

These comments have been invaluable to me as is this whole site. I thank you for your comment.

Mark Seaborne said...

This is one space where I think it is fair to say that the W3C has done some useful work. It has created a "thick-as-you-need" client-side Web platform that works as well in the traditional Web browser as on mobile devices. Unfortunately, few people seem to have noticed.

XForms is touted by the W3C as the replacement for HTML forms, but it is actually a very nice platform for building data-driven Web applications, and it leapfrogs Web 2.0 by removing the need for most of the scripting that Web 2.0 implies.

An XForms document (sitting inside ordinary XHTML) maintains its own in-memory data store (the XForms model), elements of which can be persisted either to some server or to a local store. So losing a connection doesn't have to stop your app from working. The application UI and logic just use data in the XForms model, wherever it originally came from.

The significant thing for mobile apps is that XForms is designed from the ground up to be multi-modal. It doesn't even assume a visual user interface - so you may well hear an XForms app before you see it!

Obviously the proof of the pudding is in the eating, and I can report that XForms is being used, very successfully, to build mobile apps that combine the advantages of a stand-alone application with the platform provided by the Web.

So, maybe XForms + XHTML is Web 3?
