It’s Time to Reinvent the Personal Computer

“In chaos, there is opportunity.”
—Tony Curtis,
Operation Petticoat (and also Sun Tzu)

“Chaos” is a pretty good word to describe the personal computer market in 2013. Microsoft is trying to tweak Windows 8 to make it acceptable to PC users (link), its Surface computers continue to crawl off the shelf (link), PC licensees are reconsidering their OS plans and business models (link), and Apple’s Macintosh business continues a genteel slide into obscurity (link, link).

No wonder many people say the personal computer is obsolete, kaput, a fading figment of the past destined to become as irrelevant as the rotary telephone and steam-powered automobile (link).

I beg to differ. Although Windows and Macintosh are both showing their age, I think there is enormous opportunity for a renaissance in personal computing. (By personal computing, I mean the use of purpose-built computers for productivity tasks including the creation and management of information.) I’m not saying there will be a renaissance, because someone has to invest in making it happen. But there could be a renaissance, and I think it would be insanely great for everyone who depends on a computer for productivity.

In this post I’ll describe the next-generation personal computing opportunity, and what could make it happen.


What drives generational change in computing?

Let’s start with a bit of background. A generational change in computing is when something new comes along that makes the current computing paradigm obsolete. The capabilities of the new computers are so superior that previous generations of apps and hardware are instantly outdated and need replacement. Most people would agree that the transition from command line computers to graphical interface (Macintosh and Windows) was a great example of generational change.

What isn’t as well understood is what triggered the graphical interface transition. It wasn’t just the invention of a new user interface. The rise of the Mac and Windows was driven by a combination of factors, including:

A new pointing device (the mouse) that made it easier to create high-quality graphics and documents on a computer.
Bitmapped, full-color displays that made it easy for computers to display those graphics, pictures, and eventually video. Those displays also made it easier to manage file systems and launch apps visually.
New printing technology (dot matrix and laser printers) that made it easy to share all of those wonderful new documents and illustrations we were creating.
A new operating system built from the ground up to support these new capabilities.
An open applications market that enabled developers to turn all of these capabilities into compelling new apps.

All of those pieces had been around for years before graphical computing took off, but it wasn’t until Apple integrated them well, at an affordable price, that the new paradigm caught on. The new interface and new hardware, linked by a new or rebuilt OS, let us work with new types of data. That revolutionized old apps and created whole new categories of software.

Windows and Mac took off not because they were new, but because they let us do new things.

Although later innovations, such as the Internet, added even more power to personal computing, it’s amazing how little its fundamental features and capabilities have changed since the mid-1990s. Take a computer user from 1979 and show them a PC from 1995, and they’ll be completely lost in all the change. Take a computer user from 1995 and show them a PC from 2012, and they’ll admire the improved specs but otherwise feel very much at home.
   
Maybe this slowdown in qualitative change is a natural maturation of the market. After an early burst of innovation, automobiles settled down to a fairly standard design that has changed only incrementally in decades. Same thing for jetliners.

But I think it’s a mistake to look at personal computers that way. There are pending changes in interface, hardware, and software that could be just as revolutionary as graphical computing was in the 1980s. In my opinion, this would be a huge opportunity for a company that pulls them all together and makes them work.


Introducing the Sensory Computer

I call the new platform sensory computing because it makes much richer use of vision and gestures and 3D technology than anything we have today. Compared to a sensory computer, today’s PCs and even tablets look flat and uninteresting.

There are four big changes needed to implement sensory computing.

The first big change is 3D. Like desktop publishing in the 1980s, 3D computing requires a different sort of pointing device, new screen technology, and a new kind of printer. All of those components are available right now. Leap Motion is well into the development of gesture-based 3D control. 3D printers are gradually moving down to smaller sizes and more affordable price points. And 3D screens that don’t require glasses are practical, but have a limited market today because we keep trying to use them for televisions, a usage that doesn’t work with the screen’s narrow viewing angle.

But guess what sort of screen we all use with a very narrow viewing angle, our heads perched right in front of it at a fixed distance? The screen on a notebook computer.

Today we could easily create a computer that has 3D built in throughout, but we lack the OS and integrated hardware design that would glue those parts together into a solution that everyone can easily use.

You might ask what the average person would do with a 3D computer. Isn’t that just something for gamers and CAD engineers? The same sort of question was asked about desktop publishing in the 1980s. “Who needs all those fonts and fancy graphics?” many people said. “For the average person Courier is just fine, and if I need to send someone an image I’ll fax it to them.”

Like that skeptical computer user in the 1980s, we don’t know what we’ll do when everyone can use 3D. I don’t expect us to send a lot of 3D business letters, but it sure would be nice to be able to create and share 3D visualizations of business data and financial trends. I’d also like to be able to save and watch family photos and videos in 3D. How about 3D walkthroughs of hotels and tourist attractions on Trip Advisor? The camera technology for 3D photography exists; we just need an installed base of devices to edit and display those images. And although I don’t know what I’d create with a 3D printer, I’m pretty sure I’d cook up something interesting.

Every time we’ve added a major new data type to computing, we’ve found compelling mainstream uses for it. I’m confident that 3D would be the same.

The second big change is modernizing the UI. User interface is ultimately about increasing the information and command bandwidth between a person and a computer. The more easily you can get information in and out of the computer, the more work you can get done. The mouse-keyboard interface of PCs, and the touch-swipe interface of tablets, were great in their time, but dramatically constrain what we can do with computers. We can do much better.

The first step in next-generation UI is to fully integrate speech. This doesn’t mean having everything controlled by speech, but using speech technology where it’s most effective.

Think about it: What’s the fastest way to get information in and out of your head? For most of us, we can talk faster than we can type, and we can read faster than we can listen to a spoken conversation. So the most efficient UI would let us absorb information by reading text on the screen, but enter information into the computer by talking. Specifically, we should:
—Dictate text to the computer via speech, with the option to fall back to a keyboard when you’re in public and talking out loud would be rude.
—Have the computer present information to us as printed text on screen, even if that information came over the network as something else. For example, the computer should convert incoming voice messages to text so you can sort through them faster (a rough sketch of that step follows below).
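As a proof of concept, here’s a minimal sketch of that voice-message-to-text step in Python. It assumes the open-source SpeechRecognition package and a hypothetical voicemail.wav file; on a true sensory computer this would be a built-in OS service rather than a bolted-on script.

```python
# Minimal sketch: turn an incoming voice message into skimmable text.
# Assumes the third-party SpeechRecognition package (pip install SpeechRecognition)
# and a hypothetical "voicemail.wav" file on disk.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("voicemail.wav") as source:
    audio = recognizer.record(source)                # read the whole message

try:
    transcript = recognizer.recognize_google(audio)  # send audio to a speech-to-text service
    print("New voice message:", transcript)
except sr.UnknownValueError:
    print("Couldn't transcribe this one; keeping the audio for playback.")
```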

We can do all of these things through today’s computers, of course, but the apps are piecemeal, bolted on, and forced through the funnel of an old-style interface. They’re as awkward as trying to do desktop publishing on a DOS computer (something that people did try to do for years, by the way).

Combine speech with 3D gestures and you’ll start to have a computer that you can control very richly by having a conversation with it, complete with head nods and waves of the hand. Next we’ll add the emerging science of eye tracking. I’m very impressed by how much progress computer scientists are making in this area. It’s now possible to build interfaces that respond to the things we look at, to facial expressions, and even to our emotional response to the things we see. This creates an incredibly rich (and slightly creepy) opportunity to build a computer that responds to your needs almost as soon as you realize them.

Once we have fully integrated speech, gesture recognition, and eye tracking, I’m not sure how much we’ll need other input technologies. But I’d still like to have the option to use a touchscreen or stylus when I need precision control or when a task is easier to do manually (for example, selecting a cell in a spreadsheet or drawing something). And as I mentioned, you’ll need a keyboard option for text entry in public places. But these are backups, and a goal of our design should be to make them options rather than a part of the daily usage experience.

The third change is a new paradigm for user interaction. In a word, it’s time to ship cyberspace. The desktop metaphor (files and folders) was driven by the capabilities of the mouse and bitmapped graphics. The icons and panels we use on tablets are an adaptation to the touchscreen. Once we have 3D and gesture recognition on a computer, we can rethink how we manage it. In the real world, we remember things spatially. For example, I remember that I put my keys on my desk, next to the sunglasses. We can tap into that mental skill by creating 3D information spaces that we move through, with recognizable landmarks that help to orient us. Those spaces can zoom or morph interactively depending on what we look at or how we gesture. Today’s interface mainstays such as start screens and desktops will be about as relevant as the flowered wallpaper in grandma’s dining room. Computer scientists and science fiction authors have played with these ideas for decades (link); now is the time to brush off the best concepts and make them real.
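To make the spatial-memory idea a bit more concrete, here’s a toy Python sketch of a workspace where every document has a 3D position and can be retrieved by asking what sits near a remembered landmark. The items, landmarks, and coordinates are invented for illustration; a real sensory OS would present this visually rather than through an API.

```python
import math

# Toy "spatial workspace": every item gets a 3D position, usually placed
# near a memorable landmark (all names and coordinates are made up).
workspace = {
    "budget_2013.xlsx": (0.4, 1.2, -2.0),   # left on the virtual "oak desk"
    "vacation_photos":  (3.1, 0.8, -5.5),   # over by the "window"
    "draft_post.txt":   (0.6, 1.1, -2.2),
}
landmarks = {"oak desk": (0.5, 1.0, -2.0), "window": (3.0, 1.0, -5.0)}

def items_near(landmark, radius=1.0):
    """Return items within `radius` of a named landmark, nearest first."""
    center = landmarks[landmark]
    hits = [(math.dist(pos, center), name)
            for name, pos in workspace.items()
            if math.dist(pos, center) <= radius]
    return [name for _, name in sorted(hits)]

print(items_near("oak desk"))   # ['budget_2013.xlsx', 'draft_post.txt']
```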

The fourth change is to modernize the computing ecosystem. The personal computer software ecosystem we have today combines 20-year-old operating system technology with a ten-year-old online store model created by Apple to sell music. There’s far more we could do to make software easy to develop, find, and manage. The key changes we need to make are:

The operating system should seamlessly integrate local and networked resources. Dropbox has the right idea: you shouldn’t have to worry about where your information is; it should just be available all the time. But we should apply that idea to both storage and processing. We shouldn’t have web apps and native apps; we should just have apps that take advantage of both local computing power and the vast computational resources of the web. An app should be able to run some code locally and some on a server, with some data stored locally and some on the network, without the user even being aware of it. The OS should enable all of that as a matter of course.

In this sense, the advocates of the netbook have it all wrong. The future is not moving your computing onto the network; it’s melding the network and local computer to produce the best of both worlds.
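Here’s a minimal sketch of what that melding could look like to a developer, assuming a purely hypothetical remote rendering endpoint: the app calls one function, and whether the work runs on a server or falls back to the local machine is invisible to the user.

```python
import json
import urllib.request

# Hypothetical compute service; in a real OS this would be discovered automatically.
REMOTE_ENDPOINT = "https://compute.example.com/render"

def render_locally(scene):
    # Stand-in for a slower local renderer using the machine's own hardware.
    return {"pixels": [], "rendered_by": "local"}

def render_scene(scene):
    """Try the network first, fall back to local rendering, and never tell the caller which."""
    try:
        payload = json.dumps(scene).encode("utf-8")
        request = urllib.request.Request(
            REMOTE_ENDPOINT, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request, timeout=2) as response:
            return json.loads(response.read())
    except (OSError, ValueError):          # network down, timeout, or bad reply
        return render_locally(scene)

print(render_scene({"objects": ["cube"]})["rendered_by"])
```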

Discovery needs work. App stores are great for making apps available, but it’s very hard to find the apps that are right for you. Our next-generation app store should learn your interests and usage patterns and automatically suggest applications that might fit your needs. If we do this right, the whole concept of an app store becomes less important. Rather than you going to shop for apps, information about apps will come to you naturally. I think we’ll still have app stores in the future because people like to shop, but they should become much less important: a destination you can visit rather than a bottleneck you must pass through.
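As a rough sketch of what “learn your interests and usage patterns” might mean in practice, here’s a toy collaborative-filtering example in Python. The users, apps, and hours-of-use figures are invented; a real store would draw on far richer signals.

```python
import math

# Hours of use per app, per user (all numbers invented for illustration).
usage = {
    "you":   {"spreadsheet": 9, "3d_sketch": 4, "photo_edit": 1},
    "avery": {"spreadsheet": 8, "3d_sketch": 5, "data_viz": 6},
    "blair": {"photo_edit": 7, "music_mixer": 9},
}

def cosine(a, b):
    """Cosine similarity between two sparse usage vectors."""
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest(user):
    """Recommend apps used by the most similar user that `user` hasn't tried yet."""
    _, nearest = max((cosine(usage[user], usage[u]), u) for u in usage if u != user)
    return sorted(set(usage[nearest]) - set(usage[user]))

print(suggest("you"))   # ['data_viz']
```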

Security should be built in. The smartphone operating systems have this one right: each app should run in a separate virtual sandbox where malicious code can’t corrupt the system. No security system can be foolproof, but we can make personal computers far more secure than they are today.
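To show the per-app sandbox idea at its crudest, here’s a Unix-only Python sketch that runs an untrusted program in its own process with hard CPU and memory caps. A real sensory OS would go much further, isolating files, the network, and system calls for each app.

```python
import resource
import subprocess
import sys

def run_sandboxed(cmd, cpu_seconds=2, mem_bytes=256 * 1024 * 1024):
    """Run an untrusted command in a child process with CPU and memory limits (Unix only)."""
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=apply_limits, capture_output=True, text=True)

result = run_sandboxed([sys.executable, "-c", "print('hello from the sandbox')"])
print(result.stdout.strip())
```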

Payment should be built in as well. This is the other part of the software and content ecosystem that’s broken today. Although the app and content stores have made progress, we’re still limited to a small number of transaction types and fixed price bands. You can’t easily sell an app for half a cent per use. You can’t easily sell a software subscription with variable pricing based on usage. As an author, you can’t easily sell an ebook for more than $10 or less than 99 cents without giving up 70% of your revenue. And you can’t easily sell a subscription to your next ten short stories. Why? Because the store owners are manipulating their terms in order to micro-manage the market. They mean well, but the effect is like the worst dead-hand socialist market planning of the 1970s. The horrible irony is that it’s being practiced by tech companies that loudly preach the benefits of a free market.

It’s time for us to practice what we preach. The payment system should verify and pass through payments, period. Take a flat cut to cover your costs and otherwise get out of the way. The terms and conditions of the deal are between the buyer and the creator of the app or content. Apple or Google or Amazon has no more business controlling what you buy online than Visa has controlling what you buy in the grocery store. The free market system has been proven to produce the most efficiency and fastest innovation in the real world; let’s put it to work in the virtual world as well.
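To make “take a flat cut and otherwise get out of the way” concrete, here’s a tiny sketch with an assumed 3% processing fee; the rate and prices are illustrative only, and the point is that any price or billing scheme the creator chooses simply passes through.

```python
FLAT_FEE_RATE = 0.03   # assumed flat processing cut; everything else passes through

def settle(price, units=1):
    """Return (creator_payout, store_fee) for whatever price the creator sets."""
    gross = round(price * units, 2)
    fee = round(gross * FLAT_FEE_RATE, 2)
    return gross - fee, fee

print(settle(0.005, 10_000))  # half a cent per use, 10,000 uses: creator keeps $48.50 of $50.00
print(settle(25.00))          # a $25 ebook: creator keeps $24.25, no 70/30 split, no price band
```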

Adding it up. Let’s sum up all of these changes. Our next-generation computer now has:
—A 3D screen and 3D printing support built in, with APIs that make it easy for developers to take advantage of them.
—Speech recognition, gesture recognition, and eye tracking built in, with a new user interface that makes use of them.
—A modernized OS architecture that seamlessly blends your computer and the network, while making you more secure against malware.
—An app and content management system that makes it easy for you to find the things you like, and to pay for them in any way you and the developer agree to.

I think this adds up to a new paradigm for computing. It’s at least as revolutionary as the graphical computing revolution of the 1980s. We’ve opened up new usages for your computer, we’ve enabled developers to revolutionize today’s apps through a new interface paradigm, and we’ve made it much easier for you to find apps and content you like.

Why can’t you just do all this with a tablet? You could. Heck, you could do it with a smartphone or a television set. But by the time you finished adding all these new features and reworking the software to make full use of them, you would have completely rebuilt the whole device and operating system. You’ll no longer have a cost-efficient tablet, but you’ll still have all the flaws and limitations of the old system, jury-rigged and still adding cost and inefficiency. Windows 8, anyone?

It’ll be faster and cheaper just to design our new system from scratch.


When will we get a sensory computer?

If you agree that we’re overdue for a new computing paradigm, the next question is when it’ll arrive. Unfortunately, the answer is that it may not happen for decades. Major paradigm changes in technology tend to creep forward at a snail’s pace unless some company takes on the very substantial task of integrating and marketing them. Do you think ebooks would be taking off now if Amazon hadn’t done Kindle? Do you think the tablet market would be exploding now if Apple hadn’t done the iPad? I don’t think so, and the proof is that you could have built either product five years earlier, but no one did it.

So the real question is not when we’ll get it, but who might build it. And that’s where I get stuck.

Microsoft could do it, and probably should. But I doubt it will. Microsoft is so tangled up now in tablet computing and Windows 8 that I find it almost impossible to believe that it could take on another “replace the computer” initiative. I think there’s a very good argument that Microsoft should have done a sensory computer instead of Windows 8, but now that the decision’s made, I don’t think it can change course.

Google could do it, but I don’t think it will. Google is heavily invested in the Chrome netbook idea. It’s almost a religious issue for Google: as a web software company, the idea of a computer that executes apps on the web seems incredibly logical, and is emotionally attractive. Google also seems to be hypnotized by the idea that reducing the cost of a PC to $200 will somehow convert hundreds of millions of computer users to netbooks. I doubt it; PC users have been turning up their noses for decades at inexpensive computers that force them to compromise on features. The thing they will consider is something at the same price as a PC but much more capable. But I don’t think Google wants to build that.

One of the PC companies might take a stab at it. Several PC companies have tried from time to time to sell computers with 3D screens. Theoretically, one of those companies could put together all the features of a sensory computer. I think HP is the best candidate. It already plans to build the Leap Motion controller into some of its computers, and I can imagine a beautiful scenario in which HP creates a new ecosystem of sensory computers, low-cost home 3D printers that render in plastic, and service bureaus where you can get your kid’s science fair design converted to aluminum (titanium if you’re rich). It would be glorious.

But it’s not likely. To work right, a sensory computer requires a prodigious amount of new software, very careful integration of hardware and software features, and the courage to spend years kick-starting the ecosystem. I don’t think HP has the focus and patience to do it, not to mention the technical chops, alas. Same thing for the other PC companies.

Meg, please prove me wrong.

Apple is the company that ought to do it, but does it have the will? Apple has the expertise and the market presence to execute a paradigm change, and its history is studded with market-changing products. I love the idea of Apple putting a big 3D printer at the back of every Apple store. Maybe you could let Sensory Mac users sell their designs online, with pickup of the finished goods at any Apple store, and Apple (naturally) taking a 30% cut...

But I’m not sure if today’s Apple has the vision to carry out something like that. The company is heavily invested in smartphone and tablet computing, with an ongoing hobby around reinventing television. There’s not much executive bandwidth left for personal computing. The company’s evolution has taken it steadily toward mobile devices and entertainment, and away from productivity.

Think of it this way: If Apple were really focused on personal computing innovation, would it be letting HP take the lead in integrating the Leap Motion controller? Wouldn’t it have bought Leap Motion a year ago to keep it away from other companies?

I think personal computing is a legacy market to Apple, an aging cash cow it’ll gently milk but won’t lead. I hope I’m wrong.

We’re out of champions, unless...  At this point we’ve disposed of most of the companies that have the expertise and clout to drive sensory computing. I could make up scenarios in which an outlier like Amazon would lead, but they’re not very credible. I think the other realistic question is whether a startup could do it.

It’s hard. Conventional wisdom says that you need $50 million to fund a computer system startup, and that sort of capital is absolutely positively not available for a company making hardware. But I think the $50 million figure is outdated. The cost of hardware development has been dropping rapidly, driven by flexible manufacturing processes and the ability to rapidly make prototypes. You could theoretically create a sensory computer and build it in small lots, responding to demand as orders come in. This would help you avoid the inventory carrying costs that make hardware startups so deadly.

The other big barrier to hardware startups has been the need to get into the retail channel, which requires huge investments in marketing, sales support, and even more inventory. Here too the situation has been changing. People today are more willing to buy hardware online, without touching it in a store first. And crowdfunding is making it more possible for a company to build up a market before it ships, including taking orders. That business model actually works pretty well today for a $100 gizmo, but will it work for a $2,000 productivity tool?

Maybe. I’m hopeful that some way or another we’ll get a sensory computer built in this decade. At this point, the best chance of making that happen is to talk up the idea until one or more companies take it on. Which is the point of this post.

[Thanks to Chris Dunphy for reviewing an early draft of this post. He fixed several glaring errors. Any errors that remain are my fault, not his.]


What do you think?  Is there an opening for a sensory computer? Would you buy one? What features would you want to see in it? Who do you think could build it? Please post a comment and share your thoughts.

25 comments:

  1. I'm so excited for Google Glass

  2. This comment has been removed by the author.

  3. /quote
    Apple or Google or Amazon has no more business controlling what you buy online than Visa has controlling what you buy in the grocery store.
    /unquote

    So very true...

  4. The monkey wrench I see, and the reason I think it is likely Microsoft or Apple or no one, is that it needs to be a system that extends the existing systems, not replaces them. It still needs to run the existing desktop tools but do all the new things, too. There are probably very few people who are willing to have two systems on their desks. DOS didn't go away when Microsoft brought out Windows. It was still there, slowly moved off the front, relegated to the back end.

  5. 3D is missing what is really going on. Visit many desktop power users and what you'll find is a large amount of screen real estate. My own desk has 4,000 square cm of display (compared to 160 for the iPad mini). In recent years the PC vendors have been continually making their displays worse. Going from 16:10 to 16:9 removed 11% of the display area. Laptops have default resolutions like 1366x768 ("hi def"). My most recent Lenovo laptop purchase had screen options all worse than the previous purchase (no more IPS, bigger bezels, 11% less display area, lower pixel density). It is hardly surprising that no one wants to upgrade to this nonsense, or that they pick tablets instead.

    I suspect that a desktop renaissance will be led by an increase in display area, and it is something that tablets can't match. 3D is a "cheap" way of increasing display area by using the Z axis, but if you had a choice between double the 2D display area or doubling it by going into the Z axis I suspect most would pick 2D. (And of course both can be combined.)

    What we should be seeing is cooperation between tablets, etc., and desktop systems. Why can't I fling apps and windows between them? Why don't they immediately extend each other when next to each other?

    The biggest revolution will come when the size of the device in use is considerably smaller than when idle or in transit. Imagine your tablet/desktop was the size of a matchbox, but when you use it, it becomes normal sizes today. A laptop doubles in size when you open it, but the keyboard is still limited by the size of when it is closed (Thinkpad butterfly keyboard from years ago excluded). We keep seeing glimpses of technologies that can provide the size differentials but nothing has stuck, yet.

  6. I think the claim that speech is faster than entering text by keyboard misses three major points:
    1. It might be faster, but it is far less exact, and computers will always be bad at correcting inexactness.
    2. Imagine an office where everyone is talking to their computers: how productive will that be? And when will the computers understand who says what?
    3. Imagine 200 students taking notes simultaneously by talking to their computers while a professor is lecturing...

    Speech is, and will always be, impractical.

    Speech has the same problem as CASE tools and object-modelling tools: yes, a picture says more than a thousand words, but is it more exact?

  7. You left out Intel in your list of potential champions; they are working on pieces of this, they have the strategic imperative to breathe life into the PC (because no matter how power efficient they make x86, ARM is the de facto mobile architecture), they have the capital to fund it, and they have influence over the PC vendors. Sadly, Intel's history as a system software innovator is... poor. And, frankly, I'm not sure PC vendors trust Intel enough to make a huge bet on sensory computing after they bet small on Ultrabooks and lost.

    Your bigger problem is that you haven't convinced me what the baseline use case is for sensory computing. Better data visualization? OK. So you're talking the top end of the workstation market with some trickle down to knowledge workers if it works really well. Perhaps there are other applications that haven't been invented yet, or perhaps I'm just not intrigued by the tech, but... you know me - I'm ALWAYS intrigued by the tech! And yet, I don't know why I want a PC with 3D display, 3D printer, voice and gesture UI, and optional keyboard. It's cool, but what do I do with it? I can certainly imagine building my own action figures, but after I do that a few times, why did I just spend $6,000? Or $3,000 (if costs come way way down)?

    I'm sure someone will say, "gaming!" which is certainly enhanced by 3D, voice, and gesture control. That's where Microsoft and Sony are taking consoles today at a fraction of the price.

  8. Mace, as usual, a wonderful dialog.
    @ Avi Greengart

    "Interesting thoughts." Intel, anyone? I absolutely have no clue.

    As Avi Greengart says, there doesn't seem to be a big business case. Of course, Steve's mindset of "people don't know what they want until they are shown" can still apply, but there's no guarantee of that either, since the man is gone now. And a man who is that arrogant could easily rise again up the corporate ladder. Hope I'm wrong.

    Also, as Anonymous says, the general computer experience is waning... everyone seems to be cutting corners, so that much-needed z axis seems to be nobody's child. I personally can't imagine being wowed on the same scale as I was when I saw the Power Mac G4 tower. OK, let it be a matchbox, even.

    Voice, gestures, and 3D have something to them, something I can't quite figure out. Others have mentioned their impracticality. Maybe if my little one can decipher what I murmur, a machine can too someday.

    People, on the other hand, seem to have less and less passion for beige boxes; even the gamers are drifting from consoles to pads. Unless they are given a "virtual reality, a Matrix world," and I am not sure what the latest developments in that space are.

    I too have a feeling that no major player would do it, but a "kick-starter" may!

  9. Mozilla?
    Canonical?

  10. Interesting. One hurdle I think a new paradigm has to clear is the amount of effort one has to put in to access the computer's functionality. I agree speech is almost necessary, because if you're making gestures too often, in general the person will tire and give up eventually. We all know now that using a keyboard or mouse doesn't take much effort, and when new interface paradigms do materialize, hopefully they don't wind up feeling like using a Wii :)

  11. It's certainly interesting at times to speculate, and you do it better than most, Michael. I can't detect any glaring mistakes in your analyses, though I would like to add this potential scenario...

    I don't see any of the current PC establishment players doing this... Like the Fairchilds, Xeroxes, IBMs, and HPs of the past, the components of this new machine will live within their *labs*, but market pressures and internal political infighting within Microsoft, Google, and Apple (minus Jobs) will prevent them from releasing it.

    Rather, I see some kids working in those labs, or friends of theirs, starting a new venture, and that venture and a few other upstarts will get things rolling. But as for when that'll happen... 20 years? I hope I'll still have enough neuroplasticity left to make use of the new interfaces!

    The interface I'm dreading, like my grandfather dreaded the VCR's programming buttons, is the neural-controller, where you wear a headband with sensors and think yourself thru the menus. Jeez, I'll surely need to ask over a future, assumed grandkid to navigate that for me. :D

  12. The future is not 3D. Personal computers took off initially because of word processing and spreadsheets, two things that were done without computers.

    Where in life do you organize your information in 3D? Bookshelves are 2D structures. File cabinets are 2D as well (1. which drawer is the file in, 2. how deep in the drawer is the file). Your physical desktop is two dimensional.

    I believe this is why there hasn't been a good 3D interface designed that has caught on. Researchers are constantly trying to push boundaries there but they never turn out to be good for real world use.

  13. As always, wonderful insights...

  14. The million dollar question for 3D has always been "how can 3D increase productivity?".
    For the desktop, that question was difficult to answer: a 3D desktop is a messy desk (files piled on top of each other). The question became "how can we do it better in 3D, without breaking the desktop analogy?"
    Now that tablets have broken the desktop analogy, and replaced it with a "drawer of tools" one, we could ask the question again... but it would be pointless, because tablets decrease productivity.

    I believe the answer is to do the opposite of tablets: instead of being tool-centric, to become data-centric. There are many dramatic ways we might interact with data in 3D, exploring it more intuitively and exploratively. "Cyberspace" is a world of data, visualized, not a drawer of tools.

  15. GUI was invented by Xerox to virtualize the office into the PC.
    1. What problem does your list solve? It is not clear.
    2. Corporations no longer rely on PCs for productivity gains. They rely on full automation, fewer people, more quants, more algorithms.
    3. Future computing comes from Wall Street needing high-frequency trading to cheat.
    4. Most computing trends are fashion, like Ellison said a long time ago.
    5. Today CPUs are barely increasing in speed, and only through GPU offloading.
    6. DRAM is stuck at DDR3.
    7. Flash has almost reached the end of its life cycle; maybe 2 more generations left.
    8. A company in Boston is selling a robotic arm that can pick up any repetitive task as long as you show it the steps first. That is the future of computing.
    No 3D printing unless it can do more than cheap plastic molds.
    9. Apple is focused on consumers, Google on ads, Microsoft on corporate.
    So obviously they are going to come up with different solutions.

    I could go on but I think you got my point.

  16. Many of us who have been in the business for decades realize 3D and sensory use have really always been the goals. Built-in security is also a given.

  17. Of course, Apple is the only company that can do this. They are such artists, after all; innovation is in their DNA!!!

    Their approach will be to: 1) redesign the icons; 2) add some more animations; 3) adopt new transparent overlays. Such INNOVATION! Then the marketing people can go crazy with new launches etc., with emphasis on how magical, revolutionary and amazing this innovation really is.

    As for the other little nits you suggest ... well, the Apple engineers might get to those later, if time permits. (They are so busy campaigning for marriage equality, you know, and first things must come first.)

  18. The PC is dead... I use Windows, Mac and Linux, and they're all too slow. I'm old, my reflexes are poor, yet I'm constantly sat waiting whilst the PC does trivial tasks. With all the memory and fast processors, why is that? Let me show you the future: flick the switch and within 5 seconds you're typing in a word processor. To turn off, flick the switch: instantly off, no messages about Windows doing security upgrades. No viruses, no software upgrades. Alas, this machine is the BBC Micro circa 1981.

    The PC has sunk to the level of poorness that is acceptable to the masses. Something similar happened with hi-fi in the '70s.



  19. David, this is like saying a Swiss Army knife is crap because the blade is inferior to a hunter's knife. No one questions that it is easy to handle one task really, really well. I'm not that old, maybe, but I watched computers evolve from calculating to writing to manipulating and creating images, sound, and video, to handling communication, navigation, and access to the data and information of the world. And I enjoyed every bit. Of course machines that can handle anything you throw at them have a certain level of complexity and a certain lack of direct response; this is exactly why it is such a joy to use a touchscreen. But still... slow? Really? Wake a Mac from sleep, fire up TextEdit, and type; this should be possible within 5 seconds. The same should be possible on Windows if you don't unleash Word.

    Returning to the main thesis of the article, I am not sure how any of the new input paradigms would help me edit movies better or photoshop faster. I'm not into 3D and avoid 3D movies, and I do not see real value in 3D printing unless you can print food.
    The big transformations are already happening on the information front, meaning: to understand me better, my computer must know me better. And since Google is the master here, there is always a strange smell about it.

  20. I'm sorry, but it's hard to read a piece about user interfaces when the article itself doesn't even use links correctly. Please put the damn link on the text itself instead of a stupid (link) link.

  21. Ralf, thanks for your well-reasoned comment. I agree that 3D wouldn't help you edit photos faster, but I'm more interested in the new tasks we'd be able to do with a sensory computer. And almost by definition, we won't know what those are until they get invented.

    Anonymous, sorry you don't like the links and I'll think about it, but FYI, the link style I use is a deliberate choice I made a long time ago. Traditional links call far too much attention to themselves, so your eye falls on them rather than the parts of the text that I want to emphasize. They interfere with clear communication.

  22. I don't think 3D is directly useful enough for enough people to make it the next paradigm.

    What I see as the next leap forward is when some of the items you mentioned - gestures, natural speech - combine with AI directed at recognising the work you do and actively assisting.

    Thus you won't need to design your new widget with a 3D interface in a CAD program. Instead, the computer will find you what you want - or one like it which you then tweak. Similar for many office jobs where the task is information assembly or analysis. The computer will do the grunt work and leave the user to make decisions.

  23. Perhaps 3D really is the next imaging. Think of how in 1937 you almost literally had to pay to see any kind of image, be it Scarlett on the silver screen, FDR in Life magazine, or yourself with the family in a studio. Daguerreotypes started the imaging revolution, but only computers and now mobile have made it into a reality so ubiquitous as to almost compete with the physical one. I wonder what percentage of computing power worldwide is consumed by the manipulation of visual information.

    So maybe when you can shop from your home turning that sneaker in your hands so as to see it from all angles in 3D, or when you can step inside an image of your extended family reunion for Christmas 5 years ago, this will really become our new reality, and the devices that support it our new computers.

    OTOH, I still think that voice control is much less useful than direct touch. Anything we do physically, we do with hands (discounting soccer). I'd much rather place text on the page by dragging it into place than by commanding it to move to "3 in from the left edge, 5 in from the top". It is touch that has the immediacy to match the immersiveness of 3D.

  24. Thanks for the interesting comment, Alexander.

    I wanted to follow up on one thing you said:

    >>OTOH, I still think that voice control is much less useful than direct touch.

    I think a combination of voice and touch would be more useful than either one alone. Your example is a great case in which speech commands would not be helpful. On the other hand, whenever I use a draw or paint program, I spend a lot of time mousing back and forth to change tools. It would be so nice to be able to say "paintbrush," "eraser," etc, and have the right tool chosen instantly.

    To me, the big win comes from combining multiple interface modes in intuitive ways. It's all about increasing the bandwidth between the user and the computer.

  25. >>To me, the big win comes from combining multiple interface modes in intuitive ways
    Even when a new input modality is more natural, people will still find the incumbent more intuitive. Proponents of new input modalities often underestimate how much the present interfaces have become the intuitive modality. Windows has had speech and handwriting input for over 10 years, but neither has become mainstream.
    >> I think a combination of voice and touch would be more useful than either one alone
    Combining interfaces can exacerbate the problem. Now we ask the user to relearn two interfaces that have already become intuitive. It is the same reason that learning a second language is more difficult.
    There needs to be a compelling task that the incumbent can't perform. The question isn't "what can new interfaces do better?", it is really "what application can't existing interfaces do?" Wearable computers like Google Glass are an example of a new platform, where touch is incapable of performing the fundamental tasks, so speech has a chance to become the de facto intuitive interface.
    What is the new task or platform for 3D?
