Would Microsoft's Kinect make a viable input technology?

If Microsoft brings Kinect to the PC, how will it work and will it be worth using?

This morning, ITworld's Peter Smith discussed rumors that Microsoft may be closer than anyone expects to bringing the gesture-based input of the Xbox Kinect to Windows. The concept easily conjures up images from science fiction, including the gesture-based displays seen in the movie Minority Report and TV shows like Caprica and Earth: Final Conflict.

His post got me thinking about what a Kinect-style user interface would look like - and whether, beyond gaming, it would make sense as an option for Windows at all.

First up, the interface. Broad gestures for gaming are one thing, but everyday computing is going to involve more than simple gestures (and it certainly isn't likely to involve full-body gestures). A Kinect-style PC interface is going to focus largely on the hands and fingers. The scenes in Minority Report that involved a gesture-based control system focused on finger manipulation of screen content (mostly images and video). They involved using a finger to reach out and "touch" an object, expand or contract it with a pinch-style gesture, and a simple drag of the hand to move it.

Do those gestures sound familiar? They're virtually identical to the multi-touch input on any tablet or smartphone. In fact, I remember watching Minority Report again a couple of years back and thinking that the Kinect-style screen was nothing but a giant iPhone.

So, clearly a Kinect-style computing interface is going to borrow heavily from what we're already used to on these devices. That raises a big question about such technology: is there a real point to it? After all, it is possible to make large touchscreens, and they don't require the processing power to interpret gestures that a Kinect-style system does.
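To see just how much overlap there is, here's a toy sketch (not any real Kinect or touch API) of classifying a pinch versus a drag from two tracked fingertip positions. The point is that the logic is identical whether those points come from a touchscreen or a depth camera; the coordinates and thresholds below are made up for illustration.

```python
def classify_gesture(prev_points, curr_points, pinch_threshold=0.05):
    """prev_points/curr_points: two (x, y) fingertip positions in normalized
    screen coordinates. Returns 'pinch-in', 'pinch-out', 'drag', or 'idle'."""
    def distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    prev_spread = distance(*prev_points)
    curr_spread = distance(*curr_points)
    spread_change = curr_spread - prev_spread

    # A change in the gap between the two fingertips reads as a pinch.
    if abs(spread_change) > pinch_threshold:
        return "pinch-out" if spread_change > 0 else "pinch-in"

    # If the spread is stable but the points moved together, treat it as a drag.
    moved = distance(prev_points[0], curr_points[0]) > pinch_threshold
    return "drag" if moved else "idle"


# Hypothetical frames: thumb and index fingertip positions from any tracker.
print(classify_gesture([(0.40, 0.50), (0.60, 0.50)],
                       [(0.30, 0.50), (0.70, 0.50)]))  # pinch-out (zoom in)
print(classify_gesture([(0.40, 0.50), (0.60, 0.50)],
                       [(0.50, 0.60), (0.70, 0.60)]))  # drag
```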

There are a couple of related questions that also need to be asked.

Are there potential gestures or other advantages for everyday computing that Kinect (or Kinect-type) hardware can offer that a solid surface can't? Actually, there are a couple.

First up, there's less of a physical limit to the input area. If the camera has a wide enough field of view, you could continue manipulating objects after your hands pass what would be the edge of the screen. This could work for scrolling or zooming in and out on content (though I'm not sure it's a completely compelling argument). It could also allow the screen content to follow your gestures and begin displaying off-screen content. The obvious analogue is gaming, but a more interesting one is the way Windows Phone 7's interface allows you to swipe to display additional off-screen application content but treats that content as part of a single logical display rather than as discrete screens. That would require Microsoft and developers to borrow from Windows Phone 7's design concept, but that probably wouldn't be a problem.
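A minimal sketch of that "input area wider than the screen" idea, assuming a hypothetical tracker that reports the hand's horizontal position across a camera field of view that extends past the display (values outside 0..1 are off-screen); the scroll speed is an arbitrary illustrative number:

```python
def map_hand_to_screen(hand_x, scroll_speed=800):
    """Returns (cursor_x, scroll_delta_px). Inside the 0..1 range the hand just
    drives the cursor; past either edge, the overshoot pans the content instead."""
    if hand_x < 0.0:
        return 0.0, hand_x * scroll_speed          # negative delta: pan left
    if hand_x > 1.0:
        return 1.0, (hand_x - 1.0) * scroll_speed  # positive delta: pan right
    return hand_x, 0.0


print(map_hand_to_screen(0.75))   # (0.75, 0.0)   -> ordinary pointing
print(map_hand_to_screen(1.20))   # (1.0, 160.0)  -> keep panning off-screen content
```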

The other advantage is support for 3D gestures, which you obviously can't do on a solid surface. That has natural implications for gaming, but I'm not entirely sure how it would apply to most everyday computing tasks. Such gestures would need to be simple and natural, and they'd face the challenge of working with a 2D screen environment (until we see 3D computing follow in the footsteps of 3D TV). One area where they could work is certain types of graphics work, where 3D gesturing could function much like a pressure-sensitive graphics tablet.
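Here's a rough sketch of that tablet analogy, assuming a hypothetical depth reading in metres from the sensor to the hand; the "engage" and "full-press" distances are made-up values, not anything from an actual Kinect SDK:

```python
def depth_to_pressure(hand_depth_m, engage_at=0.60, full_press_at=0.40):
    """Map hand distance from the sensor to a 0..1 brush pressure.
    Farther than engage_at: no contact; closer than full_press_at: full pressure."""
    if hand_depth_m >= engage_at:
        return 0.0
    if hand_depth_m <= full_press_at:
        return 1.0
    # Linear ramp between the two distances.
    return (engage_at - hand_depth_m) / (engage_at - full_press_at)


print(depth_to_pressure(0.70))  # 0.0  -> hovering, brush not touching
print(depth_to_pressure(0.50))  # 0.5  -> half pressure
print(depth_to_pressure(0.35))  # 1.0  -> full pressure
```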

That said, 3D gestures could be useful for TV and home theater control. Hand gestures could drive all kinds of onscreen menus and, with the right control device, replace universal remotes. Such a device could even be integrated into home automation systems and offer its own suite of apps. Of course, you could make a similar argument for voice-controlled systems, but a screen-based solution would probably be more versatile.
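In practice, that kind of control box mostly reduces to a lookup from recognized gestures to remote-style commands. The gesture names and commands below are hypothetical, purely to illustrate the shape of such a dispatcher:

```python
GESTURE_COMMANDS = {
    "swipe-left":  "channel_down",
    "swipe-right": "channel_up",
    "raise-hand":  "volume_up",
    "lower-hand":  "volume_down",
    "push-palm":   "pause_toggle",
}

def dispatch(gesture, send_command=print):
    """Look up a recognized gesture and hand the command to the hub."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return  # unrecognized gestures are simply ignored
    send_command(command)


dispatch("push-palm")    # pause_toggle
dispatch("swipe-right")  # channel_up
```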

Of course, the solid surface or tablet-style interface has its own advantages for both general computing and home theater or automation. There's no need to stand in front of a camera, for one. There's also physical feedback, and I'm not even talking about things like haptic feedback. The simple act of tapping, dragging, and releasing an onscreen object may be simpler and feel more natural to some people than touching thin air and seeing something happen onscreen.

Then there's the question of whether Microsoft is the right company to pull this off. Windows tablets were around for years before the iPad, but Microsoft remained (and still remains) wedded to the idea of old-school computing interfaces on tablets. Creating an everyday computing environment built around 3D gestures is going to take at least as much re-imagining as, if not more than, what Apple put into the iPhone and iPad.

In any case, you have to stand back and think that it's somewhat amazing that we're seeing technology arrive on the market that was the realm of science fiction ten or fifteen years ago. It definitely makes me think that the future centuries in which so much science fiction is set will probably look nothing like the writers, set-designers, and producers dreamed up.

Ryan Faas writes about personal technology for ITworld. Learn more about Faas' published works and training and consulting services at www.ryanfaas.com. Follow him on Twitter @ryanfaas.
