Kinect, Surface, and the future of computing interfaces [video]

Last week, after I reported on rumors that Microsoft will release a Kinect SDK for Windows in the next few months, I was talking to some friends about the potential of Kinect as a controller for a PC. What I got out of that conversation is that while these new interfaces are intellectually interesting, there's some skepticism about practical applications. People tend to resist change when it's presented to them as such. My buddies are just fine with a keyboard and mouse and see motion controls as more of a gimmick than anything. At least they do for now.


I don't think that will last. Once Kinect-like controls become commonplace, we'll find places where they work well and places where they don't. Just a few years ago the idea of using a touchscreen was met with skepticism in some circles, and, at least in my opinion, that was because people imagined touchscreens simply replacing existing interface elements. For the most part that hasn't happened. Most of us still use a mouse with a large vertical screen, but on a horizontal surface touchscreens, accompanied by new "gesture" controls like flicking, pinching, and sliding, feel completely natural. So as new control schemes like Kinect's become prevalent, I believe our user interfaces will change to take advantage of them in ways most of us haven't really imagined yet.

This is where the Kinect hackers are really blazing trails by trying new things; at some point someone is going to invent the Kinect equivalent of pinch-to-zoom. It may be a Kinect hacker or it may be a Microsoft engineer. (I'll sketch out what such a gesture might look like at the end of this post.)

Which brings me to a post at Microsoft's TechNet blog: Microsoft is Imagining a NUI future. In it, author Steve Clayton talks about Natural User Interfaces and points out ways they've already infiltrated our lives: touchscreens at grocery store checkouts, voice commands in our cars. We don't really think about these things as new ways of controlling technology; at this point they're more or less transparent to us (at least until they malfunction).

I've been paying the most attention to Kinect, and Clayton talks about Kinect hacks, but he also discusses programs that use Microsoft Surface to offer therapy to kids with cerebral palsy, and a robotic triage nurse that Microsoft Chief Research and Strategy Officer Craig Mundie demonstrated at a Cleveland Clinic "Ideas for Tomorrow" lecture. Here's a clip from that talk. If you're pressed for time, skip to the five-minute mark to see the rudimentary system in action:

Keep in mind this is an early prototype. Obviously a lot of the back-and-forth we see here could be done via an on-screen form and a keyboard, but as the graphics and "personality" improve (making the system easy to use even for people not familiar with computers) and the system gets better sensors to observe patients, this idea could get really interesting.

Kinect on Windows might be a bit of a gimmick at first, but over time we'll find places where a gesture or a spoken word is easier than reaching for a mouse or typing out a command. From there we'll branch out to other ways of interacting with our computers and, I suspect, start moving away from them at the same time. Soon enough we won't be tethered by mouse and keyboard. We'll have a nice big LCD display mounted on the wall, and our desktop will be Surface-enabled (or something similar). At that point the computer becomes another part of the home or office. We can pace and gesture and pick things up and move from room to room, and the computer will know where we are, what we're doing, and what we're holding, and be able to assist us in whatever task we're trying to accomplish. If we leave the house, the whole process will jump to our tablet or handset and go with us.

At least, that's the future I imagine. It still seems a bit like sci-fi, but we're moving in that direction. What's the next step after that? Now that our computers have "eyes" to watch us, will they start to be proactive in assisting us? How about mobile manipulators for the computer? Robots, essentially, but with no on-board smarts.

My mother is getting on in years, and as I write this she's in the midst of an extended stay in the hospital. A big question on all my family members' minds is whether she'll be able to take care of herself at home or whether she'll have to move into some kind of elderly care facility. My imagined house-computer could keep an eye on her. If she fell, it could call for help. If she forgot to take her medicine, it could remind her. These aren't new ideas; anyone who remembers Rosie the Robot from The Jetsons knows that. But they're starting to seem achievable in the near future. I hope that a generation from now, people like my mom will have household computer systems with robotic avatars that can help them stay in their own homes longer.

So yes, I'm very excited about the Windows SDK for Kinect. It may seem like a gimmick today, but I think it's the next step on the path to giving our technology awareness of what we're doing, and the ability to help us in new and interesting ways.
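As a postscript for the programmers out there: here's a rough, purely hypothetical sketch of how a "Kinect pinch-to-zoom" might work. To be clear, this is not the real Kinect SDK (which hasn't even shipped yet); the data structures and numbers below are all made up for illustration. The idea is just to show that once a sensor gives you tracked hand positions, mapping them to a familiar gesture is surprisingly simple: watch the distance between the two hands and treat spreading them apart as zooming in.

    # Toy sketch only: invented data structures, NOT the real Kinect SDK.
    # Assumes some hypothetical sensor hands us 3D hand positions each frame.

    from dataclasses import dataclass
    import math

    @dataclass
    class Hand:
        x: float  # position in meters, in the sensor's coordinate space
        y: float
        z: float

    def hand_distance(left: Hand, right: Hand) -> float:
        """Straight-line distance between the two tracked hands."""
        return math.sqrt((left.x - right.x) ** 2 +
                         (left.y - right.y) ** 2 +
                         (left.z - right.z) ** 2)

    def zoom_factor(prev_left: Hand, prev_right: Hand,
                    cur_left: Hand, cur_right: Hand,
                    dead_zone: float = 0.05) -> float:
        """Map the change in hand separation to a zoom multiplier.

        Spreading the hands apart zooms in; bringing them together
        zooms out. Changes smaller than dead_zone (meters) are ignored
        so ordinary fidgeting doesn't trigger the gesture.
        """
        before = hand_distance(prev_left, prev_right)
        after = hand_distance(cur_left, cur_right)
        if before == 0 or abs(after - before) < dead_zone:
            return 1.0  # no zoom
        return after / before

    # Example: hands spread from 0.4 m apart to 0.6 m apart -> 1.5x zoom in
    prev_l, prev_r = Hand(-0.2, 0.0, 2.0), Hand(0.2, 0.0, 2.0)
    cur_l, cur_r = Hand(-0.3, 0.0, 2.0), Hand(0.3, 0.0, 2.0)
    print(zoom_factor(prev_l, prev_r, cur_l, cur_r))  # prints 1.5

A real implementation would need smoothing, plus some way to "engage" the gesture so the view doesn't zoom every time you stretch your arms, but the core idea fits in a few lines. That's exactly the kind of convention I expect hackers and engineers to work out over the next few years.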

Peter Smith writes about personal technology for ITworld. Follow him on Twitter @pasmith.
