Did you know the keyboard (not even counting the typewriter) is over 70 years old? That the mouse is 40 years old? What are you using right now to control your computer? I'd be willing to bet that you're using a keyboard or a mouse – okay, maybe a trackpad (14 years old).
There's no denying that the keyboard and mouse were masterstrokes of human-computer interaction (HCI). The amount we can do today using these two ancient input devices (in computer years, anyway) is pretty impressive. We managed to send men to the moon without mice, we sequenced the human genome with input devices whose ancestry dates to the mid-1800s, and ground-breaking data visualization is still geared toward the keyboard and mouse.
Are we stuck on these devices? Will we be using the keyboard and mouse for another 60, 80, 100 years? No. We're finally making some progress when it comes to HCI.
David Pogue, a tech columnist for the New York Times, uses voice recognition for almost all of his column content and emails, and has for several years.
For all that Pogue loves his system, it's not right for everyone. Voice recognition still requires training, still makes corrections cumbersome, and isn't well suited to an office environment. When I can say "computer, set up a new project in the repository" – then we'll be getting somewhere.
If you're reading this on a recent Mac or one of many recent PCs, you already have a built-in camera. Check this out: software to control iTunes using hand gestures, detected by the built-in camera. While the implementation may be sketchy, the idea absolutely is not. Humans naturally communicate with gestures, and how we interact with our computers should be no different.
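The simplest way software like this can spot a hand wave through a webcam is frame differencing: compare each frame's pixels against the previous frame and count how many changed noticeably. A toy sketch of the idea, using plain lists of grayscale values in place of real camera frames (the threshold numbers are arbitrary assumptions, not from any actual product):

```python
def motion_detected(prev_frame, curr_frame, pixel_threshold=30, min_changed=5):
    """Compare two grayscale frames (flat lists of 0-255 values) and
    report motion when enough pixels changed beyond the threshold."""
    changed = sum(
        1 for p, c in zip(prev_frame, curr_frame)
        if abs(p - c) > pixel_threshold
    )
    return changed >= min_changed

still = [10] * 100                      # a static scene
waved = [10] * 90 + [200] * 10          # a bright hand enters one corner
print(motion_detected(still, still))    # → False
print(motion_detected(still, waved))    # → True
```

A real gesture system goes further – tracking *where* the changed region moves between frames to tell a left swipe from a right swipe – but the pixel comparison above is the foundation.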
The iPhone made motion detection hot. The Nintendo Wii built an entire platform around it. Accelerometers + gravity make for a killer input type, one we're only taking the first steps with right now. Give iPhone app developers a little more time to work with the idea, then give Nintendo a while to put out a Wii 2, and we'll really be off to the races.
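Gravity is exactly what makes an accelerometer useful as input: a device at rest measures a constant 1 g pull, and the way that vector projects onto the device's axes reveals its tilt. A minimal sketch of the math, assuming a common axis convention and readings in g (this isn't any particular device's API):

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll (degrees) from raw accelerometer
    readings, assuming the device is roughly at rest so the only
    acceleration it measures is gravity."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat: gravity entirely on the z-axis, no tilt.
print(tilt_from_accel(0.0, 0.0, 1.0))   # → (0.0, 0.0)
# Device rolled onto its side: gravity on the y-axis, 90° of roll.
print(tilt_from_accel(0.0, 1.0, 0.0))   # → (0.0, 90.0)
```

Steering a car in a Wii or iPhone game is little more than mapping that roll angle onto a turn rate.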
Of course we can't talk about the iPhone without talking about touchscreens. While touchscreens can simply function as keyboard/mouse analogues, the real magic is happening in multi-touch interfaces such as the iPhone, recent trackpads and Microsoft's Surface concept. All of these products bring us baby steps closer to manipulating digital content exactly the way we manipulate real-world objects.
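Under the hood, a gesture like pinch-to-zoom is simple geometry: track the distance between two touch points, and the ratio of the new distance to the old one is your zoom factor. A rough sketch with made-up touch coordinates rather than a real touch API:

```python
import math

def pinch_scale(a_start, b_start, a_end, b_end):
    """Return the zoom factor implied by two fingers moving from
    their start positions to their end positions (x, y pairs)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(a_end, b_end) / dist(a_start, b_start)

# Fingers spread from 100px apart to 200px apart: zoom in 2x.
print(pinch_scale((100, 100), (200, 100), (50, 100), (250, 100)))  # → 2.0
```

That directness – your fingers' motion maps one-to-one onto the content's scale – is what makes multi-touch feel like handling a physical object.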
Believe it. Multiple prototypes have already been developed and consumer products are on their way. Get excited about something and your computer changes your music playlist to match. Wink and your online avatar winks. Concentrate to cast a spell in a game. Use a computer to control a robotic arm. These aren't idle ideas; these are applications people are working on right now, with great success. Think about that.
What can we do now?
At ZURB, as at a lot of agencies and on freelance projects, we work in a highly constrained environment – IE6 doesn't exactly support thought control, and Firefox doesn't respond when you say "go to apple.com/macosx" (though it now supports multi-touch gestures!). Maybe they will someday – in fact, I'm sure they will. In the meantime we have a few new tricks to work with.
So why do we still use the keyboard and mouse?
They're pretty good input devices, and everyone is used to them. Change in any industry, especially one as large as computing, doesn't usually happen overnight. Or does it?
It can, under exceptional circumstances. Touch screens have existed for over three decades, but for years they were relegated to gimmicky, poorly executed kiosks until Apple took the idea and polished it to a mirror shine. Love it or hate it, the iPhone changed the game almost overnight when it comes to mobile device interaction. Not just good ideas but good design will bring new HCI to the masses; great design will bring it faster.
Input is important for everyone who uses a computer. The keyboard and mouse (and offshoots like trackballs, trackpads and tablets) have served us well for a long time, but evolving hardware capabilities mean we can now explore more natural, more innovative input methods. We as designers can already push this forward in a few ways, with more on the horizon. It's time to start thinking not only about the best way to get information out, but about the best way to bring it in.