We've moved beyond devices. Our smartphones and tablets have become extensions of ourselves. And the clearest example of this is Google's upcoming Glass. Certainly, it's the most organic electronic extension. Yet it might be the most limiting.
With Glass, we'll see the world slightly differently. Products and apps will all be within the blink of an eye. Now Google has given us a peek into how folks will interact with and use the device. Check out this video and notice how it might not be as liberating as you'd think:
A lot of the interaction is through voice, which could limit where Glass can actually be used. Sure, it'll be terrific in a car, where we're hands-free. But we know how frustrating voice commands can occasionally be (we're looking at you, Siri). For example, Siri:
- Can't deal with heavy accents
- Misinterprets voice commands
- Doesn't support many other languages
That's not to say there aren't advantages. There are. Like we said, voice is convenient for staying hands-free while driving, especially when sending text messages. You don't have to use your fingers to actually operate your phone. And Glass seems like the next step in this hands-free evolution. But you can't use voice all the time.
Think about it. You're in a library and you need to Google something. It becomes difficult to use voice commands; you might disturb others. Or the reverse: you're at an outdoor concert with wind, crowds and speakers. The noise might render your Glass useless. It wouldn't be able to make heads or tails of what you're saying. You won't be able to tell it to record the concert or snap pictures of your friends. And those of us who wear prescription glasses might be out of luck (although we could foresee a prescription model down the road). Using them for scuba diving might be a bit difficult, too.
Right now, it also seems that you'd be completely reliant on voice, so you wouldn't be able to manipulate data directly. Although that capability might not be too far off.
While Glass might seem limiting, it's still exciting to see it come one step closer to being in our actual hands. For all its limitations, there's plenty of opportunity opened up by it. How do designers work around these constraints? How do we build products where we can manipulate data or tools without our hands or gestures, using only our voice?
Limitations are only constraints by another name. And constraints can force us to design smarter, which can actually be liberating.