Lecture 3: Text and gesture interaction

Guest lecturer Per Ola Kristensson will present these ideas, using a case study based on his own recent research, which has led to a successful product, a recent buyout and extensive press coverage.

When technical people are commenting on, or even creating, user interfaces, they often get distracted or hung up on the hardware used for input and output. This is a sign that they haven’t thought very hard about
what is going on underneath, and also that they will never keep up with new technical advances. There have always been good and bad examples of interface designs using control panels, punch cards, teletypes,
text terminals, bitmap displays, light pens, tablets, mice, touchscreens, and so on.
With every generation, you can hear people debating whether, for example, the mouse is better than a touchscreen, or voice input is better than a keyboard. Debates like this demonstrate only that those involved haven’t been able to see past the surface appearance (and the marketing spiel of the device manufacturers). And opinions or expertise on these matters quickly get out of date. Within the past few weeks, I’ve heard a leading researcher tell his sponsors that “we have added a GUI to our prototype”, as if that was an important thing. Twenty years ago, it was something of an achievement to get output on a bitmap display rather than in a command line text application. Nowadays, it is more challenging to work with projection surfaces or augmented reality (more on that in a later lecture). But sensing and display technologies change fast, and it’s more important to understand the principles of interaction than the details of a specific interaction device. The lecture on visual representation was based on display principles that are independent of any particular display hardware. If we consider the interaction principles that are independent of any particular hardware, these are:
- How does the user get content (both data and structure) into digital form?
- How does the user navigate around the content?
- How does the user manipulate the content (restructuring, revising, replacing)?

These are often interdependent. The Dasher system for text entry presents an interface in which the user navigates through a space of possible texts, as predicted by a probabilistic language model, so it can be considered as both content creation and navigation. It is relatively hard to structure and revise text using Dasher, because the language model only uses a character context, and many text documents have structure on a larger scale than that. However, Dasher provides an excellent example of an interaction paradigm that is independent of any particular hardware – it
can be controlled using a mouse, keys, voice, breath, eye tracking, and many other devices.
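To make that paradigm concrete, here is a minimal sketch of the principle Dasher builds on: a character language model divides an interval among the possible next characters in proportion to their probability, and the display renders those slices as nested boxes that the user steers into. The toy bigram model and corpus below are illustrative stand-ins only; Dasher itself uses a much stronger adaptive model (a PPM variant) and an arithmetic-coding view of the text.

```python
# A minimal sketch of the principle behind Dasher's display, assuming a toy
# character bigram model (the real system uses a stronger adaptive model).
# Each possible next character gets a slice of the unit interval sized in
# proportion to its conditional probability; Dasher draws these slices as
# nested boxes that the user zooms into in order to compose text.

from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count character bigram frequencies in a training corpus."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def next_char_intervals(counts, context):
    """Map each candidate next character to a sub-interval of [0, 1),
    with width proportional to its conditional probability given the
    single-character context."""
    dist = counts.get(context)
    if not dist:
        return {}
    total = sum(dist.values())
    intervals, lo = {}, 0.0
    for ch, n in dist.most_common():
        intervals[ch] = (lo, lo + n / total)
        lo += n / total
    return intervals

# After 't' this corpus has seen 'h' twice and 'l' once, so the 'h' box
# would occupy two thirds of the display at the next zoom level.
counts = train_bigram("the quick brown fox jumps over the lazy dog quietly")
print(next_char_intervals(counts, "t"))
```

Whatever device supplies the steering signal (mouse, gaze, breath), the interaction reduces to choosing a point within these intervals, which is why the paradigm is independent of any particular hardware.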