The direct-manipulation metaphor has made computers more accessible to many users, but as the computer world changes we see the limits of that metaphor. One emerging problem is search and filtering in large information spaces. The passive behaviour that the direct-manipulation metaphor offers may have to be complemented with active behaviour on the part of the system. We must trust our systems to take on some of the responsibility for searching and filtering information.
One way of making the system active is to make it adaptive to its users. An adaptive system will actively follow users’ actions and try to infer some of their characteristics. Based on its beliefs about the user, held in a so-called user model, the system can then actively help users to perform actions.
The best-known active systems today are so-called agents or personal assistants. Personal assistants have been proposed as a means of dealing with the problem of shared responsibility between system and user (Maes, 1994). From the user’s perspective, an agent takes on parts of a problem and "runs errands" on behalf of the user. Some of these agents exhibit more or less intelligent behaviour. For example, there are agents that observe users’ habits when reading and sorting their mail. Such an agent tries to infer regularities in the user’s behaviour and may then ask whether a rule should be added so that the system can automatically perform the corresponding action on the user’s behalf. A typical rule could be "place any mail sent to the mailing list eace in the eace-folder and then delete it from my mail-box".
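To make this concrete, the sketch below shows one way such a rule might be represented and applied; the rule format, class names, and helper functions are illustrative assumptions, not the design of any particular agent.

```python
# A minimal, hypothetical sketch of the kind of rule a mail agent might
# propose after observing regular behaviour. Names and structure are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Mail:
    sender: str
    to: str
    subject: str

@dataclass
class Rule:
    """E.g. 'place any mail sent to the mailing list eace in the
    eace-folder and then delete it from my mail-box'."""
    mailing_list: str
    target_folder: str

    def matches(self, mail: Mail) -> bool:
        return mail.to == self.mailing_list

def apply_rules(mail: Mail, rules, folders: dict, inbox: list):
    """File the mail under the first matching rule and remove it from the inbox."""
    for rule in rules:
        if rule.matches(mail):
            folders.setdefault(rule.target_folder, []).append(mail)
            inbox.remove(mail)
            return rule
    return None  # no rule matched; the mail stays in the inbox

# Usage: the rule cited in the text, for the 'eace' mailing list.
inbox = [Mail(sender="colleague@example.org", to="eace", subject="Minutes")]
folders = {}
apply_rules(inbox[0], [Rule("eace", "eace-folder")], folders, inbox)
# folders == {'eace-folder': [Mail(...)]}, inbox == []
```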
There are very few studies on how users react to and trust systems that are active and adaptive towards them. Some studies indicate that such systems are not always accepted (Maskery, 1985; Morris, Rouse, and Ward, 1988). If the adaptations change how the user interacts with the system they might be disturbing: the user might have learnt the original instructions and is then forced to relearn how to interact with the system as it adapts. Furthermore, if it is not possible to inspect and alter the assumptions made in the user model, there will be no way for the user to correct erroneous assumptions made by the system. Some systems make quite sophisticated inferences based on users’ actions, and it might therefore be hard for users to predict how the system will adapt to them.
Adaptivity of systems is therefore a two-edged sword: it may hinder users as much as it helps them. Much research is needed both to determine how adaptivity can be mastered technically and to determine how users accept the adaptive features, adapt to them, and adapt them in turn.
Some research questions related to these issues concern:
• how can the limitations of the systems be made transparent to users?
• how can adaptations be designed not to confuse users – should the system be predictable in some sense?
• which limitations are acceptable/learnable in different work situations?
• which adaptation strategy should be used for different users/work situations?
• how can the adaptivity be made robust and reliable, both from an algorithmic point of view and in terms of users’ trust in these systems?
• how do we develop adaptive systems – which methods are applicable, how do we identify relevant user characteristics, and when are adaptive solutions relevant/useful?
These questions relate to basic issues of technological method as well as to work task analysis, general learning, and individual differences. For the latter, the work on adaptivity requires an understanding of human cognition and learning.
In this thesis we study the problem of making an information system adaptive to its users and their needs while still maintaining users’ acceptance of the system. We describe the development of one adaptive hypermedia system, starting from studies of the users, via the design of the system, the implementation, and the bootstrapping of the adaptive parts, to the evaluation of the prototype system. Of the research questions outlined above, we shall try to address some aspects of the first five, with an emphasis on making the adaptivity transparent to the user by making it predictable and thus gaining the user’s trust in the system. The last research question, on methods for developing adaptive systems, is partly tackled in this thesis insofar as we discuss when adaptive solutions may be relevant and useful, but we do not provide an efficient method for developing adaptive systems.
Research Challenges for Adaptive Systems
Most research in the area of adaptive systems (or intelligent interfaces) has its roots in artificial intelligence (AI). With some notable exceptions, such as (Oppermann, 1994; Meyer, 1994; Benyon and Murray, 1993), the focus has not been on the usability aspects of these systems or on the practicality of the technical approach, but rather on classical artificial intelligence problems, such as the choice of knowledge representation, inferencing, machine learning, etc. This has led to numerous problems in the practical application of adaptive techniques.
Active but Not Magical
One serious problem is that systems developed from the AI viewpoint sometimes violate basic principles of user interface design. One such basic principle is that a system should be predictable: there should be a stable relationship between actions made by the user at the interface and system responses (Shneiderman, 1987). Obviously, an adaptive system cannot adhere to a strict interpretation of the predictability principle, since the whole purpose of an adaptive system is to change its behaviour in reaction to the user’s actions. Thus, when we design an adaptive system, we must be aware that we are bending some of the fundamental principles of usability, and we should therefore try to find substitutes for the extremely straightforward stimulus-response behaviour of ordinary systems.
In the childhood of another field of AI, knowledge-based systems, we saw similar usability problems. Some knowledge-based systems appeared almost magical, since they would ask for some facts and then, without any interaction, come back with the answer or diagnosis. This magical behaviour alienated users (Berry and Broadbent, 1986; Pollack et al., 1982). Still, some of the problems we are facing, like information overload, cannot easily be solved without making systems actively help the user – so the challenge lies in making the system active without it appearing magical.
Continuous Improvisation
Another problem with research in the area of adaptive systems is the focus on how to extract knowledge about the individual user, rather than on which adaptive behaviour would improve the system most and solve most problems on behalf of the user (Self, 1988; Sparck-Jones, 1991). Much work has been devoted to the acquisition of the user/student model, the representation of the model in the system, and the maintenance of the model. Perhaps this is why there has been such a strong focus on modelling users’ knowledge rather than other aspects of human cognition.
When modelling users’ knowledge we run into another problem, namely the reliability of these models. A user’s knowledge is not static – it keeps changing; we learn and we forget, and we make mistakes for reasons other than lack of knowledge (such as getting tired or being distracted by other tasks). Most models of users’ knowledge will be unreliable (Kay, 1994). Unfortunately, the same is true for models of users’ goals and plans. The emerging theories of situated cognition, constructive cognition, and distributed cognition challenge both the goal-plan-oriented view of human behaviour and the previously held symbolic view of cognition (Suchman, 1987). Users may not be as goal-oriented and rational as some of the proposed adaptive features require. People act based on the situation they are in right now, so their goals and plans keep changing in response to how the situation changes and develops. If the adaptive system assumes too rigid and static a model of the user’s plans and goals, it will not be able to capture the "continuous improvisation" that people are involved in (Suchman, 1987). We need to further our knowledge of human cognition in order to find good adaptive features that will, in fact, correspond to human behaviour, increase the usability of systems, and be computationally feasible to implement.
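One way of acknowledging this unreliability, offered purely as an illustration and not as the modelling approach of this thesis or of any cited system, is to attach a confidence value to each belief in the user model and to let that confidence decay over time unless the belief is reconfirmed by new observations. The decay rate and threshold in the sketch below are arbitrary assumptions.

```python
# Illustrative sketch only: a user-model belief whose confidence decays over
# time, reflecting that assumptions about a user's knowledge grow stale.
# The half-life and the 0.5 threshold are arbitrary assumptions.

import math
import time
from typing import Optional

class Belief:
    def __init__(self, concept: str, confidence: float, half_life_days: float = 30.0):
        self.concept = concept
        self.confidence = confidence               # strength at observation time, 0.0-1.0
        self.half_life = half_life_days * 86400.0  # seconds
        self.observed_at = time.time()

    def current_confidence(self, now: Optional[float] = None) -> float:
        """Exponential decay: the confidence halves every half-life."""
        now = time.time() if now is None else now
        age = now - self.observed_at
        return self.confidence * math.pow(0.5, age / self.half_life)

    def reconfirm(self, confidence: float) -> None:
        """A new observation resets the belief's strength and timestamp."""
        self.confidence = confidence
        self.observed_at = time.time()

# The system would adapt only on beliefs that are still sufficiently strong,
# and should let the user inspect and override them.
belief = Belief("knows how to refine a search query", confidence=0.9)
if belief.current_confidence() > 0.5:
    pass  # e.g. omit the introductory explanation of query refinement
```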
Scale Up
In addition to all the problems outlined above, the field of adaptive systems suffers from the same problems as the whole artificial intelligence field: these systems have mostly tackled small, toy-world examples and have not convincingly shown how they can scale up to large, real-world problems (Schank, 1991). There are two promising directions of research that may tackle this problem. One is the machine learning approach, where the whole purpose is to make the program learn and thereby handle new situations and scale up (even if such systems will not be able to scale up in the sense of covering unanticipated consequences of real-world problems). Some promising machine learning systems are emerging, particularly in the agent field. The other direction is to marry simple tools, like hypermedia, with more sophisticated adaptive/intelligent programs and to turn to the Internet. By adding robust and useful adaptivity to simple, widespread tools, like the World Wide Web (WWW) or e-mail, we stand a chance of succeeding in spreading (moderately) intelligent tools to a general audience. In this thesis, we are mainly concerned with the second approach.
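As a rough illustration of this second approach (a sketch under assumed names and a simplified scoring scheme, not a description of the prototype developed in this thesis), an adaptive component might do no more than rank and annotate the links on a hypermedia page according to the user model, leaving the underlying pages and browser untouched.

```python
# A rough, assumed sketch of adaptive link annotation in hypermedia: the
# links on a page are ranked and labelled according to a simple user-model
# relevance score. The scoring scheme is an illustration only.

def annotate_links(links, user_interests):
    """Rank links by the overlap between their keywords and the user's interests."""
    def relevance(link):
        return len(set(link["keywords"]) & user_interests)

    ranked = sorted(links, key=relevance, reverse=True)
    return [
        {**link, "annotation": "recommended" if relevance(link) > 0 else "other"}
        for link in ranked
    ]

# Usage with two hypothetical links and a user interested in 'maintenance'.
links = [
    {"href": "history.html", "keywords": ["background"]},
    {"href": "plans.html",   "keywords": ["planning", "maintenance"]},
]
print(annotate_links(links, user_interests={"maintenance"}))
# plans.html is listed first and marked 'recommended'; history.html is 'other'.
```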
In this context, we would also like to point to the difficulties in developing adaptive systems. There are few attempts at providing a good methodology, preferably covering the whole software life cycle, for developing and maintaining adaptive systems. In particular, most adaptive systems are difficult to maintain: only the expert designer has enough knowledge to change the knowledge base or the adaptive behaviour of the system. Much the same problems were encountered in the childhood of knowledge-based systems.
Imitating Human-Human Communication
Some creators of intelligent systems have put forward human behaviour as the ultimate goal for what a system should be capable of in order to be of most use to users. This is the case in, for example, some natural language processing systems and some agent systems. Even if human behaviour is the best example of intelligence we can study, this does not mean that we should necessarily imitate it. There are two reasons for this. First, a computer system acting like a human creates high expectations, but not necessarily the right expectations. Users might assume that the system is very capable in certain ways, for example attributing real-world knowledge to a system that can only process user input in a very limited sense. At the same time as users may overestimate the system’s capabilities, they will not treat the system as a fellow human being in every respect (Dahlbäck et al., 1993). Users are, for example, not polite to computers, and they do not indicate shifts in dialogue focus in the same way as with fellow human beings. Any system design based on imitating some aspect of human behaviour must therefore not assume that this alone will be sufficient to make the user and the system work together smoothly.
The second reason not to imitate human behaviour is that we should utilise computers for what computers are good at. Computers are, for example, good at handling, searching for, and sorting information, but not (yet) as good at solving problems that require real-world knowledge or at communicating in natural language.
Imitating human-human communication is not only difficult, it may not lead to an optimal design. Instead, our starting point is that the adaptive parts of the system should be used to tackle problems that cannot easily be tackled by other means. Such problems include information overload, navigation in large information spaces, complex tasks such as managing complex machines or factories, and real-time critical tasks such as traffic control. The adaptive parts of the system should be designed to be part of the total solution, but not be considered the sole remedy for the problem. Usability principles must be used to ensure that the whole solution, both the adaptive and the non-adaptive parts, is usable.