A Glass Box Approach to Adaptive Hypermedia




Where is the Glass Box?


We claimed that it was important to allow the user to inspect the user model, but only through a glass box, preventing the user from seeing the gorier details of the adaptive mechanism hidden in the black box. This would enable the user to stay in control of the adaptive mechanisms, and achieve transparency and some form of predictability. The relation between the system's assumptions and the corresponding adaptations should be made visually clear, so that the user can learn how the adaptive system works at the glass box level.

We also said that in order to make the adaptivity controllable, the interface must make it obvious in what ways the explanations are different. Finally, we said that we should only attempt to model such characteristics of users that will profoundly affect the interaction and help to tackle the information overflow problem.

Does the decision to adapt to users' information-seeking tasks, in the way outlined above, have the potential to fulfil our design criteria?

Choosing to adapt to the task does tackle the information overflow problem within an answer page, and it does profoundly affect the interaction with the user. I believe that adapting to users' tasks is much better than adapting to users' background knowledge. This adaptation makes the answers provided relevant to users' needs, it reduces the amount of information provided in the answer page, it lays the basis for a relation between adaptation and resulting answer that is comprehensible to the user, and, finally, as a by-product, it makes the authoring problem much simpler. Also, we first attempted a classical approach, adapting to users' background knowledge, and it did not work very well. In particular, adaptations to a particular user's knowledge are unreliable, as we do not have a good enough basis for making any detailed assumptions about his/her knowledge; furthermore, such a user model will sometimes achieve only minor adaptations of the explanations (as discussed when we examined KNOME and KN-AHS in chapter 2).

Concerning the visual cues to how the explanations differ, we provide a whole range:


  • as we open and close whole information entities using stretchtext, it is obvious that the system is adapting the answer page since whole chunks of text will either be shown or not

  • the different explanations in the various information entities are different in terms of choice of style (declarative or procedural, subject-direct or neutral, etc.), choice of concepts mentioned (either basic SDP terms are deliberately included or only included when called for), etc.

  • we colour in red the headings of the information entities that the adaptive system has deemed relevant to a particular task

  • we also place a red dot in front of the chosen information entities in the guide frame – the guide frame will thereby form a pattern and by switching back and forth between tasks, users can come to recognise those patterns

  • finally, at the top of the page the system explicitly states: This explanation has been generated assuming that the user is …

Thus we fulfil the second criterion: it should be obvious that the variants of the explanations are different.
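As a purely illustrative sketch of the adaptation mechanism described above, the following code assembles an answer page for an assumed task by opening the relevant information entities and marking their headings. The entity names, task names and relevance table are invented for illustration; they are not taken from the actual POP implementation.

```python
# Hypothetical sketch of how an adapted answer page might be assembled.
# All entity names, tasks and the relevance table below are invented;
# POP's real content and relevance rules differ.

RELEVANT = {  # which information entities each assumed task opens
    "learn about SDP": {"overview", "definition", "example"},
    "produce an object": {"procedure", "input", "result"},
}

def render_answer(task, entities):
    """Return the lines of an answer page: relevant entities are opened
    (stretchtext expanded) and their headings marked, others stay closed."""
    chosen = RELEVANT.get(task, set())
    page = [f"This explanation has been generated assuming that the user "
            f"is trying to {task}."]
    for name in entities:
        if name in chosen:
            page.append(f"* [RED] {name}: <expanded text>")  # red dot + red heading
        else:
            page.append(f"  {name}: (closed)")               # collapsed stretchtext
    return page

for line in render_answer("produce an object",
                          ["overview", "procedure", "input", "result", "example"]):
    print(line)
```

Switching the task argument re-renders the page with a different pattern of opened entities and red markings, which is exactly the cue pattern users are meant to come to recognise.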

In the black box we hide the plan recognition mechanism. Users will not be able to see exactly how the system infers their task from their actions, or how various actions are weighted together using probabilities and their history of actions. The glass box allows users to see that a certain task has been inferred, and it allows them to change that task and see which alternatives the system has deemed less probable. Also, by switching to another task, users can develop an understanding of the relation between the assumed task and the corresponding adaptation of the answer. Finally, the tasks have been given domain-dependent names that are more or less fuzzy, but such that users can roughly understand their intended meaning.
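To make the black box/glass box split concrete, here is a minimal sketch of one way actions could be weighted together using probabilities: naive Bayesian updating over a small set of tasks. The task names, actions and likelihood numbers are invented assumptions; POP's actual weighting scheme is precisely what stays hidden in its black box.

```python
# Hypothetical black-box sketch: inferring the most probable task from a
# history of user actions by naive Bayesian updating. The likelihood table
# P(action | task) is invented for illustration only.

LIKELIHOOD = {
    "learn":   {"open definition": 0.6, "follow link": 0.3, "open procedure": 0.1},
    "produce": {"open definition": 0.1, "follow link": 0.2, "open procedure": 0.7},
}

def infer_task(actions, prior=None):
    """Multiply each action's likelihood into the prior and normalise."""
    belief = dict(prior or {t: 1 / len(LIKELIHOOD) for t in LIKELIHOOD})
    for action in actions:
        for task in belief:
            # unseen actions get a small default weight
            belief[task] *= LIKELIHOOD[task].get(action, 0.05)
    total = sum(belief.values())
    return {task: p / total for task, p in belief.items()}

belief = infer_task(["open procedure", "follow link", "open procedure"])
# The glass box would show only the inferred task and its alternatives,
# never these internal probabilities:
print(max(belief, key=belief.get))
```

The glass-box view corresponds to the last two lines: the user sees which task won and may override it, while the likelihood table and the updating loop remain invisible.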

The adaptivity in POP can be characterised as being in-between a user-controlled and a self-adaptive system. According to Oppermann (1994) this middle route is to be preferred since users must have control over the adaptivity, but they will not spend much time adapting the adaptivity when their main task is really something else. We allow users to set which task they are working with at any point during the interaction with the system, and then we use plan inference to update their assumed current task continuously (Wærn, 1994a, Wærn, 1996).

As pointed out by Johanna Moore and Cécile Paris, it is of crucial importance that users are allowed to be dissatisfied with an explanation (Moore, 1989, Moore and Paris, 1992). Their conclusion is that the system must allow for a dialogue between user and system, where, in the end, the dialogue as a whole provides users with explanations that fulfil their goals.

Our system also allows for a kind of dialogue, but it does not imitate the kind of dialogue that takes place between experts and novices. Instead, users can pose follow-up questions associated with the hotlists in the texts or in the graphics, open or close the information entities that are available as an answer to a question, etc. In this way, users can in fact turn an answer originally directed at somebody learning about SDP into a description fitted for a user trying to produce an object or follow a process, or they can turn an explanation mainly fitted for an expert in the domain into an explanation more appropriate for a novice.
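The operations that make up this non-imitative 'dialogue' could be sketched as follows: the user reshapes the answer by toggling stretchtext entities and by posing follow-up questions via hotlist links. The class, entity names and concept names are hypothetical illustrations, not POP's actual interface vocabulary.

```python
# Illustrative sketch of the non-imitative 'dialogue': instead of natural
# language, the user manipulates the answer page directly. All names below
# are invented for illustration.

class AnswerPage:
    def __init__(self, open_entities):
        self.open = set(open_entities)

    def toggle(self, entity):
        """Open a closed stretchtext entity, or close an open one."""
        self.open.symmetric_difference_update({entity})

    def follow_up(self, concept):
        """A hotlist click opens the entity answering that follow-up question."""
        self.open.add(f"about:{concept}")

# A page adapted for a learner can be turned into one for a producer:
page = AnswerPage({"overview", "example"})
page.toggle("example")        # close the tutorial material
page.toggle("procedure")      # open the how-to material
page.follow_up("SDP object")  # pose a follow-up question via a hotlist
print(sorted(page.open))
```

Each user move is one of a small, visible set of operations, which is also why this style of dialogue communicates the limits of the system's functionality so readily.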

We view this ‘dialogue’ as an example of how a computer can offer other, different means of communication and still reach the same goal as if we had chosen to imitate a more human-human kind of dialogue. The great advantage is, of course, that it is not as difficult to implement, and it also helps communicate the limits of the system's functionality to the user. In fact, it can even work as a form of tutoring, in that users can always see what can be asked and, more importantly, what the authors/designers of SDP consider important (or possible) questions that users should/may ask. The concepts marked as hotlists signal to users that they should make sure they understand them in order to understand the whole description. Indeed, Zhao (1994) showed that visible link types improved learning in a hypertext system.



