Animated Pedagogical Agents: A Survey
Ruth O. Agada
Bowie State University
Computer Science Department
Dr. Jie Yan – Faculty Advisor
Bowie State University
Computer Science Department
ABSTRACT
Animated pedagogical agents offer great promise for broadening the bandwidth of tutorial communication and increasing learning environments' ability to engage and motivate students. It is becoming apparent that this new generation of learning technologies will have a significant impact on education and training. In this paper we discuss several animated pedagogical agents, the technology behind them, and the technical issues they face.
-
INTRODUCTION
In this paper we are concerned with the research and development of computer-animated pedagogical agents. These agents have 3D appearances and perform in 3D (web) environments. They can appear as a talking head with facial expressions, text-to-speech synthesis and lip synchronization, or as more fully embodied agents with postures, gestures and the ability to walk or fly around, demonstrate products or guide visitors to certain locations [35]. This paper is a survey of research on computer-animated pedagogical agents used from an educational standpoint, through either a 3D visualized environment or through instructional material.
There is plenty of research being conducted on the development of an effective animated pedagogical agent. Animated pedagogical agents are designed to be lifelike autonomous characters that support human learning by creating rich, face-to-face learning interactions [20]. These agents are capable of taking full advantage of the verbal and non-verbal communication normally reserved for human interactions [1]. They have been endowed with human-like qualities to make them more engaging, to make the learning experience more beneficial, and to avoid distracting behaviors (unnatural movements) [20, 21]. They take full advantage of face-to-face interaction to extend and improve intelligent tutoring systems [20, 39].
Studies have shown that effective individual tutoring is the most powerful mode of teaching. However, individual human tutoring for each and every student is logistically and financially impossible, hence the creation and development of intelligent tutoring systems [36] to reach a broader audience. According to Suraweera and Graesser [39, 14], several intelligent tutoring systems have been successfully tested and have shown that this technology does in fact improve learning.
-
EXAMPLES OF ANIMATED AGENTS
Animated agents have many uses [7]. They are employed on commercial web pages, in educational, training, and simulation environments, and in entertainment applications. In 3D virtual reality environments they move around and use 3D gestures and pointing actions in order to guide and explain [35]. According to Johnson's research, animated pedagogical agents adapt their behavior to the learning opportunities that emerge during interaction with the software [19]. They individualize the learning process and promote student motivation. They give the user an impression of realism that is similar to human interaction, engaging in continuous dialogue that mirrors aspects of human dialogue. A learner who enjoys interacting with a pedagogical agent may have a more positive perception of the overall learning experience and may want to do more [36].
This paper will make frequent reference to several implemented animated pedagogical agents. These agents will be used to illustrate the range of behaviors that such agents are capable of producing and the design requirements that they must satisfy. Some of these behaviors are similar to those found in intelligent tutoring systems, while others are quite different and unique.
Virtual sign animated pedagogical agents are computer-generated 3D sign language characters with pedagogical roles: onscreen characters that help guide the learner through the learning experience. Because many new technologies are interactive [17], it is now easier to create environments in which deaf or hearing-impaired students can learn by doing, receive feedback, and continually refine their understanding and build new knowledge. The new technologies can also bring exciting curricula based on real-world problems into the classroom and provide scaffolds and tools to enhance learning.
“Andy” is a 3D animated, internet-enabled virtual tutor that helps deaf and hard-of-hearing children develop language and reading skills through sign language [34]. Developed by Sims and Carol Wideman, the SigningAvatar™ characters interpret words, sentences and complicated concepts into sign language, combining signing, gestures and body language to simulate natural communication [34]. The animations are based on in-depth research into how both hearing and deaf persons use the face and body to communicate. Used in several Florida school districts and at schools serving deaf students around the country, the software has been praised by teachers of the deaf and experts in computer technology for putting virtual 3D technology, widely used in video games, to use for educational purposes [34].
Then there is PAULA (Practical ASL Using Linguistic Animation). PAULA is also a sign language tutor, designed for institutional use, that, like Andy, instructs its users in American Sign Language. The PAULA sign language tutor presents sign demonstrations via a series of 3D graphical animations, and the sign transcription process records both geometric and linguistic information for each sign [11]. The resulting database/animation synchronization eliminates the need for the tedious and error-prone post-process of manually annotating sign presentations. The system allows the user to look up each sign in a glossary format and also provides a multiple-choice quiz [11]. The design of PAULA is centered on the needs of sign language learners, and the iterative nature of the development cycle has provided many opportunities for those learners to make their needs known [11].
Researchers face a common problem: how to create an effective user interface that provides the user with a believable experience. The idea is to create a system that uses intelligent, fully animated agents to engage its users in natural face-to-face conversational interaction. To use agents most powerfully, designers can incorporate suggestions from agent research concerning speech quality, the personality or ethnicity of the agent, or the frequency and verbosity of reward. Designers can also incorporate what research says about effective human teachers or therapists into the behavior of their agent [10]. In the development of Marni, both kinds of research were incorporated to make the agent more powerful. The virtual tutor Marni gives hints and encouragement to students based on specific errors or error patterns and built-in knowledge about handling these errors [10].
What makes Marni unique is the technology that animates her. Developed at CSLR, she can produce convincing facial emotions and accurate movements of the lips, tongue and jaw during speech production. She was developed using CU Animate [28], a toolkit designed for the research, development, control and real-time rendering of 3D animated characters. To accurately depict visual speech [29, 30], the team at CSLR used motion capture data collected from markers attached to a person's lips and face while the person said words containing all permissible sequences of adjacent phonemes in English. The motion capture data for these phoneme sequences are stored in a database and are concatenated to create a representation of the movements of the lips for any English word or sentence. By mapping the motion capture points from concatenated sequences to the vertices of the polygons on the lips and face of the 3D model, and by applying sophisticated algorithms to ensure accurate movements of all associated polygons, the movements of the 3D model mimic the movements of a person producing the same speech [28].
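To make the concatenation step concrete, the following Python sketch is a toy illustration only: the diviseme database, its frame format (a single lip-marker displacement per frame), and the seam-blending rule are all invented stand-ins for CU Animate's actual motion-capture pipeline [28].

    # Each diviseme (a pair of adjacent phonemes) maps to a short sequence of
    # frames; a frame here is one (x, y) lip-marker displacement.
    DIVISEME_DB = {
        ("h", "eh"): [(0.0, 0.0), (0.1, 0.3), (0.2, 0.5)],
        ("eh", "l"): [(0.2, 0.5), (0.1, 0.4), (0.0, 0.2)],
        ("l", "ow"): [(0.0, 0.2), (0.3, 0.6), (0.4, 0.8)],
    }

    def synthesize_lip_track(phonemes, blend=True):
        """Concatenate diviseme fragments for a phoneme string, averaging
        the boundary frames so adjacent fragments join smoothly."""
        track = []
        for a, b in zip(phonemes, phonemes[1:]):
            fragment = list(DIVISEME_DB[(a, b)])
            if track and blend:
                seam = tuple((p + q) / 2 for p, q in zip(track[-1], fragment[0]))
                track[-1] = seam
                fragment = fragment[1:]
            track.extend(fragment)
        return track

    # "hello" reduced to a toy phoneme sequence:
    print(synthesize_lip_track(["h", "eh", "l", "ow"]))

Each frame here stands in for a full set of facial marker displacements; the real database covers every permissible adjacent-phoneme pair in English rather than three hand-picked entries.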
Marni can be made to produce arbitrary movements of the eyes, eyebrows and head using CU Animate Markup Language, CU-AML; an easy-to-use yet flexible and powerful tool for controlling Marni’s face movements by marking up text. CU-AML enables designers to control facial expressions and emotions while Marni narrates a text or provides instructions, hints, encouragement or feedback to students in learning tasks [10].
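The exact syntax of CU-AML is not reproduced in the sources above, so the following sketch uses an invented tag notation purely to illustrate the idea of driving face movements from marked-up text: non-verbal events are stripped from the narration and scheduled against word positions.

    import re

    # Hypothetical markup in the spirit of CU-AML [10]; the tag names and
    # syntax here are invented for illustration.
    MARKED_UP = "Good job! <smile/> Now try the next <brow_raise/> problem."

    def extract_events(text):
        """Split marked-up text into the plain narration string and a list
        of (word_index, event_name) pairs for the animation engine."""
        events, words = [], []
        for token in text.split():
            m = re.fullmatch(r"<(\w+)/>", token)
            if m:
                events.append((len(words), m.group(1)))  # fire before next word
            else:
                words.append(token)
        return " ".join(words), events

    narration, events = extract_events(MARKED_UP)
    print(narration)  # text handed to the speech synthesizer
    print(events)     # [(2, 'smile'), (6, 'brow_raise')]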
Educational software should focus its teaching on students' needs, taking care of different student categories or modeling teaching processes. It should be adapted to student models to address the specific difficulties of each student category [36]. The animated pedagogical agent was inserted into the IVTE to promote adaptive teaching through teaching strategies based on a student model base. The main aim of inserting an animated pedagogical agent into the IVTE is to reach a high pedagogical level, as it acts as a tutor in the teaching-learning process [36].
The Intelligent Virtual Teaching Environment (IVTE) project is justified by the new teaching and learning technologies it contributes to intelligent tutoring systems, improving the efficiency of the teaching processes carried out by the animated pedagogical agent and enhancing the cognitive process [36].
According to Oliveira, the animated pedagogical agent of the IVTE software is a cognitive agent, given its autonomy, its memory of past actions, its knowledge of the environment and of the other agents in its society, its planning for the future, and its pro-activeness [37]. Cognitive agents are knowledge-based; that is, they show intelligent behavior in many situations and have both implicit and explicit knowledge representations [36].
In the IVTE software, the animated pedagogical agent is represented by a “worm” called Guilly, whose name was chosen during the field research; Guilly selects the appropriate teaching strategies according to the specific student model [36].
The environment operates as non-immersive virtual reality in which the student has the feeling of being in a real environment. The student is given only a partial view of the environment and can interact only with elements in their immediate vicinity [36].
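A minimal sketch of that proximity rule, with an invented interaction radius and scenery layout, might look like the following.

    import math

    # Illustrative only: the IVTE papers do not specify a radius or layout.
    INTERACTION_RADIUS = 2.0

    def reachable(student_pos, elements):
        """Return the scenery elements the student is currently allowed
        to interact with, i.e. those within the interaction radius."""
        return [name for name, pos in elements.items()
                if math.dist(student_pos, pos) <= INTERACTION_RADIUS]

    elements = {"door": (0.0, 5.0), "book": (1.0, 1.0), "lamp": (8.0, 3.0)}
    print(reachable((0.0, 0.0), elements))  # ['book'] -- only the nearby element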
-
ENHANCING LEARNING ENVIRONMENTS WITH ANIMATED AGENTS
This section lists several key benefits provided by animated pedagogical agents by describing the types of human-computer interaction they support. Most current agents support some, but not all, of the types of interaction described. Each type can significantly enhance a learning environment without the others, and different combinations will be useful for different kinds of learning environments. To provide a summary of achievements to date, we use existing agents to illustrate each type of interaction.
As mentioned in the previous section, there is a multitude of pedagogical agents in existence, all with different intentions and audiences. These agents provide an interactive learning environment for their students. Creating a simulated, mocked-up virtual world that the pedagogical agent inhabits gives the user the unique opportunity to learn how to perform tasks in the real world that relate to the virtual world. An example of such a system is the “Guilly” agent, where the student's actions in the environment are shaped by the existing elements in the scenery [36].
Demonstrating a task may be far more effective than trying to describe how to perform it, especially when the task involves spatial motor skills, and the experience of seeing a task performed is likely to lead to better retention [20]. Moreover, an interactive demonstration given by an agent offers a number of advantages over showing students a videotape. Students have the opportunity to move around in the environment and view the demonstration from different perspectives. They can interrupt with questions, or even ask to finish the task themselves [20], in which case Guilly monitors the student's performance and provides assistance [36].
Because of significant advances in the capabilities of graphics technologies in the past decade, tutoring systems increasingly incorporate visual aids. These range from simple, automatically generated maps or charts to 3D simulations of physical phenomena and full-scale 3D simulated worlds [20]. To draw students' attention to a specific aspect of a chart, graphic or animation, tutoring systems use many devices, such as arrows and highlighting by color. An animated agent, however, can guide a student's attention with the most common and natural non-verbal cues: gaze and gestures. In human-computer interaction, people often interpret their interaction with the computer as interaction with humans. Social agency theory suggests that social cues like the face and voice of the agent motivate this interpretation. In two off-line experiments in which comprehension scores and liking ratings were collected, Louwerse et al. found that participants preferred natural agents with natural voices [27].
According to Louwerse et al. [27], when building an intelligent tutoring system with an embedded pedagogical agent there are several relevant questions to take into account: do users enjoy interacting with a computational agent? Do users interact with the agent as they would with a human? Do animated conversational tutoring agents yield pedagogical benefits [27]? In answering these questions, some studies have suggested that animated conversational agents will never reach human-level intelligence or the human ability to interact with other humans by providing the correct and appropriate social cues, and that the empirical results stating otherwise are at best inconclusive [12, 27]. Nonetheless, there is more optimistic research showing that people interpret human-computer interaction as human-to-human interaction as long as sufficient social cues are provided. In general, it is recommended that a pedagogical agent have a human-like persona to better simulate social contexts and to promote learner-agent interaction [21]. According to Louwerse et al. [27], because of a human tendency to blur what is real with what is perceived to be real, people automatically use social rules to guide their actions in the virtual environment. Social agency theory argues that people interpret computers as social partners; consequently, theories from social psychology can be applied to human-computer interaction [27]. Steve is an example of such a system.
Steve uses gaze and deictic gestures in a variety of ways [26]. Steve is capable of pointing out objects so that he, the students, or other agents can discuss them, manipulate them, or observe their state. He looks at a student or another agent when waiting for them, listening to them, or speaking to them. Steve is even capable of tracking moving objects; for example, if the student is moving around, he will track them over one shoulder until they move directly behind him, at which point he will track them over the other shoulder [20]. Based on a student's actions in the learning environment, the tutor will provide either positive or negative feedback to the student. In addition to providing verbal feedback, an animated agent can also use nonverbal communication to influence the student [20, 3]. For example, Steve nods his head to approve of the student's actions and shakes his head to indicate disapproval. Moreover, body language can help indicate to students that they have just committed (or are on the verge of committing) a very serious error. This can make a strong impression on them.
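The shoulder-switching behavior can be pictured with a small geometric sketch. Nothing below is Steve's actual code; the neck limit and coordinate conventions are assumptions made for illustration.

    import math

    def head_yaw(agent_facing, agent_pos, target_pos, max_turn=120.0):
        """Signed head yaw (degrees) toward the target, clamped to the
        neck's range. The sign says which shoulder the agent looks over."""
        dx = target_pos[0] - agent_pos[0]
        dy = target_pos[1] - agent_pos[1]
        bearing = math.degrees(math.atan2(dy, dx))
        yaw = (bearing - agent_facing + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]
        return max(-max_turn, min(max_turn, yaw))

    # Student walks in a circle around an agent facing +x (0 degrees):
    for deg in (0, 90, 170, 190, 270):
        pos = (math.cos(math.radians(deg)), math.sin(math.radians(deg)))
        print(deg, round(head_yaw(0.0, (0.0, 0.0), pos), 1))
    # At 170 degrees the yaw clamps at +120 (left shoulder); just past 180
    # it flips to -120 (right shoulder), mirroring the switch described above.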
When people carry on face-to-face conversation, they employ a wide variety of nonverbal cues to help regulate the conversation and complement their verbal utterances. While tutorial dialogue in previous tutoring systems resembles communication over a medium like the phone or internet, animated pedagogical agents allow us to more closely model the face-to-face interactions to which people are most accustomed [10]. Some nonverbal signals are closely tied to spoken utterances, and could be used by any animated agent that produces speech output. For example, intonational pitch accents indicate the degree and type of salience of words and phrases in an utterance, including rhematic elements of utterances and contrastive elements [20]; to further highlight such utterance elements, a pitch accent is often accompanied by a short movement of the eyebrows or head, a blink of the eyes, and/or a beat gesture (i.e., a short baton-like movement of the hands) [20]. As another example, facial displays can convey the speaker's personal judgment of the accompanying utterance [20].
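As a toy illustration of tying beats to pitch accents, the sketch below uses an invented accent-marking convention; a real system would take accent locations from the speech synthesizer's prosody module rather than from hand-placed marks.

    ACCENT = "*"  # invented convention: a leading asterisk marks an accented word

    def beat_schedule(utterance):
        """Pair each pitch-accented word with a short eyebrow raise and
        beat gesture; unaccented words get no nonverbal cue."""
        schedule = []
        for word in utterance.split():
            if word.startswith(ACCENT):
                schedule.append((word[1:], ["brow_raise", "beat_gesture"]))
            else:
                schedule.append((word, []))
        return schedule

    for word, cues in beat_schedule("take the *red wire, not the *blue one"):
        print(f"{word:6s} {cues}")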
Other nonverbal cues help regulate the flow of conversation, and would be valuable in tutoring systems that support speech recognition as well as speech output, such as Steve [20]. This includes nonverbal feedback, such as head nods to acknowledge understanding of a spoken utterance. It also includes the use of eye contact to regulate turn taking in mixed-initiative dialogue. For example, during a pause, a speaker will either break eye contact to retain the floor or make eye contact to request feedback or give up the floor. Although people can clearly communicate in the absence of these nonverbal signals, communication and collaboration proceed most smoothly when they are available [21].
Motivation is a key ingredient in learning [39], and emotions play an important role in motivation. By employing a computational model of emotion, animated agents can improve students' learning experiences in several ways [10]. First, an agent that appears to care about a student's progress may encourage the student to care more about her own progress. Second, an emotive pedagogical agent may convey enthusiasm for the subject matter and thereby foster similar levels of enthusiasm in the learner. Finally, a pedagogical agent with a rich and interesting personality may simply make learning more fun [38, 13]. A learner that enjoys interacting with a pedagogical agent may have a more positive perception of the overall learning experience and may consequently opt to spend more time in the learning environment [20].
In addition to the types of interactions described above, animated pedagogical agents need to be capable of many of the same pedagogical abilities as other intelligent tutoring systems. For instance, it is useful for them to be able to answer questions, generate explanations, ask probing questions, and track the learners' skill levels. An animated pedagogical agent must be able to perform these functions while at the same time responding to the learners' actions. Thus the context of face-to-face interaction has a pervasive influence on the pedagogical functions incorporated in an animated pedagogical agent; pedagogy must be dynamic and adaptive, as opposed to deliberate, sequential, or preplanned [20].
The ability to deliver opportunistic instruction, based on the current situation, is a common trait of animated pedagogical agents. Herman the Bug [25], for example, makes extensive use of problem solving contexts as opportunities for instruction. When the student is working on selecting a leaf to include in a plant, Herman uses this as an opportunity to provide instruction about leaf morphology. Adele (Agent for Distance Learning: Light Edition) constantly assesses the current situation, using the situation space model of Marsella and Johnson [32], and dynamically generates advice appropriate to the current situation. Another type of opportunistic instruction provided by Adele is suggesting pointers to on-line medical resources that are relevant to the current stage of the case work-up. For example, when the student selects a diagnostic procedure to perform on the simulated patient, Adele may point the student to video clips showing how the procedure is performed [20].
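A heavily simplified, rule-based sketch of this kind of opportunistic instruction might look as follows; the triggers and advice strings are invented, and real systems such as Adele use a much richer situation-space model [32] rather than a flat rule list.

    # Each rule pairs a trigger over the problem state with a piece of advice,
    # in the spirit of Herman the Bug [25] and Adele [20].
    RULES = [
        (lambda s: s.get("selecting") == "leaf",
         "Before you choose, notice how leaf shape relates to the plant's habitat."),
        (lambda s: s.get("procedure") == "auscultation",
         "There is a video clip showing how this procedure is performed."),
    ]

    def opportunistic_advice(state):
        """Return the first piece of advice whose trigger matches the
        current problem-solving state, or None if no opportunity arises."""
        for trigger, advice in RULES:
            if trigger(state):
                return advice
        return None

    print(opportunistic_advice({"selecting": "leaf"}))
    print(opportunistic_advice({"procedure": "auscultation"}))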
-
TECHNICAL ISSUES
With all the good that comes with the development and integration of animated pedagogical agents into our daily lives, they also pose new technical challenges [20]. This section outlines a few of the challenges and some of the relevant work to date in addressing them.
Johnson, Rickel and Lester [20] stated that designing the behavior of an agent requires addressing two issues: designing the building blocks from which the agent's behavior will be generated, and developing the code that selects and combines the right building blocks to respond appropriately to the dynamically unfolding tutorial situation.
The behavior space approach is the most common method for generating the behavior of a pedagogical agent. A behavior space is a library of behavior fragments [20]. To generate the behavior of the agent, a behavior sequencing engine dynamically strings these fragments together at runtime [20]. When this is done well, the agent's behavior appears seamless to the student as it provides visually contextualized problem-solving advice [20]. To allow the behavior sequencing engine to select appropriate behavior fragments at runtime, each fragment must be associated with additional information describing its content [20].
One of the biggest challenges in designing a behavior space and a sequencing engine is ensuring visual coherence of the agent's behavior at runtime [20]. When done poorly, the agent's behavior will appear discontinuous at the seams of the behavior fragments [20]. For some pedagogical purposes, this may not be serious, but it will certainly detract from the believability of the agent, and it may be distracting to the student [20]. Thus, to assist the sequencing engine in assembling behaviors that exhibit visual coherence, it is critical that the specifications for the animated segments take into account continuity [20]. One simple technique employed by some behavior sequencing engines is the use of visual bookending. Visually bookended animations begin and end with frames that are identical [20, 21]. Just as walk cycles and looped backgrounds can be seamlessly composed, visually bookended animated behaviors can be joined in any order and the global behavior will always be flawlessly continuous. Although it is impractical for all visual segments to begin and end with the same frame, judicious use of this technique can greatly simplify the sequencing engine's job [20].
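The following sketch illustrates both ideas, behavior fragments annotated with content metadata and the bookending constraint, using invented fragment names and pose labels in place of real animation frames.

    # A toy behavior space: each fragment carries content metadata ("topic")
    # and a frame list. Bookended fragments start and end on the same
    # neutral pose, so they can be joined in any order without seams [20, 21].
    BEHAVIOR_SPACE = {
        "point_left": {"topic": "deixis",   "frames": ["neutral", "arm_out", "neutral"]},
        "nod":        {"topic": "approval", "frames": ["neutral", "head_down", "neutral"]},
        "explain":    {"topic": "advice",   "frames": ["neutral", "gesture", "neutral"]},
    }

    def is_bookended(fragment):
        frames = fragment["frames"]
        return frames[0] == frames[-1]

    def sequence(topics):
        """Pick one fragment per requested topic and splice them, dropping
        the duplicated bookend frame at each seam."""
        track = []
        for topic in topics:
            fragment = next(f for f in BEHAVIOR_SPACE.values() if f["topic"] == topic)
            assert is_bookended(fragment), "non-bookended fragments need motion blending"
            frames = fragment["frames"]
            track.extend(frames if not track else frames[1:])
        return track

    print(sequence(["deixis", "advice", "approval"]))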
To achieve more flexibility, the alternative approach is to generate behavior entirely as it is needed, without reusing any canned animation segments or even individual frames [20]. Agents animated this way each include a 3D graphical model of the agent, segmented into its movable parts, together with algorithms that take a specification of a desired posture and generate the appropriate body motions to transition from the agent's current posture to the desired one [20].
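As a minimal sketch of this generative idea, the fragment below interpolates joint angles linearly between the current and desired postures; real systems would use inverse kinematics and easing curves rather than plain linear interpolation.

    def plan_transition(current, desired, steps=4):
        """Yield intermediate postures (dicts of joint -> angle in degrees)
        moving from the current posture to the desired one."""
        for i in range(1, steps + 1):
            t = i / steps
            yield {joint: current[joint] + t * (desired[joint] - current[joint])
                   for joint in current}

    current = {"shoulder": 0.0, "elbow": 10.0}
    desired = {"shoulder": 90.0, "elbow": 45.0}
    for pose in plan_transition(current, desired):
        print(pose)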
The flexibility of this generative approach to animation and speech comes at a price: it is difficult to achieve the same level of quality that is possible within a handcrafted animation or speech fragment [20]. For now, the designer of a new application must weigh the tradeoff between flexibility and quality [20]. Further research on computer animation and speech synthesis is likely to decrease the difference in quality between the two approaches, making the generative approach increasingly attractive [20].
According to Johnson and Rickel [20], given the immediate and deep affinity that people develop for interactive lifelike characters, the direct pedagogical benefits that pedagogical agents provide are perhaps exceeded by their motivational benefits [5]. Infusing animated agents with simulated life significantly increases the time users spend with educational software, and the many advances in computer capabilities, together with their low cost today, make the widespread distribution of real-time animation technology a reality. Johnson and Rickel [20] said that the believability of an animated agent is a combination of two aspects: the visual qualities of the agent, and the computational properties of the behavior control system that creates its behaviors in response to evolving interactions with the user. In particular, with techniques for increasing believability, animated pedagogical agents should be perceived as lifelike, but not so much as to be labeled distracting. They should have controlled visual impact and be equipped with complex behavior patterns without telegraphing them. To achieve believability, agents typically exhibit a variety of believability-enhancing behaviors in addition to advisory and "attending" behaviors. For example, the PPP Persona exhibits "idle-time" behaviors such as breathing and foot-tapping to achieve believability. To address the concern of controlled visual impact in sensitive pedagogical situations, in which the student must focus his attention on problem solving, one version of the Herman agent uses a competition-based believability-enhancing technique. The net result of the ongoing competition is that the agent behaves in a manner that significantly increases its believability without sacrificing pedagogical effectiveness [20].
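The competition idea can be sketched as a simple bidding scheme; the behaviors and weights below are invented, and the damping rule is only a guess at the spirit of the technique described for Herman [20].

    def select_behavior(pending_pedagogy, student_is_focused):
        """Idle believability behaviors and pedagogical behaviors bid for
        the agent's body; idle bids are damped while the student focuses."""
        idle = [("foot_tap", 0.3), ("breathe", 0.2)]
        if student_is_focused:
            # keep visual impact low during sensitive problem-solving moments
            idle = [(name, w * 0.1) for name, w in idle]
        pool = idle + pending_pedagogy
        return max(pool, key=lambda c: c[1])[0]

    print(select_behavior([("give_hint", 0.9)], student_is_focused=True))  # give_hint
    print(select_behavior([], student_is_focused=False))                   # foot_tap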
Web-based environments provide new potential to enhance learning through visual and interactive delivery of instruction [23]. They tend to reduce the limitations and complications encountered in trying to distribute a system to a very large audience [39]. A Web-based ITS can be deployed according to a number of architectures; common solutions include Java-only, HTML-CGI and distributed client-server [39]. A Java-only solution creates the tutor as an applet that students download from a specific URL. In the HTML-CGI architecture, users interact with HTML entry forms in a Web browser, and a server holds the total functionality [39]. A client-server model, on the other hand, distributes functionality between a client and a server: a downloadable applet delivers the user interaction module and communicates directly with a server application [39]. A number of Web-based ITSs have been developed in the last few years. These ITSs cover a variety of pedagogical areas, e.g. Mathematics, Computer Science and Medicine, and are intended for a wide range of ages [39].
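As an illustration of the client-server split, the sketch below implements a toy server-side tutor with Python's standard library; the endpoint, port, and feedback logic are invented, and the systems cited above were built on Java-era technology rather than this stack [39].

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TutorHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # The thin client posts the student's answer; the tutoring
            # logic stays on the server, the client only renders the agent.
            body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            correct = body.get("answer") == "SELECT"
            reply = {"feedback": "Well done!" if correct
                     else "Hint: which SQL clause chooses the columns?"}
            data = json.dumps(reply).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), TutorHandler).serve_forever()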
-
CONCLUSION
We surveyed the research issues in designing virtual agents that are, in our view, the most important to tackle. Clearly, graphics and animation are important; however, it is also important to have behavior, and the animations derived from it, generated from what has to be presented and from the available multimedia resources.
Animated pedagogical agents offer enormous promise for interactive learning environments. It is becoming apparent that this new generation of learning technologies will have a significant impact on education and training. With the technological advancements being made and applied to human-human tutoring, animated pedagogical agents are slowly but surely becoming something akin to what ITS founders envisioned at the inception of the field. Now, rather than being restricted to textual dialogue on a terminal, animated pedagogical agents can perform a variety of tasks in surprisingly lifelike ways.
Despite the great strides made in honing the communication skills of animated pedagogical agents, much remains to be done. While the ITS community benefits from the confluence of multidisciplinary research in cognition, learning, pedagogy, and AI, animated pedagogical agents will further require the collaboration of communication theorists, linguists, graphics specialists, and animators. These efforts could well establish a new paradigm in computer-assisted learning, glimpses of which we can already catch on the horizon.
REFERENCES
-
R. Atkinson, “Optimizing Learning From Examples Using Animated Pedagogical Agents,” Journal of Educational Psychology, vol. 94, no. 2, p.416, 2002. [online] Academic Search Premier Database [Accessed: August 11, 2009].
-
A. L. Baylor, R. Cole, A. Graesser and L. Johnson, Pedagogical agent research and development: Next steps and future possibilities, in Proceedings of AI-ED (Artificial Intelligence in Education), Amsterdam July, 2005.
-
A. L. Baylor and S. Kim, “Designing nonverbal communication for pedagogical agents: When less is more,” Computers in Human Behavior, vol.25 no.2, pp.450-457, 2009.
-
A. L. Baylor and J. Ryu, “Does the presence of image and animation enhance pedagogical agent persona?” Journal of Educational Computing Research, vol. 28, no. 4, pp.373-395, 2003.
-
A. L. Baylor and R. B. Rosenberg-Kima, Interface agents to alleviate online frustration, International Conference of the Learning Sciences, Bloomington, Indiana, 2006.
-
A. L. Baylor, R. B. Rosenberg-Kima and E. A. Plant, Interface Agents as Social Models: The Impact of Appearance on Females’ Attitude toward Engineering, Conference on Human Factors in Computing Systems (CHI) 2006, Montreal, Canada, 2006.
-
J. Cassell, Y. Nakano, T. Bickmore, C. Sidner & C. Rich, Annotating and generating posture from discourse structure in embodied conversational agents, in Workshop on representing, annotating, and evaluating non-verbal and verbal communicative acts to achieve contextual embodied agents, Autonomous Agents 2001 Conference, Montreal, Quebec, 2001.
-
R. E. Clark and S. Choi, “Five Design Principles for Experiments on the Effects of Animated Pedagogical Agents,” J. Educational Computing Research, vol. 32, no. 3, pp.209-225, 2005.
-
R. Cole, J. Y. Ma, B. Pellom, W. Ward, and B. Wise, “Accurate Automatic Visible Speech Synthesis of Arbitrary 3D Models Based on Concatenation of Diviseme Motion Capture Data,” Computer Animation & Virtual Worlds, vol. 15, no.5, pp.485-500, 2004.
-
R. Cole, S. van Vuuren, B. Pellom, K. Hacioglu, J. Ma, J. Movellan, S. Schwartz, D. Wade- Stein, W. Ward and J. Yan, “Perceptive Animated Interfaces: First Steps Toward a New Paradigm for Human Computer Interaction,” Proceedings of the IEEE: Special Issue on Human Computer Interaction, vol. 91, no. 9, pp.1391-1405, 2003.
-
M. J. Davidson, "PAULA: A Computer-Based Sign Language Tutor for Hearing Adults," 2006. [online] Available: www.facweb.cs.depaul.edu/elulis/Davidson.pdf [Accessed: June 15, 2008]
-
D. M. Dehn and S. Van Mulken, “The impact of animated interface agents: a review of empirical research,” International Journal of Human-Computer Studies, vol. 52, pp.1–22, 2000.
-
A. Graesser, K. Wiemer-Hastings, P. Wiemer-Hastings and R. Kreuz, “AutoTutor: A simulation of a human tutor,” J. Cognitive Syst. Res., vol. 1, pp. 35–51, 1999.
-
A. C. Graesser and X. Hu, “Teaching with the Help of Talking Heads,” Proceedings of the IEEE International Conference on Advanced Learning Techniques, pp. 460-461, 2001.
-
A. C. Graesser, K. VanLehn, C. P. Rosé, P. W. Jordan and D. Harter, "Intelligent tutoring systems with conversational dialogue," AI Magazine, vol. 22, no. 4, pp. 39-51, 2001.
-
A. Graesser, M. Jeon and D. Dufty, “Agent Technologies Designed to Facilitate Interactive Knowledge Construction,” Discourse Processes, vol. 45, pp.298-322, 2008.
-
P. M. Greenfield and R. R. Cocking, Interacting with Video: Advances in Applied Developmental Psychology, vol. 11, Norwood, NJ: Ablex Publishing Corp., 1996, p. 218.
-
X. Hu and A. C. Graesser, “Human use regulatory affairs advisor (HURAA): Learning about research ethics with intelligent learning modules,” Behavior Research Methods, Instruments, & Computers, vol. 36, no. 2, pp. 241-249, 2004.
-
W. L. Johnson, "Pedagogical Agents," ICCE98 - Proceedings of the Sixth International Conference on Computers in Education, China, 1998. [online] Available: http://www.isi.edu/isd/carte/ped_agents/pedagogical_agents.html [Accessed: June 15, 2008]
-
W. L. Johnson and J. T Rickel. “Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments,” International Journal of Artificial Intelligence in Education, vol. 11, pp. 47-78, 2000.
-
Y. Kim and A. Baylor, “Pedagogical Agents as Learning Companions: The Role of Agent Competency and Type of Interaction,” Educational Technology Research & Development, vol. 54, no. 3, pp.223-243, 2006.
-
A. Laureano-Cruces, J. Ramírez-Rodríguez, F. De Arriaga, and R. Escarela-Pérez, “Agents control in intelligent learning systems: The case of reactive characteristics,” Interactive Learning Environments, vol. 14, no. 2, pp.95-118, 2006.
-
M. Lee & A. L. Baylor, “Designing Metacognitive Maps for Web-Based Learning,” Educational Technology & Society, vol. 9, no.1, pp.344-348, 2006.
-
J. C. Lester, S. A. Converse, S. E. Kahler, S. T. Barlow, B. A. Stone, and R. S. Bhogal, “The persona effect: Affective impact of animated pedagogical agents,” in Proceedings of CHI '97, pp.359-366, 1997.
-
J. C. Lester, B. A. Stone and G. D. Stelling, "Lifelike Pedagogical Agents for Mixed-Initiative Problem Solving in Constructivist Learning Environments," User Modeling and User-Adapted Interaction, vol. 9, pp. 1-44, 1999.
-
J. C. Lester, J. L. Voerman, S. G. Towns and C. B. Callaway, “Deictic Believability: Coordinated Gesture, Locomotion, and Speech in Lifelike Pedagogical agents,” Applied Artificial Intelligence, vol. 13, no. 4, pp. 383-414, 1999.
-
M. Louwerse, A. Graesser, L. Shulan and H. H. Mitchell, “Social Cues in Animated Conversational Agents,” Applied Cognitive Psychology, vol. 19, pp. 693-704, 2005.
-
J. Ma, J. Yan and R. Cole, CU Animate: Tools for Enabling Conversations with Animated Characters, in International Conference on Spoken Language Processing (ICSLP), Denver, 2002.
-
J. Ma, R. Cole, B. Pellom, W. Ward and B. Wise, “Accurate Automatic Visible Speech Synthesis of Arbitrary 3D Models Based on Concatenation of Di-Viseme Motion Capture Data,” Journal of Computer Animation and Virtual Worlds, vol. 15, no. 5, pp. 485-500, 2004.
-
J. Ma and R. Cole, "Animating Visible Speech and Facial Expressions," Visual Computer, vol. 20, no. 2-3, pp. 86-105, 2004.
-
V. Mallikarjunan, (2003) “Animated Pedagogical Agents for Open Learning Environments,”[online] Available: filebox.vt.edu/users/vijaya/ITMA/portfolio/docs/report.doc [Accessed December 9, 2009]
-
S. C. Marsella and W. L. Johnson, An instructor's assistant for team-training in dynamic multi-agent virtual worlds in Proceedings of the Fourth International Conference on Intelligent Tutoring Systems (ITS '98), no. 1452 in Lecture Notes in Computer Science, pp. 464-473, 1998.
-
D. W. Massaro, Symbiotic value of an embodied agent in language learning, in Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04), Track 5, vol. 5, 2004.
-
"Animated 3-D Boosts Deaf Education; 'Andy' The Avatar Interprets By Signing," ScienceDaily, March 2001. [online] Available: http://www.sciencedaily.com/releases/2001/03/010307071110.htm [Accessed: April 11, 2008]
-
A. Nijholt, "Towards the Automatic Generation of Virtual Presenter Agents," Informing Science Journal, vol. 9, pp. 97-110, 2006.
-
M. A. S. N. Nunes, L. L. Dihl, L. C. de Olivera, C. R. Woszezenki, L. Fraga, C. R. D. Nogueira, D. J. Francisco, G. J. C. Machado and M. G. C. Notargiacomo, “Animated Pedagogical Agent in the Intelligent Virtual Teaching Environment,” Interactive Educational Multimedia, vol. 4, pp.53-61, 2002.
-
L. C. de Olivera, M. A. S. N. Nunes, L. L. Dihl, C. R. Woszezenki, L. Fraga, C. R. D. Nogueira, D. J. Francisco, G. J. C. Machado and M. G. C. Notargiacomo, “Animated Pedagogical Agent in Teaching Environment,” [online] Available: http://www.die.informatik.uni-siegen.de/dortmund2002/web/web/nunes.pdf [Accessed: June 30, 2008]
-
N. K. Person, A. C. Graesser, R. J. Kreuz, V. Pomeroy, and the Tutoring Research Group, "Simulating human tutor dialog moves in AutoTutor," International Journal of Artificial Intelligence in Education, in press, 2001.
-
P. Suraweera and A. Mitrovic, "An Animated Pedagogical Agent for SQL-Tutor," 1999. [online] Available: http://www.cosc.canterbury.ac.nz/research/reports/HonsReps/1999/hons_9908.pdf [Accessed: August 11, 2009]