Human-Robot Interaction in a Robot Theatre that Learns
Clemen Deng, Mathias Sunardi, Josh Sackos, Casey Montgomery, Thuan Pham, Randon Stasney, Ram Bhattarai, Mohamed Abidalrekab, Samuel Salin, Jelon Anderson, Justin Morgan, Surendra Maddula, Saly Hakkoum, Dheerajchand Vummidi, Nauvin Ghorashian, Aditya Channamallu, Alvin Lin, Melih Erdogan, Tsutomu Sasao, Martin Lukac, and Marek Perkowski

Department of Electrical and Computer Engineering,

Portland State University, Portland, Oregon, 97207-0751,

clemen.deng@gmail.com, msunardi@ee.pdx.edu, mperkows@ee.pdx.edu




Abstract

The paper presents a new approach to creating robot theatres. We developed a theatre of interactive humanoid bipeds, based on a play about robots by the Polish writer Maciej Wojtyszko. The theatre includes three small Jimmy robots equipped with vision, speech recognition, speech synthesis and natural language dialog based on machine learning. The need for this kind of project stems from several research questions, especially in emotional computing and gesture generation, but the project also has educational, artistic, and entertainment value. Programming robot behaviors for a robot theatre is time consuming and difficult; therefore we propose to use Machine Learning. Machine learning methods based on multiple-valued logic are used for knowledge representation and for learning from examples. Supervised learning, however, requires a database with attribute and decision values, and such databases do not exist for robot emotional and artistic behaviors. A novel Weighted Hierarchical Adaptive Voting Ensemble (WHAVE) machine learning method was developed for learning the behaviors of robot actors in the Portland Cyber Theatre, using a database that we developed. This method combines three individual ML methods based on Multiple-Valued Logic: a Disjunctive Normal Form (DNF) rule-based method, Decision Tree, and Naïve Bayes, together with one method based on continuous representation: Support Vector Machines (SVM). Results were compared with the individual ML methods and show that the accuracy of the WHAVE method was noticeably higher than that of any of the individual ML methods tested.

Keywords— Robot Behavior; Robot Theatre; Machine Learning; Ensemble; Majority Voting System; Multiple-Valued Logic.



1. Introduction

What is the mystery of puppet theatre? Puppets are only pieces of wood and plastic, and yet the audience of a puppet theatre soon becomes immersed in the play and experiences the artistic thrill of the drama. Does the art lie in the hand that animates the puppet (indeed, a human hand)? Will it still be art if this hand is replaced by computer-controlled servomotors? What about animated movies? Children laugh and cry while perceiving a fast-changing sequence of pictures as truly live action. The movement has been recorded once and for all on tape – it never changes – and yet this does not detract from its artistic value. Can this art be recorded in a computer program, as in video games, which are also an emergent art form? Another closely related form of art is the interactive installation, and the first installations with robots are starting to appear.
The ultimate goal of our work is to create a system-level concept of an interactive, improvisational robot theatre. By experimentally analyzing issues on the boundary of art, science and engineering, we hope to build a new form of art and entertainment – a theatre of humanoid robots that interact with the audience. Like the movies in the early era of Auguste and Louis Lumière, interactive robot theatre is not yet an art, but it is definitely capable of attaining this level (see Spielberg's movie "Artificial Intelligence"). It is only a question of time and technology.
In our long-term research we intend to progress in this direction. The existing robot theatres in the world, usually in theme parks and museums, are still at a very early stage. They are based on programmed movements synchronized with recorded sounds. They do not use any Computational Intelligence techniques. They do not learn. There is no interaction with the child, for instance by voice commands. They do not teach the audience much, either. Current robot toys for adults are programmable, but they rarely learn from interactions with their users, and the keyboard programming necessary to operate them is too complex for many users. Thus, such robots are not applicable in the advertisement and entertainment industries, nor are they good educational tools for early ages. Some toys or theatres have high-quality robot puppets that are not interactive and have no voice capabilities. Other "theatres" use computer-generated dialog but have very simple, non-humanoid robots, such as wheeled cars. Yet other theatres are based on the visual effects of robot arm movements and have no humanoid robots at all [1,7,8,9,10,11].
The future ideal of our research is a robot muppet that would draw from the immortal art of Jim Henson [6]. Since 2000, we have been designing a set of next-generation theatrical robots and technologies that, taken together, will create a puppet theatre of seeing, listening, talking and moving humanoids. Such robots can be used for many applications, such as video kiosks, interactive presentations, historical recreations, assistive robots for children and the elderly, foreign language instruction, etc. Thus, we can categorize them as "intelligent educational robots." These robots will truly learn from examples, and the user will be able to reprogram their behavior with a combination of three techniques: (1) vision-based gesture recognition, (2) voice recognition and (3) keyboard typing of natural language dialog texts. Thus, programming various behaviors based on multi-robot interaction will be relatively easy and will lead to the development of "robot performances" for advertising, entertainment and education.
In 2003/2004 we created a theatre with three robots (version 2) [ISMVL 2006]. We related those robots to Korean culture and tradition. Several methods of machine learning, human-robot interaction and animation were combined to program and teach these robots, as embedded in their control and dialog software. The original machine learning method developed by the PSU team [3,4,5,17,18,19,23,24,25,26,27], based on the induction of multiple-valued relations from data, was applied to these robots to teach them all kinds of verbal (simplified natural language) and non-verbal (gestures, movements) behaviors [2,3,4,5]. This logic-based "supervised" learning method induces optimized rules of robot behavior from sets of many behavioral examples.
In this paper we discuss the newest variant of our theatre and especially its software. The long-term goal of this project is to perform both the theoretical research and the practical development leading to a reproducible, well-described system of high educational value. The paper covers only some theoretical issues. The rest of the paper is organized as follows. Section 2 presents the design of Version 3 of the Theatre, its background, research objectives, mechanical design and challenges. Section 3 describes software principles, the modules and layers of the system, and the speech and vision tools. Section 4 concentrates on the Machine Learning subsystem based on supervised learning using Multiple-Valued logic. Section 5 describes the robot behavior database used for testing the new method. Section 6 details the methodology of the new method, the Weighted Hierarchical Adaptive Voting Ensemble (WHAVE). Section 7 shows the experimental results of the new method and comparative results with the individual ML methods as well as with the conventional Majority Voting System (MVS) method. Section 8 concludes the paper and outlines future work.
2. The design of the Version 3 of the Theatre

2.1. Background of our theatre

The design of a theatrical robot is a complex problem. Using modern technologies, two basic types of robot theatre are possible:



  1. Theatre of human-sized robots. A natural-sized humanoid robot is difficult to build and may be quite expensive if high-quality components are used. Even the head/face design alone is very involved. The most advanced robot head of the "theatrical" type, Kismet, is a long-term research project from MIT that uses a network of powerful computers and DSP processors. It is very expensive, and the head is much oversized, so it cannot be commercialized as a "theatre robot toy". Also, Kismet did not use a comprehensive speech and language dialog technology, and it has no machine learning and no semantic understanding of spoken language. Robot theatres have also used larger humanoid robots on wheels [ref]. The development of many generations of our robots was presented in [ISMVL]. Such robots cannot walk on legs; they move like cars, which is unnatural and removes many motion capabilities that would be very useful for a theatre. After building a theatre of this type [ISMVL] we decided that having a full-body realistic robot is a better choice, because controllable legs and the corresponding whole-body design are fundamental to theatre realism. Therefore, we moved to small robots that can walk and perform many body movements.

  2. Theatre of small robots. Small humanoid bipeds, or other walking robots such as dogs or cats. Japanese robot toys such as Memoni, dog.com and Aibo have primitive language understanding and speech capabilities. They have no facial gestures at all (Memoni), only a few head motions (Aibo), or only 3 degrees of freedom (to open the mouth and move the eyes and head, in dog.com). These Japanese robots cost in the range of $120 (Memoni and dog.com) to $850 (the cheapest Aibo). After experiences with the small biped humanoids iSOBOT, Bioloid and KHR-1, we decided to purchase new Jimmy robots [ref], at $1600 each. DESCRIBE HERE THESE ROBOTS. Theatrically, the weakest point of the Jimmy robots is that their faces are not animated and their heads have only two degrees of freedom. Based on our comparisons with all previous types of theatrical robots that we built or purchased and programmed, we decided that Jimmys are a better choice for creating a robot theatre.

We developed an interactive humanoid robot theatre with more complex behaviors than any of the previous toys and human-sized robots used in theatres; this was possible thanks to the robots' small size. The crux of this project is sophisticated software rather than hardware design. To our knowledge, nobody in the world has so far created a complete theatre of talking and interacting humanoid biped robots. This is why we want to share our experiences with potential robot theatre developers. Building a robot theatre can only be done based on many partial experiences accumulated over years, and this experience has to include knowledge of components, design techniques, programming methods and available software tools. The mechanical Jimmys themselves are rather inexpensive for such an advanced technology, which will allow even high schools to reuse our technology. A successful robot for this project should not be costly, because our goal is for the project to be repeated by other universities and colleges. The creation of a Web page for this project will also serve this goal [ref].


2.2. Research objectives

The main research objectives of this project were the following:



  1. Develop inexpensive and interactive robot puppets based on the play Hoc Hoc about robots' creativity, written especially for robot theatre by the well-known Polish writer and director Maciej Wojtyszko.

  2. The movements and speech behaviors of these puppets should be highly expressive.

  3. Students who work on robot theatre related mini-projects should learn the mechanical assembly of robots from kits, their mechanical and electrical modifications for the theatre, motion, sound and lights animation, artistic expression and emotion animation, computer interface and control/learning software development.

  4. Develop a basic level of software with all kinds of parameterized behaviors that are necessary to perform the complete puppet play, Hoc Hoc, with three bipeds, two iSOBOTs and a larger Anchor Monster robot.

  5. Develop and analyze an "animation language" for writing scripts that describe both the verbal and non-verbal communication of the robots interacting among themselves and with the public. The language should include a "language of emotions", a language of dialog, a language of interaction and a language of control.

  6. On top of this technology, develop a machine-learning-based methodology that, based on many behavioral examples, will create expressions (rules) in the animation language. This will allow the robots to understand a limited subset of English, talk in English without reference to ready scripts, and be involved in a meaningful verbal and non-verbal interactive natural language (English) dialog with humans, limited mostly to the subjects of the Hoc Hoc play. These capabilities are additional to the scripted behavior of the robots in the play.

  7. Use the theatre to teach students practically the concepts of kinematics, inverse kinematics, PID control, fuzzy logic, neural nets, multiple-valued logic and genetic algorithms. In a very limited way, the theatre audience is also taught, as the robots explain their knowledge and visualize their thinking process on monitors.

Below we will briefly discuss some issues related to meeting these objectives in the practical settings of this project.


2.3. Mechanical design of Hoc Hoc Theatre

What should the robots look like? How can we create a technology that is both inexpensive and well suited for robot theatre and interactive displays? In contrast to current toys, which are either robot arms, animals with non-animated faces or mobile robots, the core of our "theatre robot" is a set of three synchronized humanoid biped robots with computer vision, speech recognition and speech synthesis abilities, and a distributed computer network to control them. Other robots can be added to or removed from the performance.


Many interesting effects can be achieved because the robots as a group have many degrees of freedom. This is a clear advantage over our previous theatre, which was too static and mechanical [ISMVL, Sunardi]. Our small bipeds can walk, turn left and right, lie down and get up, do karate and boxing poses, dance, and perform many more poses and gestures (a pose is static; a gesture is a sequence of poses that does not require external sensor information). The robots have built-in microphones, gyros, accelerometers and cameras. A Kinect device looks at humans and is used as part of the system to respond to their gestures and words. Ceiling cameras build a map of the positions and orientations of all robots.
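To make the pose/gesture distinction concrete, the following is a minimal sketch in Python (the theatre software itself is written in Visual Basic and controlled through CRL, so this is an illustration only); the servo names and values are hypothetical.

import time

# A minimal sketch (not the actual theatre code): a pose as a dictionary of servo
# targets and a gesture as a timed sequence of poses, following the definition above
# ("a pose is static; a gesture is a sequence of poses").
wave_up   = {"right_shoulder": 2900, "right_elbow": 1200}   # hypothetical servo values
wave_down = {"right_shoulder": 2900, "right_elbow": 2400}

# A gesture is an ordered list of (pose, hold_time_in_seconds) pairs.
wave_gesture = [(wave_up, 0.4), (wave_down, 0.4), (wave_up, 0.4)]

def play_gesture(gesture, send_to_servos):
    # Send each pose to the servo driver, then hold it for the given time.
    for pose, hold in gesture:
        send_to_servos(pose)
        time.sleep(hold)

# Example usage with a stand-in "driver" that just prints the commands:
play_gesture(wave_gesture, send_to_servos=print)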
2.4. The Hoc Hoc Play and its adaptation.

The Hoc Hoc play has been translated and adapted by us from the original Polish text for biped and mobile robots. Because we want to perform this play for children's audiences in the US, we rewrote the original script to modernize the play and make it easy to understand (the original play was perhaps aimed at adults). There are three main characters: TWR – a Text Writing Robot, MCR – a Music Composing Robot, and BSM – a Beautifully Singing Machine which sings and dances [29]. Together, these robots write a song, compose music, dance, sing and perform, explaining as a byproduct the secret of creativity to the young audience. We 3D-printed the bodies of the aluminum-skeleton robots. The colors of the robots are used to easily distinguish them with the ceiling camera (Figure 2.1). The other robots, the iSOBOTs and the Anchor Monster, are shown in Figure 2.2.




Figure 2.1. From left to right: TWR robot, MCR robot with the face of the last author, the BSM robot.
Figure 2.2. iSOBOTs and Anchor Monster.
In our theatre there are two types of robot behavior in every performance.

  1. The first type is like a theatrical performance where the sentences spoken by the actors are completely "mechanically" automated using the XML-based Common Robotics Language (CRL) that we developed [28, Bhutada]. The same is true for body gestures. The individual robot actions are programmed, graphically edited, or animated directly on the robot by posing its mechanical poses and their sequences (Section 4.4). All robot movements, speech, lights and other theatrical effects are controlled by a computer network. Every performance is the same; it is "recorded" once and for all as a controlling software script, as in Disney World or similar theme parks.

  2. The second type, much more interesting and innovative, is an interactive theatre in which every performance is different; there is much improvisation and interaction of the robots with the public. (The software scripts are not fixed but are learned in the interaction processes of humans and robots.) In human theatre, such elite experimental performances are known from the "Happening Movement", Grotowski's Theatre, Peter Brook's Theatre, and other top theatre reformers of the world. In this part, the public is able to talk to the robot actors, ask them questions, play games with them, and ask them to imitate gestures. This is when the robots demonstrate language understanding, improvisational behaviors, "emergent" emotions and properties of their personalities. Autonomous behavior, vision-based human-robot interaction and automatic speech recognition are demonstrated only in this second part. The second type has many levels of difficulty of dialog and interaction, and what was practically demonstrated in version 2 was only the first step. In every performance the two types can be freely intermixed. The methods used in version 3 form the main part of this paper.


2.5. Challenges.

This project brings entirely new challenges, artistic challenges closely related to technical or even scientific ones:



  1. What should be the voices of the robots? Recorded, text-to-speech, or something else?

  2. How to animate emotions, including emotional speech patterns?

  3. How to combine digitized speech with text-to-speech synthesized voice?

  4. What is the role of interactive dialogs, both from the point of view of the play itself and from the educational point of view?

  5. How to animate gestures for interactive dialogs?

  6. How can machine learning technology – either the one we developed earlier or a new one – be applied uniformly to movements, emotions, voice, acting and dialogs?

  7. How much of the script of the play should be predefined and how much spontaneous and interactive?

  8. How should we develop a language, including its voice synthesis and emotion modeling aspects, that will be easy enough to be used by artists (directors) who will program future performances without the help of our team of designers/engineers?

The design has been done with the goal in mind that the whole performance should be no longer than 25 minutes, and that each run of it should be sufficiently different to not bore a viewer of repeated performances.


3. Software development

3.1. Principles of software

The system is programmed in Visual Basic 6, Visual Basic .NET and other languages from Microsoft Visual Studio. We use modern technology for speech recognition and speech synthesis (Fonix and Microsoft SAPI). Vision programming makes heavy use of OpenCV from Intel [ref]. In this paper, our emphasis is not on speech or vision; we use existing speech and vision tools whose speech-processing and vision internals are accessed from Visual Basic or Visual C environments. These are high-quality tools, the best available, but of course the expectations for recognition quality must be realistic. What we have achieved so far is speech recognition of about 300 words and sentences, which is enough to program many interesting behaviors, since speech generation and movement are more important for theatrical effect than speech recognition. The commercial speech tools are improving quickly every year, and they are definitely far ahead of university software. In general, our advice to robot builders is: "use the available multimedia commercial technology as often as possible". Let us remember that words communicate only about 35% of the information transmitted from a sender to a receiver in human-to-human communication. The remaining information is contained in body movements, facial mimicry, gestures, posture and external appearance – the so-called paralanguage.


Figure 2.3. Stage and Window of Portland Cyber Theatre

Figure 2.4. …



Figure 2.5. Face detection localizes the person (red rectangle around the face) and is the first step for feature recognition, face recognition and emotion recognition.
In our theatre, the audience is in the corridor and sees the stage, located in the Intelligent Robotics Laboratory, through a large glass window (Figure 2.3). One member of the audience communicates with the theatre by speech and sounds, by typing on a smartphone, and by gestures recognized by a Kinect camera. A monitor close to the stage gives this person feedback. Face detection (see Figure 2.5) can find where the person is located, thus aiding the tracking movement of the robot, so that the particular robot that is currently interacting with the human turns to and addresses this human. The CRL scripts link the verbal and non-verbal behavior of the robots. Figure 2.6 shows the human recognition software that learns about the human and what he or she communicates to the robot. Of course, in our theatre the quality of animation is limited by the robots' mechanical size, simplified construction, limited number of DOFs and sound. It is therefore interesting that high-quality artistic effects can be achieved in puppet theatres and mask theatres, which are also limited in their expression. How can we achieve similar effects within the limitations specific to our theatre? Movement animation is an art more than a science, but we want to extract as much science from the art as possible [ref Mathias].
In brief, the dialog/interaction software has the following characteristics:

  1. Our system includes Eliza-like dialogs based on pattern matching and limited parsing [ref Eliza]. Memoni, Dog.Com, Heart, Alice, and Doctor all use this technology, some quite successfully. For instance, the Alice program won the 2001 Loebner Prize (a Turing test competition) [ref]. This is the "conversational" part of the robot brain and a kind of supervising program for the entire robot, based on blackboard architecture principles. We use our own modification of the Alice software [ref. Josh Sackos].

  2. A model of the robot is used. The robot knows about its own motions and behaviors; they are all listed in a database and called by the intelligent and conversational systems. The robot also registers simple perceptions such as "I see a woman", "I see a book", "the human raised his right hand", "the human wants me to kneel", "the human points to TWR", "the word 'why' has been spoken".

  3. A model of the user is used in the conversational programs. The model of the user is simple: the user is classified into one of four categories: child, teenager, adult (middle-aged), and old person (often a professor). Suppose that the question "what is a robot?" is asked. If the human is classified as a child, the answer is "I am a robot". If the user is a teenager, the answer is "A robot is a system composed of a perception subsystem, such as vision or radar, a motion subsystem, such as a mobile base, and an intelligence subsystem, which is the software that links the robot's behavior to its perceived environment." If the user is an adult, the full definition from Wikipedia is given, with figures and tables. If the user is classified as old, which is often a professor, the answer is "Can you explain first what your background is? Are you a robotics specialist?" (A minimal sketch of this user-dependent answer selection is given after this list.)

  4. A scenario of the situation is given. This means that in the script of the action there are internal states. The robot has to follow the sequence of states, which can, however, be modified by external parameters. In addition, in every state several variants of behavior are available, from which the robot can select some to improvise behaviors.

  5. The history of the dialog is used in the conversational programs. This means that the robot learns and memorizes some information about the audience members, such as their names and genders.

  6. Use of both word spotting and continuous speech recognition. The detailed analysis of speech recognition requirements can be found in [14].

  7. Avoiding "I do not know" and "I do not understand" answers from the robot during the dialog. Our robot always has something to say, in the worst case something nonsensical and random. The value "zero" of every variable in learning means "no action". False positives lead to somewhat strange robot behaviors with additional unexpected movements or words, while every false negative leads to the avoidance of the action corresponding to this variable. Thus, in contrast to standard learning from examples, we are not afraid of false positives; on the contrary, they often create fun patterns when observing the results of learning. In one of our past performances, when the robot was not able to answer a question it randomly selected one of three hundred Confucian proverbs, and many times the user was fooled into thinking that the robot was actually very smart.

  8. Random and intentional linking of spoken language, sound effects and facial gestures. The same techniques will be applied for theatrical light and sound effects (thunderstorm, rain, night sounds).

  9. We use parameters extracted from transformed text and speech as generators of gestures and controls of the jaw (face muscles). This is in general a good idea, but the technology should be further improved, since it sometimes leads to repeated or unnatural gestures.

  10. Currently the large humanoid robot (the showman Anchor Monster) tracks the human with its eyes and neck movements. This is an important feature and we plan to extend it to the small bipeds. Maintaining eye contact with the human gives the illusion of the robot's attention. A camera is installed on the head. In the future there will be more than one camera per robot. There will also be more "natural background" behaviors such as eye blinking, breathing, hand movements, etc. A simplified diagram of the entire software is shown in Figure 2.7.
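As an illustration of the user model from item 3 above, the following is a minimal, hypothetical Python sketch of the user-dependent answer selection; the category names and answers follow the text, while the function and dictionary names are ours and not part of the actual dialog code.

# Hypothetical sketch of the user-model-dependent answer selection from item 3;
# the categories and answers follow the text, the names are illustrative only.
ANSWERS_WHAT_IS_A_ROBOT = {
    "child": "I am a robot.",
    "teenager": ("A robot is a system composed of a perception subsystem, "
                 "a motion subsystem and an intelligence subsystem."),
    "adult": "<full definition from Wikipedia, with figures and tables>",
    "old": "Can you explain first what your background is? Are you a robotics specialist?",
}

def answer_what_is_a_robot(user_category):
    # Fall back to the adult answer if the classifier returns an unknown category.
    return ANSWERS_WHAT_IS_A_ROBOT.get(user_category, ANSWERS_WHAT_IS_A_ROBOT["adult"])

print(answer_what_is_a_robot("teenager"))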


Figure 2.6. Acquiring information about the human: face detection and recognition, emotion recognition, speech recognition, gender and age recognition.



Figure 2.7. A simplified diagram of the software explaining the principle of using machine learning to create new interaction modes between a human and a robot theatre. ID3 is decision-tree-based learning software; MVSIS and Orange are the general-purpose multiple-valued tools used here for learning. The input arrows come from sensors, the output arrows go to effectors.
3.2. Software modules

Here are the main software modules.



  • Motor/Servo. A driver class with a large command set, relative and direct positioning, as well as speed and acceleration control and a positional feedback method.

  • Text To Speech. Microsoft SAPI 5.0, DirectX DSS speech module, chosen for its good viseme mapping and multiple text input formats.

  • Speech Recognition. Microsoft SAPI 5.0; using an OCX listbox extension, the speech recognition can be easily maintained.

  • Alice. One of the most widely used formats for Alice languages on the Internet uses *.aiml files; a compatible open-source version was found and modified [Josh Sackos].

  • Vision. An open-source package by Bob Mottram, using a modified OpenCV DLL that detects facial gestures, was modified to allow tracking and mood detection [ref].

  • IRC Server. To allow for scalability, an open-source IRC server was included and modified so that direct robot commands could be sent from a distributed base.

  • IRC Client. An IRC client program was created to link to and send commands to the robot; this will allow for future expansion. Implemented in both .NET and VB 6.


3.3. Layers of software.

In order for the illusion of natural motion to work, each module must interact with the others in what appears to be a seamless way. Rather than attempting to make one giant seamless entity, multiple abstractions of differing granularities were applied. The abstractions are either spatial or temporal; the robot's positional state is handled at collective and atomic levels, e.g., right_arm_gesture(x) and left_elbow_bend(x), where the collective states are temporal and dynamic.


All functions are ultimately triggered by one of multiple timers. The timers can be classified as major and minor timers. Major timers run processes that handle video, sound, etc.; they usually have static frequency settings and are always enabled. Minor timers are dynamic and, depending on the situation, are enabled and disabled routinely throughout the operation of the robot. To further help mask the mechanical behavior of the robot, many of the major timers are intentionally set to mutually non-harmonic frequencies, which allows for different timing sequences and a less "clock-like" character of behaviors.
Motions can be broken into reflex, planned/gesture-related, and hybrid functions, with some body parts working in certain domains more than others. The mouth and eyes are mostly reflex, the arms are planned, and the neck is slightly more hybrid. Each body part of course crosses boundaries, but this division allows each function to be created and eventually easily prototyped. In fact, the actual accessing of all functions becomes ultimately reflexive in nature due to the use of triggers that all originate from reflexive and timed reflexive subroutines. For instance, a person says "hello"; moments later a speech recognition timer triggers, Alice runs and generates a response, the response creates mouth movement, and the mouth movement occasionally triggers gesture generation – all types of functions triggered by just one of the reflex timers.
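As an illustration of this timer organization, the following is a minimal Python sketch (the actual system uses Visual Basic timers) of always-enabled major timers with mutually non-harmonic periods and a minor timer that is enabled and disabled as the situation requires; all names and periods are illustrative assumptions.

import threading

class PeriodicTimer:
    # Repeatedly call `callback` every `period` seconds until stopped.
    def __init__(self, period, callback):
        self.period, self.callback, self.enabled = period, callback, False
    def _tick(self):
        if not self.enabled:
            return
        self.callback()
        threading.Timer(self.period, self._tick).start()
    def start(self):
        self.enabled = True
        self._tick()
    def stop(self):
        self.enabled = False

# Major timers: always enabled, fixed non-harmonic periods (0.7 s and 1.1 s are not
# integer multiples of each other, so their triggers drift apart over time).
video_timer  = PeriodicTimer(0.7, lambda: print("process video frame"))
speech_timer = PeriodicTimer(1.1, lambda: print("poll speech recognition"))

# Minor timer: enabled only in a particular situation (e.g., while the robot speaks).
mouth_timer  = PeriodicTimer(0.2, lambda: print("update mouth servo"))

for t in (video_timer, speech_timer, mouth_timer):
    t.start()

# Stop everything after 3 seconds so this sketch terminates.
threading.Timer(3.0, lambda: [t.stop() for t in (video_timer, speech_timer, mouth_timer)]).start()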
3.4. Alice, TTS and SR

A standard Alice with good memory features was employed as the natural language parser. Microsoft SAPI 5.0 has a Direct Speech object which can read plain text, such as that provided by the Alice engine; it also exposes viseme information that is easily used to control and time mouth and body movement in a structured form. MS SAPI 5.0 was also used for speech recognition. With most speech recognition programs, the library of words to be searched is either based on Zipf's law or has to be loaded with a tagged language, meaning that on-the-fly generation of new language recognition is troublesome. Fortunately, SAPI has a listbox lookup mechanism that requires no extra tagged information. The problem with using a finite list is that SAPI tries so hard to identify something that it quite often makes mistakes. To combat this, short three- and two-letter garbage words for most phonemes were created. The program ignores any word of three letters or fewer when it is not accompanied by any other words.
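The garbage-word rule just described can be summarized in a few lines; the following sketch is independent of SAPI itself, and the function name and example words are our own illustration.

def filter_recognition(words):
    # Keep the result if it contains at least one word longer than three letters;
    # a result made up only of short "garbage" words is discarded, as described above.
    if any(len(w) > 3 for w in words):
        return words
    return []

print(filter_recognition(["buh"]))             # []               - lone garbage word ignored
print(filter_recognition(["dance", "buh"]))    # ['dance', 'buh'] - kept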
Currently the mouth synchronization can lag due to video processing and speech recognition; to combat this on a single computer, the video stream had to be stopped and restarted during speech. The video processing and speech recognition are extremely taxing for one laptop computer, and in order to get optimal responses the robot should be improved with the addition of a wireless 802.11 camera so that other computers can do the video processing.


4. Using Machine Learning system for robot learning

4.1. The system.

While commercial dialog systems are boring, with their repeated "I do not understand. Please repeat the last sentence" behaviors, our robots are rarely "confused". They always do something, and in most cases their action is slightly unexpected. This kind of robot control is impossible for standard mobile robots and robot arms, but it is an interesting possibility for our entertainment robots. This control also combines some logic and probabilistic approaches to robot design that are not yet used in robotics. In addition to the standard dialog technologies mentioned above, a general-purpose logic learning architecture is used that is based on methods that we have developed over the last 10 years at PSU [3-5,17-19,21,23-27] and just recently applied to robotics [2,21,14]. In this paper we use and compare several Machine Learning methods that have not been used previously in our robot theatre, nor in any other robot theatre. We assume that the reader has a general understanding of Machine Learning principles, and here we concentrate mostly on the theatre application aspects.


The general learning architecture of our approach can be represented as a mapping from vectors of features to vectors of elementary behaviors. There are two phases of learning:

  1. The learning phase (training phase), which consists of preparing the set of input-output vectors in the form of an input table (Figure xx) and next generalizing the knowledge from the care input-output vectors (minterms) to the don't care (don't know) input-output vectors. The learning process is thus a conversion of the lack of knowledge for a given input combination (a don't know) into learned knowledge (a care). In addition to this conversion, a certain description is created in the form of network parameters or a Boolean or multiple-valued function.

  2. The testing phase, when the robot uses the learned description to create outputs for input patterns that were not shown earlier (the don't knows) – for instance, answering questions to which answers were not recorded, or using analogy to create motions for command sequences which were not used in the teaching samples.

While the learning process itself has been discussed extensively in our previous papers, the system and preprocessing aspects are of special interest for a robot theatre. For every sample, the feature values are extracted from five sources: (1) the frontal interaction camera (Kinect), (2) speech recognition (Kinect), (3) text typed on smartphones, (4) the ceiling cameras, and (5) the skin/body sensors of the robots. They are stored in a uniform language of input-output mapping tables, with the rows corresponding to examples (samples, input-output vectors, minterms of characteristic functions) and the columns corresponding to the feature values of input variables (visual features, face detection, face recognition, recognized sentences, recognized information about the speaker, in current and previous moments) and output variables (names of preprogrammed behaviors or their parameters, such as servo movements and text-to-speech).

Such tables are a standard format in logic synthesis, Data Mining, Rough Sets and Machine Learning. These tables are created by encoding in a uniform way the data coming from all the feature-extracting subroutines. Thus the tables store examples of the mappings to be constructed. If teaching data is encountered again during the table's evaluation, exactly the same output data as found during teaching is produced by the mapping specified by the table. But what if a new input pattern is given during evaluation, one that never appeared before? Here the system makes use of analogy and generalization based on Machine Learning principles [24-27].
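The following minimal sketch illustrates how such a table behaves during evaluation: an exact match of a stored care returns the taught output, while a don't know is passed to a learned model for generalization. The table contents and the decision-tree stand-in are illustrative only, not the actual MVSIS/Orange pipeline.

from sklearn.tree import DecisionTreeClassifier

# Care minterms: input feature vector (multiple-valued codes) -> taught output behavior code.
cares = {
    (0, 1, 3): 3,
    (3, 0, 0): 0,
    (2, 2, 1): 1,
}

# A learned model is used to generalize to the don't knows.
model = DecisionTreeClassifier().fit(list(cares.keys()), list(cares.values()))

def evaluate(inputs):
    if inputs in cares:                         # taught example: return exactly the taught output
        return cares[inputs]
    return int(model.predict([inputs])[0])      # unseen input (don't know): generalize

print(evaluate((0, 1, 3)))   # a care: returns the taught value 3
print(evaluate((0, 2, 2)))   # a don't know: returns the generalized prediction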





Figure 2.8. Facial features recognition and visualization in an avatar.



Figure 2.9. Use of the multiple-valued (five-valued) variables Smile, Mouth_Open and Eye_Brow_Raise for facial feature and face recognition.
4.2. Various patterns of Supervised Learning in our system

The input-output vector serves for "teaching by example" of a simple behavior. For instance, the vector can represent the directive "If the human smiles and says 'dance' then the robot dances" (a minimal coded sketch of such a directive is given after the list below). Observe that this directive requires the following:


  1. the camera should recognize that the person smiles; this is done by pre-programmed software that answers the question "does the person smile?". Similarly, we use the Kinect to recognize gestures such as "hand up", "hand down", etc.,

  2. the result of smile recognition is encoded as a value of the input variable "smile" in the set of variables "facial features" (Figure 2.9),

  3. the word-spotting software should recognize the word "dance". Similarly, the software recognizes the commands "funny", "kneel" and others,

  4. the word "dance" is encoded as a value of the variable "word_command",

  5. logical reasoning establishes that both the smile and the "dance" conditions are satisfied, so their logical AND is satisfied,

  6. as the result of this logical reasoning the output variable "robot_action" obtains the value "robot_dances",

  7. there exists a ready subroutine "robot_dances" with recorded movements of all servomotors and text-to-speech synthesis / recorded sound,

  8. this subroutine is called and executed.
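The following is a minimal, hypothetical end-to-end sketch of this directive; the names, encodings and the robot_dances stub are illustrative assumptions rather than the actual theatre software.

# Hypothetical sketch of the directive "if the human smiles and says 'dance',
# the robot dances"; all names and encodings are illustrative.
def robot_dances():
    print("playing the pre-recorded dance motion and sound")   # stub for the stored subroutine

BEHAVIORS = {"robot_dances": robot_dances}

# Taught rule: (value of "smile", value of "word_command") -> name of the output behavior.
RULES = {("smile", "dance"): "robot_dances"}

def react(smile_value, word_command):
    action = RULES.get((smile_value, word_command))   # both conditions must match (logical AND)
    if action is not None:
        BEHAVIORS[action]()                           # call the ready subroutine

react("smile", "dance")   # vision reported a smile and word spotting heard "dance"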

This directive is stored in the robot's memory, but more importantly, it is used as a pattern in constructive induction, together with other input-output vectors given to the robot by the human in the learning phase. Observe in this example that there are two components of the input-output vector. The input part consists of symbolic variable values representing features that come from processing the sensor information. They describe "what is currently happening". The teacher can, for instance, give the robot the command: "if there is THIS situation, you have to smile and say hello". "This situation" means what the robot's sensors currently perceive, including speech recognition. In our theatre the sensors of Jimmy are: an accelerometer, a gyro, a microphone and a camera. The iSOBOTs have no sensors: their behaviors are just sequences of elementary motions. The Monster Robot has a Kinect camera with microphones.


The presented ML methodology allows for variants:

  1. The input variables can be binary or multi-valued

  2. The output variables can be binary or multi-valued

  3. There can be one output (decision) variable, or many of them.

  4. If there is one output variable, its values correspond to various global behaviors. For instance value 0 can mean “no motion”, value 1 – “turn right”, value 2 – “turn left”, value 3 – “say hello”, value 4 – “dance”.

  5. If there is more than one output variable, each output variable corresponds to a certain aspect of behavior or motion. For instance, O1 can be the left arm, O2 the right arm, O3 the left leg, and O4 the right leg. The global motion is composed of the motions of all DOFs of the robot. In the same way, text spoken by the robot can be added, synchronized with the motion. (A minimal sketch of this multi-output encoding is given after this list.)
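A minimal sketch of the multi-output encoding from item 5; the limb names and value codes are illustrative assumptions.

# Illustrative multi-output encoding: one output variable per limb plus synchronized text.
# Assumed value codes per limb: 0 = no motion, 1 = raise, 2 = lower, 3 = wave.
LIMB_ACTIONS = {0: "no motion", 1: "raise", 2: "lower", 3: "wave"}

def execute_output_vector(o1, o2, o3, o4, speech=""):
    # Compose the global motion from the per-DOF output variables O1..O4.
    for limb, value in zip(("left arm", "right arm", "left leg", "right leg"),
                           (o1, o2, o3, o4)):
        print(limb + ":", LIMB_ACTIONS[value])
    if speech:
        print("say (synchronized):", speech)

execute_output_vector(3, 3, 0, 0, speech="Hello!")   # wave both arms while greeting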

When the robot communicates in its environment with a human (we assume now that there is only one active human in the audience), the input variables of the vector continuously change, for instance when the person stops smiling, says another word or turns away from the camera. The output part of the vector is some action (behavior) of the robot. It can be very simple, such as frowning or saying "nice to meet you", or a complex behavior such as singing a song with full hand gesticulation. The input and output variables can thus correspond not only to separate poses, but also to sequences of poses, i.e., shorter or longer "elementary gestures". In this way "temporal learning" is realized in our system.


Examples of feature detection are shown in Figures 2.8 and 2.9. The eye, nose and mouth regions of a human are put into separate windows and the numerical parameters for each are calculated (Figure 2.8). The symbolic face on the right demonstrates what has been recognized. In this example the smile and the eyebrows were correctly recognized, but the direction of the eyes was not, because the human looks to his right while the avatar at the bottom right of Figure 2.8 looks to his left. The teaching process is the process of associating perceived environmental situations with expected robot behaviors. The robot behaviors are of two types. The first type are simply symbolic values of output variables; for instance the variable left_hand can have the value "wave friendly" or "wave hostile", encoded as values 0 and 1. Otherwise, binary or MV vectors are converted to names of output behaviors using a table.
The actions corresponding to these symbols have been previously recorded and are stored in the library of the robot's behaviors. The second type of symbolic output values are certain abstractions of what is currently happening with the robot's body and of which the robot's brain is aware. Suppose that the robot is doing some (partially) random movements, or recorded movements with randomized parameters. The input-output directive may be "if somebody says hello then do what you are actually doing". This means that the directive does not take the output pattern from memory as usual, but extracts parameters from the current robot behavior to create a new example rule. This rule can be used to teach the robot in the same way as the rules discussed previously. Finally, there are input-output vectors based on the idea of reversibility. There can be an input pattern which is the symbolic abstraction of a dancing human, as seen by the camera and analyzed by the speech recognition software. The human's behavior is abstracted as an input vector of multiple-valued values. Because of the principle of "symmetry of perception and action" in our system, this symbolic abstraction is uniquely transformed into an output symbolic vector that describes an action of the robot. Thus, the robot executes the observed (but transformed) action of the human. For instance, in a simple transformation, IDENTITY, the input pattern of a human raising his left arm is converted to the output pattern of the robot raising its left arm. In the transformation NEGATION, the input pattern of a human raising his left arm is converted to the output pattern of the robot raising its right arm. And so on; many transforms can be used. Moreover, the robot can generalize this pattern by applying the principles of Machine Learning that are used consistently in our system. This is a form of combining learning by example (or mimicking) with generalization learning.
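A minimal sketch of the perception-action symmetry and of the IDENTITY and NEGATION transforms described above; the variable names, value encoding and mirror mapping are illustrative assumptions.

# Illustrative perception-to-action transforms; names and encodings are assumptions.
MIRROR = {"left_arm": "right_arm", "right_arm": "left_arm",
          "left_leg": "right_leg", "right_leg": "left_leg"}

def identity_transform(human_pose):
    # IDENTITY: the robot repeats the observed motion on the same side.
    return dict(human_pose)

def negation_transform(human_pose):
    # NEGATION: the robot mirrors the observed motion to the opposite side.
    return {MIRROR.get(var, var): value for var, value in human_pose.items()}

observed = {"left_arm": 1}                  # 1 = raised (assumed encoding)
print(identity_transform(observed))         # {'left_arm': 1}
print(negation_transform(observed))         # {'right_arm': 1}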
4.3. Examples of Robot Learning

Data in the tables are stored as binary and, in general, multiple-valued logic values. Continuous data must first be discretized to multiple-valued logic, a standard task in Machine Learning. The teaching examples that come from preprocessing are stored as (care) minterms (i.e., combinations of input/output variable values). In our experiments, we first use the individual machine learning classification methods for training, testing and tuning on the database [38, 40]. In this work we picked four different machine learning methods to use as the individual ML methods, therefore n = 4: the Disjunctive Normal Form (DNF) rule-based method (CN2 learner) [37, 38], Decision Tree [38, 40], Support Vector Machines (SVM) [38, 40] and Naïve Bayes [38, 40, 44]. Each ML classification method goes through training, testing and tuning phases [Orange, 2,3,4,5] (see Figure 4.7). These methods are taken from the Orange system developed at the University of Ljubljana [ref] and the MVSIS system developed under Prof. Robert Brayton at the University of California at Berkeley [2]. The entire MVSIS or Orange system can also be used. The bi-decomposer of relations and other useful software used in this project can be downloaded from http://wwwcad.eecs.berkeley.edu/mvsis/. As explained above, the system generates the robot's behaviors from examples given by users. This method was used in [2] for embedded system design, but we use it specifically for robot interaction. It uses a comprehensive Machine Learning / Data Mining methodology based on constructive induction, and particularly on hierarchically decomposing decision tables of binary and multiple-valued functions and relations into simpler tables, until tables of trivial relations that have direct counterparts in behaviors are found.
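The following sketch illustrates this train-and-test setup using scikit-learn stand-ins for the four methods (CN2 rule induction has no direct scikit-learn equivalent, so a rule-like shallow decision tree is substituted); it is not the Orange/MVSIS code used in the project, and the data is synthetic.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import CategoricalNB
from sklearn.svm import SVC

# Toy, randomly generated multiple-valued data standing in for the behavior database.
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(200, 3))        # three quaternary features
y = (X[:, 0] + X[:, 2]) % 4                  # an arbitrary synthetic decision variable

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

learners = {
    "rule-like shallow tree (stand-in for the CN2/DNF learner)": DecisionTreeClassifier(max_depth=2),
    "Decision Tree": DecisionTreeClassifier(),
    "Naive Bayes": CategoricalNB(),
    "SVM": SVC(),
}

for name, clf in learners.items():
    clf.fit(X_train, y_train)
    print(name, "accuracy:", round(clf.score(X_test, y_test), 2))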



We explain some of the ML principles applied to the robot theatre using three very simplified examples.

Example 4.1. Suppose that we want our robot to respond differently to various types of users: children, teenagers, adults and old people. Let us use the following fictional scale for the properties, or features, of each person: a = smile degree, b = height of the person, c = color of the hair. (To simplify the tables, we use four values of the variable "smile" instead of the five values shown in Figure 2.9.) These are the input variables. The output variable Age has four values: 0 for kids, 1 for teenagers, 2 for grownups and 3 for old people. The characteristics of the feature space for people recognition are given in Figure 4.1. The robot is supposed to learn the age of the human that interacts with it by observing, using the Kinect, the smile, height and hair color of the human.


Figure 4.1. Space of features to recognize age of a person.



Figure 4.2. Input-output mapping of examples for learning (cares, minterms).
The input-output mapping table of learning examples is shown in Figure 4.2. These samples were generated at the output of the vision system, which encoded the smile, height and hair color of four humans standing in front of the front Kinect camera. Here, all variables are quaternary. The learned robot behavior is the association of the input variables (Smile, Height, Hair Color) with the action corresponding to the perceived age of the human (the output variable Age). Thus, the action for the value Kid is to smile and say "Hello, Joan" (the name was learned earlier and associated with the face). If the value Teenager is the output Age obtained by propagating the values of the inputs Smile, Height and Hair Color through the learned network, then the robot action "Crazy Move" and the text "Hey, Man, you are cool, Mike" are executed, and so on for the other people. The quaternary map in Figure 4.3 (a generalization of the Karnaugh Map called a Marquand Chart; the variables are ordered in natural code and not in Gray code) shows the cares (examples, objects) in the presence of many "don't cares". The high percentage of don't cares, here called "don't knows", is typical for Machine Learning. These don't knows are converted to cares as a result of learning the expression (the logic network). When the Age of the human is recognized, all actions of the robot can be personalized according to his or her age. The slot Age in the database record for every person, Joan, Mike, Peter and Frank, is filled with the corresponding learned data.



Figure 4.3. The quaternary Marquand Chart to illustrate cares (learning examples) for age recognition.


Figure 4.4. One result of learning. The shaded rectangle on top has value 3 of output variable for all cells with a=0. The shaded rectangle below has value 1 for all cells with a=2.
This is illustrated in Figure 4.4, where the solution [Age=3] = a0 is found, which means that an old person is a person with a low value of the variable Smile. In other words, the robot learned here from examples that old people rarely smile. Similarly, it is found that [Age=0] = a3, which means that children smile, opening their mouths broadly. Observe that the learning in this case found only one meaningful variable, Smile; the two other variables are vacuous. Observe also that with a different result of learning (synthesizing a different minimum logic network for the set of cares) the solution could be quite different, e.g., [Age=3] = c3, which means "old people have grey hair". The bias of such a system shows up, for example, in classifying all broadly smiling people as children or all albinos as old people. Obviously, the more examples are given, the lower the learning error.
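A minimal sketch of Example 4.1 with a decision tree standing in for the logic learner; the care examples are hypothetical values consistent with the rules discussed above ([Age=3] = a0, [Age=0] = a3), not the actual table of Figure 4.2.

from sklearn import tree

# Hypothetical care examples (a = smile, b = height, c = hair color), all quaternary.
X = [[0, 1, 3],   # rarely smiles                     -> Age = 3 (old person)
     [0, 3, 0],   # rarely smiles                     -> Age = 3 (old person)
     [3, 0, 0],   # smiles with mouth opened broadly  -> Age = 0 (kid)
     [3, 2, 1],   # smiles with mouth opened broadly  -> Age = 0 (kid)
     [1, 3, 1],   #                                   -> Age = 2 (adult)
     [2, 2, 2]]   #                                   -> Age = 1 (teenager)
y = [3, 3, 0, 0, 2, 1]

clf = tree.DecisionTreeClassifier().fit(X, y)

# The don't knows are filled in by the learned tree; a new, unseen person:
print(clf.predict([[0, 3, 1]]))    # low smile value -> classified as old (Age = 3)

# Inspect the induced rules (analogous to [Age=3] = a0 in the text):
print(tree.export_text(clf, feature_names=["smile", "height", "hair"]))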
Example 4.2. Observe that the data from Figure 4.3 may be given another interpretation. Suppose that the decision (output) variable in the map is no longer Age but Control and has the following interpretation for a mobile robot with global names of behaviors: 0 – stop, 1 – turn right, 2 – turn left, 3 – go forward. Then the response to the input abc = [0,1,3] will be [Control=3]. This means that when the mobile robot sees a non-smiling human with grey hair and of medium height, the robot should go forward. As a result of learning, the behavior of the mobile robot is created. If the process of learning is repeated with a probabilistic classifier, a different control rule may be extracted.
Example 4.3. Braitenberg Vehicle.
Other interesting examples of using Machine Learning in our robot theatre are given in [2,21].



A unified internal language is used to describe behaviors, in which text generation and facial and body gestures are unified. This language is for learned behaviors. Expressions (programs) in this language are either created by humans or induced automatically from examples given by trainers. Our approach includes deterministic, induced and probabilistic grammar-based responses controlled by the language. Practical examples are presented in [2,21,28]. Observe that the language is naturally multiple-valued. Not only does it have multiple-valued variables for describing humans and situations (like {young, medium, old}, {man, woman, child}, or face feature / behavior nominal variables such as smile, frown, angry, indifferent), but it also has multiple-valued operators to be applied to variables, such as minimum, maximum, truncated sum and others. The generalized functional decomposition method, which hierarchically and iteratively applies the transformation, sacrifices speed for a higher likelihood of minimizing the complexity of the final network as well as minimizing the learning error (as in Computational Learning Theory). For instance, this method automatically generalizes spoken answers in the case of insufficient information.
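As a small illustration of the multiple-valued operators mentioned above, the following sketch defines minimum, maximum and truncated sum over values 0..K-1; this is a generic MV-logic illustration, not the syntax of our internal language.

# Generic multiple-valued operators over values 0..K-1 (here K = 4); an illustration
# of the operators named above, not the internal language syntax.
K = 4

def mv_min(a, b):           # multiple-valued analogue of AND
    return min(a, b)

def mv_max(a, b):           # multiple-valued analogue of OR
    return max(a, b)

def truncated_sum(a, b):    # sum clipped to the largest logic value
    return min(a + b, K - 1)

print(mv_min(1, 3), mv_max(1, 3), truncated_sum(2, 3))   # -> 1 3 3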
Figure 4.5 shows the appearance of the robot control tool used to edit actions. On the right there is the "Servo Control Panel". Each servo has its own slide bar. The slide bar is internally normalized from 0 to 1000. As one can see, the servo for the eyes ranges from left to right, and of the two numbers next to each slide bar, the first is the position of the slider (e.g., a half-way slider is always 500) and the second, on the far right, is the position of the servo. These values are different for every servo and serve only to check that the servo is in its range and works properly. In addition, there is a checkbox on each slide bar to select the servo for a movement.


Figure 4.5. Robot control tool to edit actions.
On the left side there is the "Define Behaviors" box in which one can select a particular movement and then load it to the servos. The "Initialization" button must be pressed at the beginning to make sure that all servos are in their initial positions. The initial positions are given in the servo.ini file.

The file looks like this:

//Servo.ini
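// each line: servo_name:leftmost_value initialization_value rightmost_value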

eyes:2600 1690 780

mouth:2000 2000 3200

neck_vertical:600 2000 3300

neck_horizontal:2500 1000 -700

right_shoulder:3700 2900 -400

right_arm:3400 1720 45

right_elbow:-700 -700 2800

left_shoulder:60 650 4000

left_arm:500 1300 3800

left_elbow:3200 3200 -500

waist:1100 2320 3400

right_leg:-300 1500 3200

left_leg:3200 1700 -100


This file provides all the servo information that is necessary. The first line shows the details for servo #1. The first number is the leftmost value, the second is the initialization value, and the last is the rightmost value for the servo.
Below the "Define Behaviors" field we have the "Movement and Behavior Control Panel". In the first field the user enters the delay for the particular movement in milliseconds. Then there is the "Add Movement" button to simply add a movement to a behavior. When the user is done defining all movements for a behavior, he enters a name for the behavior and clicks the "Add Behavior" button. With the fields "Load Behaviors" and "Save Behaviors" he can load and save the programmed behaviors. It is so easy that 10-year-old children have programmed our robot at Intel's high-tech show.
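For completeness, the following is a minimal Python sketch of reading such a file; the parsing follows the format described above (name, then leftmost, initialization and rightmost values), while the function name is ours.

# Minimal sketch of reading servo.ini in the format described above
# (servo_name:leftmost_value initialization_value rightmost_value).
def load_servo_limits(path="servo.ini"):
    limits = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("//"):      # skip blank lines and comments
                continue
            name, values = line.split(":")
            left, init, right = (int(v) for v in values.split())
            limits[name] = {"left": left, "init": init, "right": right}
    return limits

# Example: drive every servo to its initialization position (stand-in "driver" just prints).
for name, lim in load_servo_limits().items():
    print("set", name, "to", lim["init"])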


