ECA Rule-based Agents for User Management of Reactive Environments



1.Introduction

Sentient Computing aims at the creation of perceptive living spaces where users' activities are enhanced by software services provided by environment-embedded devices. These environments achieve awareness of their surroundings through sensors that capture contextual information such as the location and identity of objects or the sound and temperature of a physical space. Sentient Computing combines the dynamic information conveyed by sensors with static information from data repositories (e.g. entity attributes, the geometric features of a physical location or the capabilities of devices) in order to build a model of the environment's state. Assisted by such a model, sentient systems intend to perform the right service at the right time on behalf of users. In a nutshell, Sentient Computing adds perception to physical spaces so that they can react appropriately to the people and activities taking place within them.



Sentient application development is usually a rather involved task, because it encompasses the cooperation of several distributed elements, such as a network of sensors, a database, a Location Server or the effectors undertaking the actions triggered by sensor inputs. This explains the several research efforts conducted towards facilitating context-aware application development. For example, Brown et al. [Brown+98] argue that for context-aware applications to reach the marketplace, their development must be made easy enough not to require programmers' expertise. Their contribution, the stick-e note technology, an electronic equivalent of a Post-it note, makes the creation of context-aware applications as simple as creating web pages. Unfortunately, this technology only permits the elaboration of rather simple applications, and still forces the end user to learn an XML-based language to create stick-e notes. Salber et al. [Salber+99], with their Context Toolkit framework, share a similar goal. Their framework is built on the concept of enabling applications to obtain the context they require without having to worry about how the context was sensed. It promotes three main concepts: separation of context sensing from context use, context aggregation and context interpretation. The abstractions and architectural services defined make the elaboration of sophisticated sentient systems just as simple as GUI programming.
Dey [Dey00], in a continuation of his work on the Context Toolkit framework, observes that it would be enticing to go one step further: not only facilitating sentient application development but also, more interestingly, making its creation feasible even for non computer-literate users. Only in this way would sentient applications respond to users with the right actions at the right time, i.e. the ultimate goal of Sentient Computing. Dey notes that sentient applications present a common behavioural pattern: the application monitors when the state of relevant entities matches a pre-specified situation, and then an action takes place. Thus, if it were feasible for end users to specify such a situation and the set of actions triggered as a consequence, through a GUI for example, their experience with the surrounding reactive environments would be much more satisfactory and undisruptive. Based on this observation, this work proposes a solution for the easy end-user specification of the Event-Condition-Action rules that govern reactive environments' behaviour towards them.
Only sentient environment inhabitants know how they want the system to react when certain specific contextual conditions are met. Therefore, the end user, and not the system programmer, should be in charge of specifying the conditions over contextual events upon which the sentient system will take an action on their behalf. Thus, end users can specialise a reactive environment's behaviour to their individual needs. Our work, inspired by the Event-Condition-Action (ECA) rules used in Active Databases research [Hanson+92], permits sentient environment inhabitants to construct, through a convenient GUI, sophisticated conditions over contextual events generated by sentient data sources, and to associate with them both commonly-used pre-defined actions and arbitrary ones. This work envisions the creation of software agents, representing entities such as a user, a room or a desk, that embody the set of pre-defined ECA rules governing those entities' expected interactions with the surrounding reactive physical space.
Section 2 gives an overview of related work tackling contextual situation and behaviour specification. Section 3 offers some background on Production Rule systems and the CLIPS programming language, used in the definition of ECA agents' rules. Section 4 offers an overview of the system architecture that ECA agents inhabit. Section 5 gives some details on the rule specification grammar we define and on its mapping to CLIPS statements, illustrating some complex situation specifications. Section 6 illustrates ECAgents in use, including the GUI front-end through which users can easily insert or delete ECA rules. Section 7 draws some conclusions.

2.Event-Action systems

This section offers an overview of previous work tackling user-friendly mechanisms for situation-action specification. The CybreMinder [Dey+00] tool, based on the Context Toolkit infrastructure, supports users in sending or receiving reminders, e.g. email, voice announcements or SMS messages, that can be associated with richly described situations including time, place and other pieces of context. The main merit of this tool is the capability it provides to users for specifying, through a Situation Editor GUI, arbitrary context specifications associated with a reminder. A situation is specified as a conjunction of sub-situations. A sub-situation corresponds to a pre-defined contextual event consisting of a set of name-value pairs, within which diverse relations, such as =, <= or >, can be applied. The major pitfall, however, is that the only action type that can be triggered as a result of a situation match is a notification; no other kind of action can be enacted on behalf of the user. A more generic action specification mechanism would be desirable. Moreover, it is unclear how well a reminder specification integrates with the Context Toolkit architecture. Theoretically, a reminder specification is submitted to the context aggregator corresponding to the reminder's recipient. This aggregator analyses the given situation and registers with the pertinent context sources so that it can determine when the situation has occurred. However, what would happen if those context aggregators were unaware of a given type of event submitted? It seems to us that their approach of using a procedural language for composite event specification may also present some flaws.


The EventManager [McCarthy+99] tool pursues a similar goal to CybreMinder. It presents users with an Event Management GUI with which they can define event-action specifications. Event-action specifications are constrained to a pattern of the form: when <person(s)> is/are <at location> then <action>. This means that the user is limited to issuing personnel location-related subscriptions; no other context source's information is accepted. Moreover, neither the combination of atomic events using logical operators such as AND, OR or NOT nor the use of variables is permitted. The advantage of EventManager in comparison with CybreMinder is that, apart from a set of pre-defined actions, e.g. flash icon, sound bell or popup message, the user can specify an arbitrary program execution as a result of the situation match.
Both CybreMinder and EventManager are cut-down versions of other more general-purpose event-action systems, such as Yeast [Krishnamurthy+95]. Yeast defines a client/server platform for constructing distributed event-action applications using high-level event-action specifications. The Yeast server accepts, matches and manages specifications on behalf of the user. A specification describes a pattern of events that the user is interested in, as well as the action to be triggered by the Yeast server when it has detected a match for the event pattern. The events recognised are either pre-defined event classes, such as temporal events, or user-defined events, e.g. a change in the value of an attribute of an object belonging to some user-defined class. The action is any valid sequence of commands that can be executed by the computer system's command interpreter. The most attractive features of Yeast are its openness and generality, i.e. its support for compound event patterns and arbitrary actions. Nevertheless, an important drawback of its rich specification language is the lack of support for variables. READY [Gruber+99] is a high-performance event notification service extending Yeast and addressing the use of variables in composite event matching.
Our work addresses similar issues to the CybreMinder and EventManager tools, i.e. it applies event-action systems' principles to the real world. Still, it aims to offer a richer ECA specification language, enabling the definition of more complex composite event patterns over both pre-defined and user-defined contextual events and, more importantly, the triggering of arbitrary actions.

3.Production Systems and CLIPS

Declarative languages such as Prolog or CLIPS, used in the context of Logic Programming and Forward-Chaining Production Systems respectively, are specialised in the description and analysis of relationships. Given a set of facts and rules applied to them, an inference engine built directly into the language can decide for the user what action to trigger. The programmer needs only to specify rules and facts, since the inference engine does the reasoning for them. Surprisingly, very few researchers have exploited the inherent benefits of these languages in the specification of reactive systems' behaviour rules.


Stafford-Fraser et al. [Stafford-Fraser+96], in their work on Video Augmented Environments, define the BrightBoard system, which uses a video camera and audio feedback to enhance the facilities of an ordinary whiteboard, allowing a user to control a computer through simple written marks on the board. Their work defines a very interesting method to describe the commands to be executed in terms of the symbols sensed on the whiteboard. Instead of hard-coding the combination of symbols required, i.e. a situation, to trigger actions, they use the Prolog language to specify the combination of symbols which constitutes, say, a 'print' command. In their case, the information from the board recogniser is passed as a set of Prolog facts. The user can later define Prolog rules to analyse the contents of the board and so trigger the desired action. Our work applies a similar approach to defining reactive environment behaviour rules. The language chosen in our case, CLIPS, offers, in our opinion, better expressive power than Prolog for ECA rule specification. The following sub-sections offer some background on Production Systems and the CLIPS programming language in particular.


3.1. Production Systems

A production system codifies knowledge as a set of situation-action (or production) rules. The situation component is conventionally referred to as the condition, antecedent or left-hand-side (LHS). It takes the form of predicates applied to values of attributes of a specified object. The action component, also referred to as the consequent or right-hand-side (RHS), produces new knowledge or triggers an action. If the only action allowed were to add a fact to the working memory, production rules would be essentially logical implications; in general, however, greater flexibility is allowed. For example:
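A simple rule of this kind, in illustrative IF-THEN form (the rule itself is hypothetical), would be:

   IF the temperature of room R exceeds 30 degrees AND room R is occupied
   THEN switch on the fan of room R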





The core components of a Production System are:



  • RuleBase: a set of production rules.

  • Rule Interpreter (or Inference Engine): a program that selects and applies rules whose condition parts match the current situation.

  • Working Memory: contains propositions (facts) about some entities that the system reacts to.

There are two primary types of production systems: data-driven (forward chaining) and goal-directed (backward chaining). In the context of this work, forward-chaining production systems will be used. Their main features are:



  • Data driven: what the system does is determined by the current situation.

  • Rule Selection: the rules chosen are those whose conditions match the current situation.

  • Iterative: repeated execution of a match-execute cycle is used to choose and fire rules.

There are several advantages to using a production system rather than a procedural programming language to represent situation-action rules:



  • Procedural Representation: the knowledge is represented in a form that indicates how it is to be used.

  • Modularity: each chunk of knowledge is encoded as a separate rule. It is comparatively easy to add or remove individual pieces of knowledge.

  • Explanation: it is easy to add a facility enabling the system to explain its reasoning to the user.

  • Similarity to Human Cognition: the way they operate has a closer resemblance to human cognition than the behaviour of other computer languages does.


3.2. CLIPS: a Production Systems Programming Language

CLIPS (C Language Integrated Production System) [NASA99] is a forward-chaining production system language developed by NASA. CLIPS consists of three fundamental components:



  • Facts: combinations of data fields describing the entities the system reasons about; they are asserted and retracted as the state of the world changes.

  • Rules (or knowledge): divided into an IF (LHS) and a THEN (RHS) portion; they fire when their IF portions are satisfied by the current facts.

  • An Inference Engine: a cognitive processor that makes inferences by deciding which rules are satisfied by the facts.

CLIPS's inference engine repeatedly executes a match-execute cycle:



  • Match step: scans through rules to find one whose condition part is matched by the current state of the environment. The RETE algorithm, a very efficient mechanism to solve the difficult many-to-many matching problem, is used to process rules.

  • Execute step: performs the operation specified by the action part of the rule found to match.

This cycle continues until either no rule matches or the system is explicitly stopped by the action part of a rule. The following three conflict resolution strategies, in the order given, are employed when more than one rule is found to match:



  • Recency: choose the rule whose condition part has been most recently satisfied.

  • Refractoriness: only allow a rule to successfully match against a particular set of facts once.

  • Salience: the programmer can assign a numerical priority to rules.
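As a minimal sketch of the latter strategy (the rule and fact names are hypothetical), a programmer raises a rule's priority as follows:

   (defrule fire-alarm-overrides
      (declare (salience 100)) ; considered before any matching rule of lower salience
      (fire-alarm-event)
      =>
      (assert (trigger stop-music-action)))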

CLIPS is written in ANSI C, hence being very portable. It is designed for full integration with procedural languages such as C++ and Java. In addition to being used as a stand-alone tool, CLIPS can be called from a procedural language, perform its function, and then return control back to the calling program. Likewise, procedural code can be defined as external functions and called from CLIPS; when the external code completes execution, control returns to CLIPS. CLIPS offers rich expressiveness for programmers to write facts and rules. It enables the use of a rich set of relational predicates (e.g. =, >) over fact instances and connectives (e.g. AND, OR, NOT) to build compound conditions. More importantly, it also permits the use of variables and of conditions not related to facts in the situation side of a rule, e.g. (current_time < 12:23pm). Fact fields can be strings, numbers expressed in floating point, or lists of these types.
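The following sketch shows these facilities together; the event templates, slot names and thresholds are our own hypothetical examples, not part of CLIPS itself:

   (deftemplate temperature-event (slot room) (slot value))
   (deftemplate occupancy-event (slot room) (slot people))

   (defrule open-window
      ; the variables ?r, ?t and ?p join two facts on the same room,
      ; and a test applies relational predicates under an AND connective
      (temperature-event (room ?r) (value ?t))
      (occupancy-event (room ?r) (people ?p))
      (test (and (> ?t 30.0) (> ?p 0)))
      =>
      (assert (trigger open-window-action ?r)))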


It is very interesting to point out that the operation of a forward-chaining production system such as CLIPS presents a clear resemblance to the modus operandi of a sentient system: both wait until a pre-defined situation (a rule or composite event pattern) is matched to trigger an action. This has been our main motivation for choosing the CLIPS programming language for the implementation of ECA rule-based agents. Section 5 provides a detailed example of how to use CLIPS to specify a complex composite event-action pattern. In the implementation of this work, in fact, Jess (the Java Expert System Shell) [Friedman01], a clone of CLIPS entirely written in Java, will be used. The Jess inference engine is fully compatible with CLIPS, although it provides a simpler integration with the Java language. In this work, only the declarative features of CLIPS subsumed by Jess will be used. ECAgents are written in Java and embed a Jess inference engine that understands CLIPS-mapped behavioural patterns.


4.ECA Agents for User Expectations Fulfilment

Section 1 showed that context-aware application developers, despite their domain-specific expertise, are incapable of providing solutions that fully satisfy the expectations of reactive environment inhabitants. It concluded that the only way to make sentient applications meet different users' individual needs is to guarantee that users themselves are involved in the behaviour specification process. Section 2 analysed different approaches targeting user involvement in the definition of Event-Condition-Action rules. Unfortunately, none of the approaches described offered an open and generic enough solution to this issue. Mechanisms are necessary for the construction of complex composite condition-action patterns involving events generated by diverse context sources and generic, not only pre-determined, actions. Moreover, tools, e.g. a sophisticated graphical user interface, are required for users to be able to build such complex patterns in a still simple way. Section 3 gave some hints on the benefits of using a declarative language, CLIPS in particular, for denoting behaviour. This section illustrates a solution to what may be considered the holy grail of Sentient Computing, i.e. making reactive environments enact the right action at the right time for users.


In our model, every entity (i.e. user, physical space or object) whose operation is to be enhanced by the software services provided by the surrounding sentient system must be associated with an ECA Agent, or ECAgent for short. An ECAgent embodies the behavioural rules associated with an entity. These rules, defined by the entity's manager, delimit how the entity expects the sentient system to react to context changes and govern its interactions with it. ECAgents are active software objects running continuously. Their state, i.e. the rule-base and set of facts associated with an entity, is made persistent in order to cope with transient failures. Every user in a sentient environment manages their associated ECAgent and the agents representing the objects belonging to them. System administrators define the ECA rules associated with shared entities' ECAgents (e.g. a room or a building). In order to facilitate the insertion, modification or deletion of behaviour associations, users are offered a GUI front-end that enables the definition of complex ECA relations. The emphasis of our approach is on permitting non computer-literate entity managers to configure and personalise their working or living space's reactions to their activities. The GUI front-end updates the rule-base of an agent and maps newly input rules into the underlying CLIPS programming language. The heart of the Java-implemented ECAgents is a Jess inference engine undertaking rule-based reasoning. Section 5 provides more details on the precise rule specification language, hidden from users by the GUI front-end, and its mapping to CLIPS. Section 6 gives more details on the ECAgent's rule specification front-end. The following sub-section explains how ECAgents are integrated within our CORBA-based sentient architecture.


4.1. ECAgents System Architecture

The construction of an ECAgent's rule-base requires that its front-end application have access to descriptions of all the events available within a sentient physical space. Moreover, such an application must also be aware of all the distributed services available for action triggering and, notably, the descriptions of their interfaces. Thus both a repository containing event descriptions and another containing object interface metadata are required. A powerful GUI tool should then be able to build complex composite relations over the available event types and plug them into actions performed by advertised services. In what follows, the different distributed components that make ECAgents' operation possible are overviewed.


Sentient systems, as previously remarked, are fully distributed. Therefore, the adoption of a middleware technology that eases distributed object implementation, isolating the programmer from the underlying low-level networking details, is advisable. CORBA is our distributed computing middleware of choice because of its interoperability and its multi-language and multi-platform features. CORBA defines specifications for both an Event Type Repository and an object Interface Repository; these two repositories were noted above as essential pre-requisites for ECAgent construction. In addition, CORBA offers a Dynamic Invocation Interface (DII) that permits the construction of CORBA calls to a remote object without compile-time knowledge of its interface. These three tools supply the essential middleware infrastructure needed to support open and generic event-condition-action specification mechanisms.
ECAgents integrate seamlessly with the Sentient Information Framework, an event-driven sentient application construction model we defined in previous work [Ipiña00]. In this model, the distributed components processing context information are classified into three categories, namely Context Generators, Context Abstractors and Context Channels.
Context Generators (CG) are sources of context information in event form. They usually encapsulate either hardware or software sensors, such as an Active Badge sensor or an Active Badge-based location server. They isolate context capture from context use, providing synchronous and asynchronous interfaces through which interested parties access the generated contextual information. They act as contextual event sources, pushing monitored objects' state changes to their associated Context Channels.
Context Abstractors (CA) behave like proxy CGs. Their mission is to map low-level contextual events into higher-level events that can directly drive applications. Among the tasks they perform are interpreting fields of incoming events, or aggregating several incoming events and generating from them a new higher-level event. For instance, if several location events are received which reflect that a group of people have been in the same room for more than 5 minutes, a Meeting Proceeding Context Abstractor could convey a meeting_ON event to interested applications. In fact, the ECAgent concept was derived from our need to automate CA component creation: a CA is a simplified version of an ECAgent that evaluates a complex composite event pattern and produces a fixed action, i.e. it generates a higher-level event notification.
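A minimal sketch of such a CA's single rule, in the CLIPS notation introduced in Section 3 (the template, the time-threshold encoding and the event names are hypothetical simplifications), could read:

   (deftemplate location-event (slot name) (slot where) (slot timestamp))

   (defrule meeting-proceeding
      ; two distinct people sighted in the same room across a span of at least 300 seconds
      (location-event (name ?p1) (where ?room) (timestamp ?t1))
      (location-event (name ?p2) (where ?room) (timestamp ?t2))
      (test (neq ?p1 ?p2))
      (test (>= (- ?t2 ?t1) 300))
      =>
      (assert (meeting_ON ?room)))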
Context Channels (CC) are in charge of receiving event subscriptions, undertaking event filtering and communicating events to registered parties. Consumer event subscriptions are accompanied by an event-filtering pattern, or event template, that permits the channel to determine when an event is of interest to a consumer. On event arrival, the channel evaluates the previously registered consumer filters and transmits the filtered events only to the suitable consumers. CCs are implemented as OMG Notification Channels [OMG98] in our architecture. Their mission is to decouple context sources from sinks, removing the context source's need to deal with event subscription, communication and filtering duties. In this way, a good trade-off between network bandwidth and the processing load of event producers/consumers can be established. Figure 1 illustrates some CGs pushing events to their associated CCs.

A centralised Event Type Repository (ETR) server (Figure 1) is introduced that records available event type descriptions and references to the Notification Channels where those events are communicated. On bootstrap, Context Generators contact the ETR to advertise the descriptions of the events they generate and the references to the Notification Channels where those events will be pushed. The ETR indexes the data received by event type and stores the metadata associated with them. Client applications contact the ETR server to query for a specific event type's metadata and for the Notification Channels receiving those events. Then, client applications can register with the Notification Channels providing their contextual data of interest. On registration, event consumers submit an event pattern that will be used by the Notification Channel to convey to the consumer only the events that fulfil it.


The LocALE Lifecycle Manager [Ipiña+01], LCManager for short, is a centralised server that is aware of all the services, indexed by Interface Repository ID, available in a network. Moreover, it records the locations where all the services are running. Its duty is to control the physical location, i.e. the host, and the lifecycle, i.e. activation, migration and deactivation, of services in a network. As a result of contextual events, sentient systems often react by activating a service. The LCManager provides the infrastructure required by these systems for the transparent on-demand activation, migration or destruction of services at the location specified. Without LocALE, sentient applications could only react by triggering actions on already running services.
The Interface Repository Server shown in Figure 1 is an on-line database of object definitions offering the standard InterfaceRepository IDL interface defined by the OMG. It enables an application to query the operations and attributes offered by a CORBA interface. Assisted by the Interface Repository and the standard Dynamic Invocation Interface offered by CORBA, applications can dynamically build invocations to objects without prior compile-time knowledge of their interfaces. Unfortunately, interface definitions alone give users no knowledge of the semantics of the services. Service semantics are therefore described in textual form, making them accessible to users while constructing their specifications.


4.2. ECAgents Interactions

As mentioned earlier, a GUI front-end is employed in the creation of the ECA rules of an entity's agent, making it easy for non computer-literate users to define the rules of conduct desired from the reactive environment. When an ECAgent GUI front-end is started, it contacts the ETR server to get the metadata associated with all the known event types. For each event type, the GUI front-end displays in a List Box widget an event template representation of the event. An event template is formed by an event name followed by a list of attributes. A user may select any of these event templates and edit it to set values for its attributes. An attribute of an event can be set to a given value, a wildcard (meaning that the value of that field does not matter) or a variable. After an event template has been edited, an atomic event pattern is added to a second List Box widget defining an entity's situation. Atomic events can be combined with connectives such as OR, AND or NOT provided by the front-end's controls.
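For instance (a purely illustrative edit of the location-event template used in Section 5), a user could fix the name attribute, bind the place to a variable for reuse in other patterns, and wildcard the timestamp:

   location-event(name == "Diego", where ?room, timestamp *)

The * wildcard shown is our illustrative rendering; the == constraint and the ?room variable follow the specification language of Section 5.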


Once the antecedent definition of a rule is concluded, the user must select an action or group of actions to be triggered as a result of the condition's fulfilment. Actions can be selected from a collection of pre-defined, commonly used actions (e.g. send_email, show_webpage or play_song), or an arbitrary script can be selected by the user. Moreover, the user may request the service of the LocALE Manager for the activation or migration of a service. Requests on objects, created on demand or already available, can be dynamically built using the CORBA Interface Repository and the CORBA Dynamic Invocation Interface. A user simply needs to specify, through adequate GUI controls, the object to be invoked, the operation to be performed and the values for the operation's parameters. The LocALE Manager will take care of selecting the appropriate object to direct the request to.
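As a sketch of the arbitrary-script case (the script path shown is hypothetical), the action part of a rule could simply use the RUN_SCRIPT form of the specification grammar of Section 5:

   RUN_SCRIPT(/home/dl231/bin/notify-visitors.sh)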
Once an Event-Condition-Action rule definition has been concluded, it is submitted to the agent, which appends it to the set of previously defined rules. An important aspect of rule definition is the definition of rules' lifetimes: there may be ECA rules that we want to be permanently active and others that we just want to be executed once. The ECAgent front-end provides facilities to define such behaviour. An ECAgent manager may also de-activate ECA rules through the GUI front-end; as a consequence, the front-end application will un-register from the pertinent Notification Channels and deactivate, through LocALE, services activated on demand.
ECAgents (see Figure 1) are both sinks of sentient information and action triggers. They consume contextual events, apply the conditional part of their rules to them and, when any of these conditions is matched, trigger the action part of the associated rule. Thanks to the adoption of a rule-based production system language, the built-in inference engine embedded in the agent undertakes the reasoning process, without requiring the programmer to implement the complicated event-condition monitoring process. Therefore, every ECAgent presents a common internal architecture whose implementation is valid for any kind of entity. The event listener module within an ECAgent receives event notifications from previously registered Context Channels. The event mapper transforms the incoming events into CLIPS facts that are passed to the embedded CLIPS inference engine. The CLIPS engine hands off to the Action Trigger Module the description of the action to be undertaken and the parameters that permit it to take place. The Action Trigger Module embeds the intelligence necessary to translate the fact representing the CLIPS-passed action into the suitable action. The next section offers an overview of the mapping between our chosen ECA specification language and the CLIPS programming language.

Figure 1: ECAgents interactions

5.ECA Rule Specification Language and Mapping to CLIPS

The major complexity of the proposed system lies in the mapping of ECA rules to CLIPS rules. The event specification language supported permits composite event specifications over both contextual events and some pre-defined events, such as timing events. It incorporates a facility for using variables in the values of an event pattern. A BNF of such a grammar is drafted below.


As an illustration of the expressive power of CLIPS, the following complex ECA rule is first expressed in natural language and then in CLIPS:
Rule: play-classical-music

IF Diego is in the lab AND
   there is nobody else around AND
   (Diego is typing with high intensity OR
    he has at least logged into the LAN)
THEN
   start playing music from Diego's classical MP3 playlist
We assume that the following three contextual events, as provided by Notification Channels, have been mapped to CLIPS facts in the following form:
(location-event (name "Diego") (where "Room 10") (timestamp 122324))

(keyboard-activity-event (host "worthingtons") (level 0.7) (timestamp 122326))

(user-loggedin-event (username "dl231") (host "worthingtons") (timestamp 122327))
The CLIPS rule implementing the above ECA rule would be:
(defrule play-classical-music "When does Diego want to listen to classical music?"
   (location-event (name "Diego"))
   (not (location-event (name ~"Diego")))
   (or (and (keyboard-activity-event (level ?x))
            (test (> ?x 0.6)))
       (and (user-loggedin-event (username ?y))
            (test (eq ?y "dl231"))))
   =>
   (assert (trigger play-music-action user "dl231")))

In our ECA rule specification language, from which the above CLIPS code is generated, the same rule reads:

location-event(name == "Diego") AND
not(location-event(name ?N) AND test(?N != "Diego")) AND
((keyboard-activity-event(level ?x) AND test(?x > 0.6)) OR
 (user-loggedin-event(username ?y) AND test(?y == "dl231")))
DO
NotifyEvent(play-music-event(user "dl231"))



[Draft BNF of the ECA rule specification grammar: the nonterminal names were lost in extraction, so only its outline is recoverable. The surviving fragments show that a rule takes the form situation DO actions; that situations are built from event patterns, NOT, and TEST(...) predicates over variables, combined with the connectives AND, OR and THEN; that the relational operators are ==, !=, <, <=, >, >= and ~; that variables are written ?identifier; that a sequence(...) construct is provided for temporal composition; that comments are delimited by /* */; and that actions take the forms NOTIFY_EVENT(...) and RUN_SCRIPT(...).]
As a final, very complex example of the kind of rule specifications that can be carried out, consider: "Notify a Meeting_ON event when there are more than two people in the meeting room and the level of noise is high". In the same spirit, the following specification over TRIPevent sightings, produced by the TRIP visual-tag recogniser, notifies a peopleTogether event whenever two distinct tagged users are sighted by the same camera less than 5 units apart:

ECArule = "TRIPevent((TRIPcode ?user1), (cameraID ?camera1), (pose.x ?x1), (pose.y ?y1), (pose.z ?z1)) AND
           TRIPevent((TRIPcode ?user2), (cameraID ?camera1), (pose.x ?x2), (pose.y ?y2), (pose.z ?z2)) AND
           test(<> ?user1 ?user2) AND
           test(> 5 (sqrt (+ (* (- ?x1 ?x2) (- ?x1 ?x2))
                             (* (- ?y1 ?y2) (- ?y1 ?y2))
                             (* (- ?z1 ?z2) (- ?z1 ?z2)))))
           =>
           NotifyEvent(peopleTogether(?user1, ?user2))"

6.ECAgents in use




7.Conclusion

8.References





[Brown+98] Brown P.J., Bovey J.D. and Chen X. “Context-aware Applications: from the Laboratory to the Marketplace”, IEEE Personal Communications

[CLIPS] CLIPS Manual

[Dey+00] Dey A.K. and Abowd G.D. “CybreMinder: A Context-Aware System for Supporting Reminders”

[Dey00] Dey A.K. “Understanding and Using Context”

[Friedman01] Friedman-Hill E. “Jess: the Java Expert System Shell”

http://herzberg.ca.sandia.gov/jess/



[Gruber+99] Gruber R.E., Krishnamurthy B. and Panagos E. “High-Level Constructs in the READY Event Notification System”

[Hanson+92] Hanson E.N. and Widom J. “An Overview of Production Rules in Database Systems”

[Ipiña+01] López de Ipiña D. and Lo S. “LocALE: a Location-Aware Lifecycle Environment for Ubiquitous Computing”, ICOIN-15, Beppu, Japan, February 2001

[Ipiña00] López de Ipiña D. “Building Components for a Distributed Sentient Framework with Python and CORBA”, IPC8, Arlington, VA, January 2000

[Krishnamurthy+95] Krishnamurthy B. and Rosenblum D.S. “Yeast: A General-Purpose Event-Action System”, IEEE Transactions on Software Engineering, Vol. 21, No. 10, October 1995

[López01] López F. “NASA CLIPS RULE-BASED LANGUAGE”, http://www.siliconvalleyone.com/clips.htm

[McCarthy+99] McCarthy J.F. and Anagnost T.D. “EventManager: Support for the Peripheral Awareness of Events”

[NASA99] “CLIPS: A Tool for Building Expert Systems”, http://www.ghg.net/clips/CLIPS.html, August 99

[OMG98] OMG, Object Management Group, “Notification Service – Joint Revised Submission”, November 1998

[Pascoe99] Pascoe J. “The Context Information Service Architecture”

[Salber+99] Salber D., Dey A.K. and Abowd G.D. “The Context Toolkit: Aiding the Development of Context-Enabled Applications”

[Stafford-Fraser+96] Stafford-Fraser Q. and Robinson P. “BrightBoard: A Video-Augmented Environment”, CHI’96

An open issue is the learning of rules. Researchers at the Computer Science Department of the University of Essex working on Intelligent Buildings distinguish three generations of such buildings: second-generation buildings operate on fixed rules, while third-generation ones are capable of learning their rules.




