AGENT TECHNOLOGY
John-Jules Ch. Meyer, Utrecht University
Benjamin W. Wah (ed.), Encyclopedia of Computer Science and Engineering
Copyright © 2008 by John Wiley & Sons, Inc. All rights reserved.
ABSTRACT
In this article an overview is given of the area of agent technology. First, in the introduction the concept of an agent is explained and some characteristics that are often found in the literature are given. Next the history of the field is outlined, starting with the philosophical background, the attempts to formalize the notion of an agent by means of formal logic, and the proposals for realizing agents through agent architectures, agent-oriented programming and software engineering. This is followed by a description of newer developments, where the emphasis is on multi-agent systems. The article concludes with sections on applications and further developments.
Keywords: agent technology, artificial intelligence, autonomy, intelligent agent, rational agent, cognitive agent, BDI model of agents, multi-agent systems, agent societies, agent communication, agent coordination, e-institutions, normative systems
INTRODUCTION
Agent technology is a rapidly growing subdiscipline of computer science, on the borderline of artificial intelligence and mainstream software engineering, that studies the construction of intelligent systems. It is centered around the concept of an (intelligent / rational / autonomous) agent. An agent is generally taken to be a software entity that displays some degree of autonomy: it performs actions in its environment on behalf of its user, but in a relatively independent way, taking initiatives to perform actions on its own by ‘deliberating’ over its options to achieve its goal(s).
Although there is no generally accepted precise definition of an agent, there is some consensus on the (possible) properties of agents (Wooldridge & Jennings, 1995, Wooldridge, 2002): agents are hardware- or software-based computer systems that enjoy the following properties:
- autonomy: the agent operates without the direct intervention of humans or other agents, and has some control over its own actions and internal state.
- situatedness: agents are situated in an environment: they sense it and act in it.
- reactivity: agents perceive their environment and react to it in a timely fashion.
- pro-activity: agents take initiatives to perform actions and may set and pursue their own goals.
- social ability: agents interact with other agents (and humans) through communication; they may coordinate and cooperate while performing tasks.
Other properties that agents may have are: mobility (the ability of an agent to move around in an electronic network and the Web in particular), veracity (the assumption that an agent will not knowingly communicate false information), benevolence (the assumption that an agent will always try to do what is asked of it), and rationality (the assumption that an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved, its beliefs permitting), cf. (Wooldridge & Jennings, 1995).
We thus see that agents have both informational and motivational attitudes, viz. they handle and act upon certain types of information (such as knowledge, or rather beliefs) as well as motivations (such as goals). Many researchers adhere to a stronger notion of agency, that of ‘cognitive’ agents: agents that realize the above properties by means of mentalistic attitudes, pertaining to some notion of a mental state and involving such notions as knowledge, beliefs, desires, intentions, goals, plans, commitments, etc. The idea behind this is that through these mentalistic attitudes the agent can achieve autonomous, situated, reactive, proactive and social behaviour in a way that mimics, or at least is inspired by, the human way of thinking and acting. So, in a way, we may regard agent technology as a modern incarnation of the old ideal of Artificial Intelligence (AI) to create intelligent artifacts. Indeed, some modern textbooks in AI (Russell & Norvig, 1995, Nilsson, 1998) even identify AI with the study of agents!
However, we may also regard agent technology as the next step following the currently very popular object technology in the form of object-oriented programming. Some researchers regard agents as a special kind of object, and some use the term ‘active objects’ for agents (cf. Wooldridge, 2002, p. 26-27). In principle, however, there are some fundamental differences between agents and objects. In object-oriented programming (OOP) objects are simply ‘used’: a call to a method of an object will, provided everything works normally, just be executed by that object. In some sense objects are passive entities that can be employed in a system. For agents this is different. An agent cannot be controlled directly by some program outside that agent (such as another agent). The agent has to be requested to perform a task or to provide a piece of information, and this request may be denied for reasons that concern that agent: perhaps it is too busy, the request is not in line with its own goals, or perhaps the agent is not even in the right ‘mood’. (We will later see how far these ‘human-like’ qualities of agents may go.) In short, one may say that the main difference between agents and objects is the lack of autonomy of the latter (an object has no control over its internal state and actions), whereas for agents autonomy is one of the defining properties.
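To make the contrast concrete, consider the following minimal Python sketch (with purely hypothetical class names and conditions): the object simply executes a method call, whereas the agent weighs a request against its own state and goals and may refuse it.

class Printer:
    # An object: a method call is simply executed.
    def print_document(self, doc):
        return "printing " + doc

class PrinterAgent:
    # An agent: a request is weighed against the agent's own state and goals.
    def __init__(self):
        self.busy = False
        self.goals = ["save_toner"]

    def request(self, task, doc):
        if self.busy or (task == "print" and "save_toner" in self.goals and len(doc) > 100):
            return "refuse"          # the agent may decline for its own reasons
        return "agree: printing " + doc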
It is also important to note that the property of situatedness that agents are supposed to possess sets them apart from the expert systems of the 1980s. Expert systems are not situated: they only get information (symptoms) as input (from a user) and yield information (a diagnosis) as output (to that user). They do not really sense an environment, nor do they act directly in such an environment.
While the initial research on agents focused on models of individual agents, more recent developments in agent technology emphasize models and architectures of multi-agent systems, in which multiple agents share the same environment and interact with it and with each other. Here we see the emergence of an interesting amalgam of the areas of (distributed) artificial intelligence and distributed computing in mainstream computer science. As a consequence one may discern a trend towards distribution (of resources, ‘intelligence’, reasoning) and delegation of tasks between agents. Below we will sketch the main research issues here, but first we turn to the start of agent technology: philosophical ideas about (human and artificial) agents acting autonomously in an environment, based on decisions involving their ‘mental states’, in particular pertaining to what I for the moment call, in plain terms, their knowledge and objectives.
PHILOSOPHICAL BACKGROUND
The field of agent technology emerged out of philosophical considerations on how to reason about courses of action, and human action, in particular. In analytical philosophy there is an area occupied with what is called practical reasoning, in which one studies so-called practical syllogisms that constitute patterns of inference regarding actions. By way of an example, a practical syllogism may have the following form (Audi, 1999):
Would that I exercise.
Jogging is exercise.
Therefore, I shall go jogging.
Although this has the form of a deductive syllogism in the familiar Aristotelian tradition of ‘theoretical reasoning’, on closer inspection it appears that this syllogism does not express a purely logical deduction: the conclusion simply does not follow logically from the premises. It rather constitutes a representation of a decision of the agent (going to jog), where this decision is based on mental attitudes of the agent, viz. his/her beliefs (‘jogging is exercise’) and his/her desires or goals (‘would that I exercise’). So, practical reasoning is “reasoning directed toward action – the process of figuring out what to do”, as (Wooldridge, 2000) puts it. The process of reasoning about what to do next on the basis of mental states such as beliefs and desires is called deliberation. The philosopher Michael Bratman has argued that humans (and, more generally, resource-bounded agents) also use the notion of an intention when deliberating their next action (Bratman, 1987). An intention is a desire that the agent is committed to and will try to fulfill until it believes it has achieved it or has some other rational reason to abandon it. Thus, we could say that agents, given their beliefs and desires, choose some desire as their intention, and ‘go for it’. As such we can view Bratman’s theory as an extension of the more general theory of the intentional stance of Daniel Dennett (Dennett, 1987). This stance treats complex entities (including humans, animals and computers) as if they were rational agents that deliberate on their beliefs and desires to decide their next actions. As an aside we remark that Dennett’s intentional stance is closely related to similar theories in psychology and biology, and ethology in particular, where humans, including children, and animals are supposedly endowed with a theory of mind about other such entities, and are thus able to reason about the mental states of these entities (Wilson & Keil, 1999, p. 838-841). Bratman’s philosophical theory has been formalized in several studies, in particular the work of (Cohen & Levesque, 1990), (Rao & Georgeff, 1991) and (van der Hoek et al., 1998), and has led to the BDI (Belief-Desire-Intention) model of intelligent or rational agents (Rao & Georgeff, 1991). Since the beginning of the 1990s researchers have turned to the problem of realizing artificial agents. We will discuss this development in some more detail below.
AGENT LOGICS
As mentioned above, to get a more precise grasp on the philosophical considerations of Bratman, AI researchers started to use formal tools in the form of especially devised logics for formalising the main ideas. These approaches are based on modal logics, logics that concern modalities such as knowledge, belief, time, obligation, and action. Modal logics have been proposed in philosophy to analyze the properties of these modalities. Semantics of these logics are usually provided by ‘possible world semantics’ or ‘Kripke-semantics’ (Kripke, 1963), in which modal operators are interpreted by means of an accessibility (or possibility) relation, relating possible worlds according to the modality at hand. So, for instance, for temporal logic, this accessibility relation expresses the flow of time, while for epistemic logic (the logic of knowledge) the accessibility relation points at possible alternative worlds that the agent deems possible on the basis of its (lack of) knowledge. For example, a knowledge operator then expresses truth in all epistemically alternative worlds designated by the accessibility relation (cf. Chellas 1980, Meyer & Van der Hoek, 1995).
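For instance, the standard possible-worlds truth condition for a knowledge operator K_i of agent i reads, in the usual notation,

M, w \models K_i\,\varphi \iff M, w' \models \varphi \text{ for all } w' \text{ with } (w, w') \in R_i,

where R_i is the accessibility relation of agent i; analogous clauses interpret temporal, doxastic and deontic operators.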
Cohen & Levesque (Cohen & Levesque, 1990) used a (linear-time) temporal logic for this. In their work, which has been very influential in subsequent agent research, they try to capture the notion of intention in terms of more primitive notions such as belief, goal and action. So the culmination of their paper is a definition of intention in terms of these notions. Actually they give two, since there are two natural notions of intention: intention to do/perform an action and intention to be in a particular state or situation. So for instance, an example of the former is the intention to go on a journey, while an example of the latter is the intention to be in a particular city, say Paris. Of course, there are relations between these two kinds of intentions, but they are distinct concepts in principle.
Next, Rao & Georgeff came up with their famous BDI logic, sometimes also called BDI-CTL, since it is based on the (branching-time) temporal logic CTL (computation tree logic), a logic devised in mainstream computer science to reason about concurrent processes (Emerson, 1990). Rao & Georgeff’s approach to formalizing Bratman’s philosophy is quite different from that of Cohen & Levesque. Apart from the fact that they use a branching-time logic (catering for possible choices of actions by the agent in a more direct fashion), they also do not build a definition of intention in terms of other notions. Rather they introduce in their logic three independent operators (‘modalities’) for beliefs, desires (or goals) and intentions, and then they start analyzing possible relations between these three operators, captured by axioms. For instance, they propose to have an axiom Intend(p) → Desire(p), indicating that an intention is a (special) kind of desire.
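A further example is a (schematic) belief–goal compatibility axiom,

\mathrm{Desire}(p) \rightarrow \mathrm{Belief}(p),

roughly expressing that the agent only desires what it believes to be achievable; the precise formulations in (Rao & Georgeff, 1991) restrict such axioms to particular classes of (path) formulas.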
Finally we mention that (van der Hoek et al., 1998) propose yet another approach to formalizing Bratman’s theory of intentions. This approach, called KARO (for Knowledge, Abilities, Results and Opportunities, comprising the core of the logic), is based not on temporal logic but on dynamic logic, which is a logic proposed in mainstream computer science to reason about programs (Harel et al., 2000). Here it is used to reason about the actions of agents. Furthermore several BDI-style operators are added such as knowledge, belief, desire, goal, and commitment to specify the behaviour of agents.
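In dynamic logic one writes [\alpha]\varphi for ‘after every (terminating) execution of action \alpha, \varphi holds’ and \langle\alpha\rangle\varphi for its dual. KARO indexes actions by agents and adds, among other operators, ability, so that, roughly (the exact KARO syntax differs in detail), a formula such as

\langle do_i(\alpha)\rangle\varphi \wedge A_i\,\alpha

expresses that agent i has the opportunity to bring about \varphi by performing \alpha and is moreover able to perform \alpha.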
AGENT ARCHITECTURES
Next we turn to the issue of constructing agent-based systems. Following the philosophical and logical work on intelligent agents discussed above, researchers have embarked upon the enterprise of realizing agent-based systems. Actually, the first architecture for artificial agents was given by the philosopher Bratman himself, together with two colleagues from artificial intelligence, David Israel and Martha Pollack, who devised the IRMA architecture (Bratman et al., 1988). At around the same time the influential BDI architecture (Rao & Georgeff, 1991) and the closely related Procedural Reasoning System (PRS) (Georgeff & Lansky, 1987) were proposed, which in turn inspired the dMARS architecture (d’Inverno et al., 1997). In brief, in the BDI architecture the interpreter executes a sense-reason-act (or deliberation) cycle, repeating the following: getting (sense) input, leading to possible belief updates, in turn leading to the generation of new desires / goals, which are then filtered to intentions. These intentions give rise to the execution of actions, usually by means of plans from a pre-compiled plan library. The BDI architecture and its derivatives are called deliberative agent architectures.
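The cycle can be rendered schematically as follows (a minimal Python sketch under simplifying assumptions: agent and environment are hypothetical objects offering exactly the methods used here; real interpreters such as PRS refine each of these steps considerably).

def bdi_deliberation_cycle(agent, environment):
    # Schematic sense-reason-act loop of a BDI-style agent.
    while True:
        percepts = environment.sense()                                        # sense
        agent.beliefs = agent.update_beliefs(agent.beliefs, percepts)         # belief revision
        desires = agent.generate_options(agent.beliefs, agent.intentions)     # new desires / goals
        agent.intentions = agent.filter(agent.beliefs, desires, agent.intentions)  # commit to intentions
        plan = agent.plan_library.select(agent.intentions, agent.beliefs)     # means-ends reasoning
        environment.execute(plan.next_action())                               # act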
However, there are also other agent architectures. Reactive agent architectures perform no deliberation or any other kind of reasoning; they simply react to the environment. A typical example of such a reactive agent architecture is the subsumption architecture proposed by (Brooks, 1991). In essence, this architecture consists of layered modules running in parallel on input from the environment. However, the output of some layers may inhibit that of other layers, according to a hierarchy (called a subsumption hierarchy). In this way a priority relation is imposed on the behaviours generated by the modules. E.g., in robotic applications, collision avoidance takes priority over exploring and wandering around.
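The effect of the inhibition mechanism can be illustrated by the following toy Python sketch (not Brooks’s actual formulation: here the behaviour modules are simply tried in priority order, whereas in the real architecture they run concurrently and inhibition acts on specific connections).

def subsumption_step(percepts, layers):
    # layers: behaviour modules ordered from highest to lowest priority,
    # e.g. [avoid_collision, explore, wander]; each maps percepts to an
    # action, or to None when it does not fire on these percepts.
    for behaviour in layers:
        action = behaviour(percepts)
        if action is not None:
            return action   # this behaviour suppresses (subsumes) all lower layers
    return None             # no behaviour fired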
One may also combine reactive behaviour and deliberation into one system, giving rise to hybrid agent architectures. Typically, these are also layered architectures, with some layers dealing with reactive behaviours and others dealing with deliberative behaviour(s). Horizontally layered architectures resemble the subsumption architecture: the layers run in parallel, after which possible conflicts in their outcomes have to be resolved in some way by means of a supervisory control framework. On the other hand, there are also vertically layered architectures, in which the input from the environment enters the top (or bottom) layer, input/output then pass through all layers successively, and the final output comes from the bottom (or top, respectively) layer. The advantage of vertically layered architectures is that ‘conflicts’ between layers are solved ‘on the fly’, so to speak, while horizontally layered architectures will typically react faster (because the layers run in parallel). An example of a horizontally layered architecture is the TouringMachines architecture proposed by (Ferguson, 1992), while an example of a vertically layered architecture is Müller’s InteRRaP (Müller, 1997). For more about agent architectures the reader is referred to (Wooldridge, 2002).
AGENT-ORIENTED PROGRAMMING AND SOFTWARE ENGINEERING
Agent architectures are suitable for conveying the general ideas about the building blocks of agents, but of course they are hard to use for building actual agents from scratch in some general-purpose programming language. To get a more systematic grip on the construction of agents, new developments came into existence. Some researchers took it upon themselves to provide methods and techniques (sometimes erroneously called a ‘methodology’) for constructing agents in a principled way, in the same vein as has been done for ‘traditional’ ways of programming such as object-oriented programming. So, typically, one discerns phases in the development of an agent system such as the (requirements) analysis phase, the design phase and the actual implementation phase. This research area is generally referred to as agent-oriented software engineering (AOSE, Ciancarini & Wooldridge, 2001, Bergenti et al., 2004). Several methods have been proposed in the literature so far. We mention here DESIRE (Brazier et al., 1995), GAIA (Wooldridge et al., 1999), AUML (Odell et al., 2001), TROPOS (Castro et al., 2002), and OPERA (Dignum, 2004). Most of these methods are meant to ultimately implement agent systems using generic, general-purpose programming languages such as JAVA or C++. However, some researchers feel that using agent concepts (such as those in the BDI model) in the analysis and design phases, but not in the implementation phase (since general-purpose languages do not contain agent notions), hampers a smooth and correct construction of these systems (e.g. Dastani et al., 2004).
So these researchers devised dedicated agent-oriented programming (AOP) languages to program agents directly in terms of mentalistic notions in the same spirit as the ones mentioned above. So these languages typically contain beliefs, goals, commitments, and/or plans as built-in programming concepts (and nowadays mostly with a precise and formal semantics). The first researcher who proposed this approach was Yoav Shoham with the language AGENT0 (Shoham, 1993). This language enables the programmer to program timed commitments of agents, based on their beliefs and messages (such as requests to perform actions) from other agents. Other languages include AgentSpeak(L) / Jason, (Concurrent) METATEM, CONGOLOG, JACK, JADEX and 3APL/2APL (Fisher, 1994, Hindriks et al., 1999, de Giacomo et al., 2000, Bordini et al., 2005). Although all these languages have in common that they employ a number of mentalistic notions as mentioned above, they still differ widely from each other. For instance, AgentSpeak(L) (and its JAVA-based interpreter JASON) can be viewed as a programming language based on (a simplified version of) the PRS architecture, and can deal with beliefs and goals, and can generate plans on the basis of those by means of a plan library via an intricate mechanism using triggering events. METATEM is based on the idea of making logical specifications executable, which restricts, of course, the formulas one can use in the specification. These formulas are a certain subset of temporal logic mixed with other modalities such as e.g. knowledge. In a way one can view this approach as ‘temporal logic programming’. The (CON)GOLOG family of languages is based on the situation calculus, a popular specification formalism to reason about action and change, proposed by AI pioneer John McCarthy (McCarthy & Hayes, 1969). In essence, a CONGOLOG program, which is basically an imperative-style program, provides directions while doing planning to reach a goal (‘sketchy planning’). JACK is an extension of JAVA with agent-oriented constructs, while JADEX is a BDI reasoning engine implemented in JAVA that provides an execution environment and an Application Platform Interface (API). 3APL and its successor 2APL are rule-based languages, originally proposed as simplifications of AgentSpeak(L), with rules for plan generation (given goals and beliefs) and plan revision (given beliefs, and – in the case of 2APL - to be applied only when current plans fail).
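To give a flavour of such rule-based languages: a plan-generation rule roughly says ‘if you have this goal and these beliefs hold, adopt this plan’. The following is a schematic Python rendering of that idea (with hypothetical goal, guard and plan names; it is not the concrete syntax of 3APL/2APL or AgentSpeak(L)).

# One plan-generation rule: a goal to achieve, a guard on the beliefs, a plan body.
rules = [
    {"goal": "have(coffee)",
     "guard": lambda beliefs: "machine_works" in beliefs,
     "plan": ["walk_to_machine", "insert_coin", "press_button"]},
]

def generate_plans(goals, beliefs, rules):
    # Adopt, for each goal, the plan of the first rule whose guard holds in the current beliefs.
    plans = []
    for goal in goals:
        for rule in rules:
            if rule["goal"] == goal and rule["guard"](beliefs):
                plans.append(list(rule["plan"]))
                break
    return plans

# e.g. generate_plans(["have(coffee)"], {"machine_works"}, rules)
#      yields [["walk_to_machine", "insert_coin", "press_button"]]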
MULTI-AGENT SYSTEMS AND AGENT SOCIETIES
Agent-based systems become truly interesting and useful if we have multiple agents at our disposal sharing the same environment. Here we have to deal with a number of more or less autonomous agents interacting with each other. Such systems are called multi-agent systems (MASs) (Wooldridge, 2002), or sometimes agent societies. The field of MASs as such can be viewed as emerging out of that of Distributed AI, but many computer scientists interested in distributed / parallel computing and concurrent programming more generally have also been drawn to it. The advantages of a distributed way of solving problems and performing complex tasks are the following:
- computations can be done in parallel (so, in principle, they can be done faster);
- computations and reasoning can be done locally by the agents, with limited information about the environment;
- the distributed nature of a MAS may improve robustness and reliability, since other agents may take over from failing agents.
However, there is also a price to be paid: performance may be lower (suboptimal) whereas centralized single-agent systems may perform optimally in the ideal case that all information and resources are available. On the other hand, these centralized systems typically are more brittle with respect to robustness and reliability, and they may take a very long time to come up with the optimal solution. Another possible price to be paid concerns the communication overhead in a MAS to let the agents cooperate and coordinate properly. (Cf. Weiss, 1999).
AGENT INTERACTION: COORDINATION AND COMPETITION
When considering multi-agent systems, especially when we want to design such systems, one needs to think about how the agents will mutually interact. Several kinds of interaction are possible. Generally this depends on the role the agents play in the system. In fact, in most current agent-oriented software engineering methodologies a multi-agent system is designed by first considering the organizational or social structure of the system, in which the roles of agents literally play a pivotal role. These roles determine all kinds of (social) properties of the agents playing a particular role, such as objectives, rights, and norms (Dignum, 2004). Because of dependencies and power relations between roles, roles also ultimately determine how (role-playing) agents will / should interact, i.e., which coordination type they follow. In (Dignum et al., 2002) three of these are distinguished: market, network and hierarchy. In markets agents are self-interested and compete with each other; in networks there is mutual interest and cooperation, where trust in other agents is a crucial factor; in a hierarchy there are dependency and power relations, and delegation of objectives and tasks takes place. Note that even in the case of cooperation it is not a priori obvious how autonomous agents will react to requests from other agents: since they ultimately have their own goals, it may be the case that they do not have the time to comply, or simply do not want to because they have incompatible objectives of their own.
Ultimately, when designing a MAS, it should be specified how role-enacting agents communicate with each other, depending on the aims and characteristics of the application at hand; this determines how roles are related to each other and how role objectives and norms are ‘passed’ between related roles (Dignum, 2004). To this end communication protocols are employed, which specify who is to communicate with whom, on what subject, and in what fashion. Communication protocols consist of communication actions, which are mostly taken from standardized agent communication languages (see the next section). These protocols are application-dependent. For instance, agents in an auction scenario will use typical protocols for bidding. There are several well-known protocols in the auction theory literature, such as English, Dutch and Vickrey auctions (Sandholm, 1999). Note that these protocols are not always fixed a priori: depending on the setting (viz. coordination type), these protocols may be subject to negotiation between agents (‘contract negotiation’ in (Dignum, 2004)).
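As a concrete illustration, the Vickrey (sealed-bid, second-price) auction mentioned above can be summarized in a few lines; the Python sketch below assumes, purely for illustration, that each bidding agent submits a single numeric bid.

def vickrey_auction(bids):
    # bids: dict mapping agent name -> sealed bid.
    # The highest bidder wins but pays the second-highest bid, which is
    # what makes truthful bidding a dominant strategy in this mechanism.
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# e.g. vickrey_auction({"a1": 10, "a2": 8, "a3": 12}) returns ("a3", 10)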
More generally, one can say that the designer of a multi-agent system employs mechanism design, i.e., the design of protocols governing multi-agent interactions such that desirable properties hold, such as guaranteed success (eventually agreement will be reached), certain efficiency and optimality criteria (such as Pareto efficiency), stability (agents having an incentive to behave in a particular way, such as in a Nash equilibrium, where agents do not have any incentive to deviate from their behaviour), and equity (are all agents treated fairly?) (cf. Sandholm, 1999). Many of these criteria stem from the ‘classical’ discipline of game theory, initiated by (von Neumann and Morgenstern, 1944), which provides a rigorous, mathematical analysis of games.
In cooperative settings one can consider coordination through joint intentions. Having seen the importance of the notion of intention (and of BDI more generally) in the case of a single agent, we may also ask whether intentions (and other BDI notions) can be generalized to group notions. This has resulted in notions such as common knowledge, common/mutual belief and joint intentions (Cohen & Levesque, 1990b). Work along these lines has yielded coordination models based on teamwork, where teams are formed on the basis of joint intentions (goals) and recognized potential for cooperative action (Cohen & Levesque, 1991, Jennings, 1995, Tambe, 1997).
In more open forms of agent societies, such as networks and especially markets, there is room for negotiation between agents. We have already seen that agents may negotiate their interaction contracts, including the protocols that will be used to interact with other agents in the system, but of course other things may be negotiated as well, such as tasks and goods, depending on the application at hand (Rosenschein & Zlotkin, 1994). The area of agent negotiation has by now become a field in its own right. A related area that is also receiving more attention is that of agent argumentation. Argumentation is a classical field in philosophy and logic (Walton & Krabbe, 1995). It deals with trying to convince other agents of the truth or falsity of some state of affairs by putting forward reasons (arguments) for and against propositions, together with justifications for the acceptability of these arguments (Wooldridge, 2002). It had already been realized that argumentation can play a role in forms of reasoning in AI, particularly defeasible reasoning, where preliminary conclusions may have to be withdrawn when more information becomes available (Prakken & Vreeswijk, 2001). Since (game-theoretic) negotiation has severe limitations, especially in the setting of agents, viz. that positions can neither be justified nor changed, researchers have turned to argumentation-based negotiation: negotiation in which argumentation is employed to reach an agreement (Sycara, 1989, Parsons et al., 1998). Argumentation takes place through a dialogue, a series of arguments communicated between agents. This brings us to the important subject of agent communication.
AGENT COMMUNICATION
As we have seen before, MASs will generally involve some kind of communication between agents. Agents may communicate by means of a communication primitive such as send(agent, performative, content), whose semantics is to send the specified content to the specified agent with a certain illocutionary force, indicated by the performative, e.g. inform or request. The area of agent communication (and agent communication languages) is a field of research in itself (Dignum & Greaves, 2000). It includes the languages that are used for communication (agent communication languages or ACLs). The most well-known ACLs are KQML (Knowledge Query and Manipulation Language) (Mayfield et al., 1996) and FIPA ACL (FIPA, 1999). These languages define performatives, such as inform, request, confirm, without mandating any specific language for message content (cf. Weiss, 1999, Wooldridge, 2002). These performatives stem from speech act theory in the philosophy of language, introduced by (Austin, 1962) and further developed by (Searle, 1969), where it was observed that certain natural language utterances can be viewed as actions, changing the state of the world by their very utterance. These speech acts (performatives) are now used as building blocks for communication protocols for agents.
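A message in such a language can be thought of as a record consisting of a performative, sender, receiver and content (plus, typically, fields naming the content language and ontology used). The sketch below is a Python rendering of this idea with made-up field values; it is not the official FIPA message structure.

from dataclasses import dataclass

@dataclass
class ACLMessage:
    performative: str   # e.g. "inform", "request", "confirm"
    sender: str
    receiver: str
    content: str        # the content language is not fixed by the ACL itself
    ontology: str = ""

msg = ACLMessage(performative="request", sender="buyer1", receiver="seller3",
                 content="price(book17)?", ontology="bookshop")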
A related issue, particularly within heterogeneous agent societies, concerns the (content) language (ontology) agents use to reason about their beliefs and to communicate with each other. Of course, if agents stem from different sources (designers) and have different tasks, they will generally employ different and distinct ontologies (concepts and their representations) for performing their tasks. When communicating, it is generally not efficacious for agents to try to transmit their whole ontologies to each other, or to translate everything into one giant ‘universal’ ontology, if such a thing existed at all. Rather, a kind of ‘ontology negotiation’ should take place to arrive at a minimal solution (i.e. a sharing of concepts) that facilitates communication between those agents (Bailin & Truszkowski, 2002, van Diggelen et al., 2006).
NORMS AND E-INSTITUTIONS
In general, one has to consider the issue of balancing the individual agent’s autonomy with its behaviour in an agent society. Often this is regarded in a way similar to the way human societies operate: the behaviour of an agent in a society is constrained by norms. These are properties that agents should adhere to, specified either declaratively or procedurally by means of protocols. Typically the norms that an agent has to obey concern prohibitions and permissions to perform certain actions, which may be role-dependent. A MAS in which norms govern the behaviour of the agents is called a normative system (Meyer & Wieringa, 1993, Jones & Sergot, 1993, p. 276). The subsystem of a normative system that specifies (and enforces) the norms on agents in a MAS is called an electronic institution (e-institution) (Esteva et al., 2001).
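As a very small illustration of role-dependent prohibitions and permissions, the Python sketch below (with hypothetical roles, actions and representation; actual e-institutions such as those of (Esteva et al., 2001) use much richer, scene-based specifications) checks whether an action by a role-enacting agent would violate a norm.

# Hypothetical role-indexed norms: actions a role is forbidden or permitted to perform.
norms = {
    "buyer":      {"forbidden": {"open_auction"}, "permitted": {"bid", "leave"}},
    "auctioneer": {"forbidden": set(),            "permitted": {"open_auction", "close_auction"}},
}

def violates(role, action):
    # An action violates the norms if it is explicitly forbidden for the role,
    # or if it is not among the actions the role is permitted to perform.
    n = norms.get(role, {"forbidden": set(), "permitted": set()})
    return action in n["forbidden"] or action not in n["permitted"]

# e.g. violates("buyer", "open_auction") is True, violates("buyer", "bid") is False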
APPLICATIONS
This brings us to an important question: what kinds of applications are particularly suited for an agent-based solution? Although it is hard to say something about this in general, one may particularly think of applications where (e.g. logistical or planning) tasks may be distributed in such a way that subtasks may be (or, preferably, should be) performed by autonomous entities (agents), for instance in situations where it is virtually impossible to perform the task centrally, due to either the complexity of the task or a large degree of dynamics (changes) in the environment in which the system has to operate. So applications such as cooperative distributed problem solving agents, task and resource allocation in agent systems, distributed sensing agents, multi-agent planning, robotic co-operating teams, but also workflow and business management and the management of industrial systems fall into this category. These applications have become important subareas of agent research in their own right. For instance, although planning is a classical subject within AI, research in distributed planning in particular has been taken up within the context of multi-agent systems (Ephrati & Rosenschein, 1993, Wilkins & Myers, 1998).
Also in the area of information retrieval and management, especially in the case of large amounts of (complex) information, there are applications of agents of this kind: information agents may, for example, function as brokering and matchmaking entities that connect providers and users of information and services. This is essentially a distributed way of searching for information. In these applications one thus views agents more or less as a novel computing paradigm, related to grid, peer-to-peer (P2P) and ubiquitous computing.
Of course, there are also applications where there is a natural notion of a ‘cognitive’ agent, i.e. an agent possessing mental attitudes. For instance, in virtual environments such as (entertaining or serious) gaming where virtual characters need to behave in a natural or ‘believable’ way and have human-like features, agents seem to be the obvious choice for their realization. And the cognitive attitudes of an agent may also be fruitfully employed in human-machine interaction applications, in interaction with advanced software systems, but also in robotic applications. So synthetic, embodied, emotional and believable agents, agent-based simulation and modeling of cognitive and social behaviour, human-machine interfaces, as well as humanoid and sociable robots fall into this category.
But we should also not forget the common-sense meaning of the word agent, i.e. “someone representing you or who looks after your affairs on your behalf”. This yields the application of personal software assistants, in which agents play the role of proactive assistants to users working with some application (Wooldridge, 2002). Examples are personal information agents and web agents. In the same vein we have applications in e-commerce / e-business (such as comparison shopping agents and auction bots in auctions and electronic markets) and in applications for the Web more generally, where agents may act on behalf of a user, and in particular mobile agents that travel to other platforms on behalf of their users.
FURTHER DEVELOPMENTS
In this section we sketch a few further developments. First of all, as agent programming matures, one realizes the need for the formal verification of agent programs. Since it is deemed imperative to check the correctness of complex agent-based systems for very costly and life-critical applications, one tries to develop formal verification techniques for agents, such as model checkers (Rash et al., 2001). This is by no means a trivial matter. Having agent logics such as those mentioned above available does not mean that these logics can directly be used for verification. One of the problems known from the literature (e.g. Van der Hoek & Wooldridge, 2003) is that agent logics such as BDI-CTL are not grounded in computation; that is to say, the notions used in those logics, such as beliefs, desires and intentions, as well as semantical structures such as possible worlds and accessibility relations, are not directly related to computational notions. So researchers are working to bridge the gap between agent programs and agent logics. The aim here is to obtain verification methods, preferably (semi-)automated ones, such as proof systems and model checkers.
On another front, research is going on into extending the deliberation process of agents with notions that seem ‘irrational’ at first sight, such as emotions. The idea here is that emotions may provide heuristics for choosing between (many) alternative options of goals and plans, and thus enhance the decision-making capabilities of agents. It is also to be expected that such agents will behave in a more ‘human-like’ way, which is an aspect that is important in its own right in certain applications (such as believable virtual characters and advanced human-machine interfaces) (see e.g. Dastani & Meyer, 2006).
Another example of reconsidering agent deliberation is in the context of ‘hybrid’ agent systems with ‘humans in the loop’, where there is ‘mixed initiative’ (from both humans and artificial agents), which gives rise to adjustable autonomy of the agent, i.e. in some cases the agent is more autonomous (has more initiative) than in other cases (Klein et al., 2004).
CONCLUSION
In this article we have reviewed the area of agent technology. In particular we have seen how the idea of agent technology and agent-oriented programming evolved from philosophical considerations about human action to a way of programming intelligent artificial (computer-based) systems. We have also looked at the important and promising subfield of MAS, in particular the main issues of interest here and possible applications. Since (multi) agent programming is a way to construct complex intelligent systems in a structured and anthropomorphic way, it appears to be a technology that is widely applicable, and it may well become one of the main programming paradigms of the future.
ACKNOWLEDGMENTS
Thanks to the anonymous referees of this paper for their valuable suggestions for improvement.
BIBLIOGRAPHY
R. Audi (ed.), The Cambridge Dictionary of Philosophy, Cambridge University Press, Cambridge, 1999.
J.L. Austin, How To Do Things With Words, Oxford University Press, Oxford, 1962.
S. Bailin & W. Truszkowski, Ontology Negotiation between Intelligent Information Agents, Knowledge Engineering Review 17(1), 7-19, 2002.
F. Bergenti, M.-P. Gleizes & F. Zambonelli (eds.), Methodologies and Software Engineering for Agent Systems – The Agent-Oriented Software Engineering Handbook, Kluwer, Boston/Dordrecht, 2004.
R.H. Bordini, M. Dastani, J. Dix & A. El Fallah Seghrouchni (eds.), Multi-Agent Programming (Languages, Platforms and Applications), Springer Science, New York, 2005.
M.E. Bratman, Intentions, Plans, and Practical Reason, Harvard University Press, Cambridge, 1987.
M.E. Bratman, D.J. Israel & M.L. Pollack, Plans and Resource-Bounded Practical Reasoning, Computational Intelligence 4, 1988, pp. 349-355.
F. Brazier et al., Formal Specification of Multi-Agent Systems: A Real-World Case, in: Proc. ICMAS-95, San Francisco, CA, 1995, pp. 25-32.
R.A. Brooks, Intelligence without Reason, in Proc. IJCAI-91, Sydney, Australia, 1991, pp. 569-595.
J. Castro, M. Kolp & J. Mylopoulos, Towards Requirements-Driven Information Systems Engineering: the TROPOS Project, Information Systems 27, 2002, pp. 365-389.
P. Ciancarini &. M.J. Wooldridge (eds.), Agent-Oriented Software Engineering, Lecture Notes in Artificial Intelligence 1957, Springer, Berlin, 2001.
B. Chellas, Modal Logic: an Introduction, Cambridge University Press, Cambridge, 1980.
P.R. Cohen and H.J. Levesque, Intention is Choice with Commitment, Artificial Intelligence 42(3), 213-261, 1990.
P.R. Cohen and H.J. Levesque, Rational Interaction as the Basis for Communication, in: Intentions in Communication (P.R. Cohen, J. Morgan & M.E. Pollack, eds.), MIT Press, Cambridge, MA, 1990b, pp. 221-256.
P.R. Cohen and H.J. Levesque, Teamwork, Nous 25(4), 1991, 487-512.
M. Dastani, J. Hulstijn, F. Dignum & J.-J. Ch. Meyer, Issues in Multiagent System Development. In: Proceedings 3rd International Joint Conference On Autonomous Agents & Multi Agent Systems (AAMAS 2004) (N.R. Jennings, C. Sierra, L. Sonenberg & M. Tambe, eds.), ACM, New York, 2004.
M. Dastani & J.-J. Ch. Meyer, Programming Emotional Agents, in: Proc. ECAI 2006 (G. Brewka, S. Coradeschi, A. Perini & P. Traverso, eds.), Riva del Garda, IOS Press, Amsterdam, 2006, pp. 215-219.
D. Dennett, The Intentional Stance, Cambridge, MA, Bradford Books / MIT press, 1987.
J. van Diggelen, R.J. Beun, F. Dignum, R.M. van Eijk & J.-J. Ch. Meyer, ANEMONE: An Effective Minimal Ontology Negotiation Environment, in: Proc. Fifth Int. Joint Conf. On Autonomous Agents and Multiagent Systems (AAMAS’06) (P. Stone & G. Weiss, eds.), Hakodate, Hokkaido, Japan, ACM Press, 2006, pp. 899-906.
F. Dignum, Autonomous Agents with Norms, Artificial Intelligence and Law 7, 1999, pp. 69-79.
F. Dignum & M. Greaves (eds.), Issues in Agent Communication, Lecture Notes in Artificial Intelligence 1906, Springer, Berlin, 2000.
V. Dignum, A Model for Organisational Interaction, Based on Agents, Founded in Logic, PhD thesis, University of Utrecht, 2004.
V. Dignum, J.-J. Ch. Meyer, H. Weigand & F. Dignum, An Organisational-Oriented Model for Agent Societies, in: Proceedings International Workshop on Regulated Agent-Based Social Systems: Theory and Applications (RASTA’02) (G. Lindemann, D. Moldt, M. Paolucci, B. Yu, eds.). Univ. Hamburg. FBI—HH-M-318/02, 31-50, 2002.
M. d’Inverno et al., A Formal Specification of dMARS, in: Intelligent Agents IV (A. Rao, M.P. Singh & M.J. Wooldridge, eds.), LNAI 1365, Springer, Berlin, 1997, pp. 155-176.
E.A. Emerson, Temporal and Modal Logic, Chapter 16 in: Handbook of Theoretical Computer Science, Vol. B: Formal Models and Semantics (J. van Leeuwen, ed.), Elsevier, 1990, pp. 995-1072.
E. Ephrati & J.S. Rosenschein, Multi-Agent Planning as a Dynamic Search for Social Consensus, in: Proc. 13th Int. Joint Conf. on Artificial Intelligence (IJCAI-93), 1993, Morgan Kaufmann, San Mateo, CA, 1993, pp. 423-429.
M. Esteva, J. Padget and C. Sierra, Formalizing a Language for Institutions and Norms, in: Intelligent Agents VIII (J.-J. Ch. Meyer and M. Tambe, eds.), Lecture Notes in Artificial Intelligence 2333, Springer, Berlin, 2001, pp. 348-366.
I.A. Ferguson, TouringMachines: an Architecture for Dynamic, Rational, Mobile Agents. PhD Thesis, Clare Hall, University of Cambridge, UK, 1992.
FIPA, Specification Part 2 – Agent Communication Language, Technical Report, 1999.
M. Fisher, A Survey of Concurrent METATEM - The Language and Its Applications. In: Temporal Logic (D.M. Gabbay and H.J. Ohlbach, eds.), Lecture Notes in Artificial Intelligence 827, Springer, Berlin, 1994, 480-505.
M.P. Georgeff & A.L. Lansky, Reactive Reasoning and Planning, in: Proceedings of the 6th National Conference on Artificial Intelligence (AAAI-87), Seattle, WA, 1987, pp. 677-682.
G. de Giacomo, Y. Lespérance and H. Levesque, ConGolog, a Concurrent Programming Language Based on the Situation Calculus, Artificial Intelligence 121 (1,2), 2000, pp. 109-169.
D. Harel, D. Kozen & J. Tiuryn, Dynamic Logic, The MIT Press, Cambridge, MA, 2000.
K.V. Hindriks, F.S. de Boer, W. van der Hoek, and J.-J. Ch. Meyer, Agent Programming in 3APL, International Journal of Autonomous Agents and Multi-Agent Systems 2(4), 1999, pp. 357-401.
W. van der Hoek, B. van Linder and J.-J. Ch. Meyer, An Integrated Modal Approach to Rational Agents. In: Foundations of Rational Agency (M. Wooldridge and A. Rao, eds.), Kluwer, Dordrecht, 1998, pp. 133-168.
N.R. Jennings, Controlling Cooperative Problem Solving in Industrial Multi-Agent Systems Using Joint Intentions, Artificial Intelligence 75(2), 1995, pp. 195-240.
A. Jones & M. Sergot, On the Characterization of Law and Computer Systems: The Normative System Perspective, in: Deontic Logic in Computer Science: Normative System Specification (J.-J. Ch. Meyer & R.J. Wieringa, eds.), Wiley, Chichester, UK, 1993, pp. 275-307.
G. Klein, D.D. Woods, J.M. Bradshaw, R.R. Hoffman, P.J. Feltovich, Ten Challenges for Making Automation a "Team Player" in Joint Human-Agent Activity, IEEE Intelligent Systems 19(6), 2004, pp. 91-95.
S. Kripke, Semantical Analysis of Modal Logic, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 9, 1963, pp. 67-96.
J. Mayfield, Y. Labrou & T. Finin, Evaluating KQML as an agent communication language, in: Intelligent Agents II (M. Wooldridge, J.P. Müller & M. Tambe, eds.), LNAI 1037, Springer, Berlin, 1996, pp. 347-360.
J. McCarthy & P. Hayes, Some Philosophical Problems from the Standpoint of Artificial Intelligence, in: Machine Intelligence 4 (B. Meltzer & D. Michie, eds.), Edinburgh Univ. Press, Edinburgh, UK, 1969.
J.-J. Ch. Meyer & W. van der Hoek, Epistemic Logic for AI and Computer Science, Cambridge University Press, Cambridge, 1995.
J.-J. Ch. Meyer & R.J. Wieringa, Deontic Logic in Computer Science: Normative System Specification, Wiley, Chichester, UK, 1993.
J. Müller, A Cooperation Model for Autonomous Agents, in: Intelligent Agents III (J.P. Müller, M. Wooldridge & N.R. Jennings, eds.), Lecture Notes in Artificial Intelligence 1193, Springer, Berlin, 1997, pp. 245-260.
J. von Neumann & O. Morgenstern, Theory of Games and Economic Behaviour, Princeton University Press, Princeton, NJ, 1944.
N.J. Nilsson, Artificial Intelligence: A New Synthesis, Morgan Kaufmann, San Francisco, 1998.
J. Odell, H. Parunak & B. Bauer, Representing agent interaction protocols in UML, in: Agent-Oriented Software Engineering (P. Ciancarini & M.J. Wooldridge, eds.), LNCS 1957, Springer, Berlin, 2001, pp. 185-194.
S. Parsons, C.A. Sierra & N.R. Jennings, Agents That Reason and Negotiate by Arguing, J. Logic and Computation 8(3), 1998, pp. 261-292.
H. Prakken & G. Vreeswijk, Logics for Defeasible Argumentation, in: Handbook of Philosophical Logic, 2nd edition (D. Gabbay & F. Guenthner, eds.), Kluwer, Boston, MA, 2001.
A.S. Rao and M.P. Georgeff, Modeling rational agents within a BDI-architecture, in: Proc. 1991 Conf. On Knowledge Representation (J. Allen, R. Fikes and E. Sandewall, eds.). Morgan Kaufmann, San Francisco, 1991, pp. 473-484.
J.L. Rash, C.A. Rouff, W. Truszkowski, D. Gordon & M.G. Hinchey (eds.), Proceedings of the First Goddard Workshop on Formal Approaches to Agent-Based Systems (FAABS 2000), Lecture Notes in Artificial Intelligence 1871, Springer, Berlin/Heidelberg, 2001.
J.S. Rosenschein & G. Zlotkin, Rules of Encounter: Designing Conventions for Automated Negotiation among Computers, MIT Press, Cambridge, MA, 1994.
S. Russell & P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, Englewood Cliffs, NJ, 1995.
T. Sandholm, Distributed Rational Decision Making, in: G. Weiss (ed.), Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, MIT Press, Cambridge, MA, 1999, pp. 201-258.
Y. Shoham, Agent-Oriented Programming, Artificial Intelligence 60(1), 1993, pp. 51-92.
J.R. Searle, Speech Acts: an Essay in the Philosophy of Language, Cambridge University Press, Cambridge, 1969.
K.P. Sycara, Multiagent Compromise via Negotiation, in: Distributed Artificial Intelligence, Vol. II (L. Gasser & M. Huhns, eds.), Pitman/Morgan Kaufmann, London/San Mateo, CA, 1989, pp. 119-138.
M. Tambe, Towards Flexible Teamwork, J. AI Research 7, 1997, pp. 83-124.
W. Van der Hoek & M. Wooldridge, Towards a Logic of Rational Agency, Logic Journal of the IGPL 11(2), 2003, pp. 133-157.
D. N. Walton & E.C.W. Krabbe, Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning, SUNY Press, Albany, NY, 1995.
G. Weiss (ed.), Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, MIT Press, Cambridge, MA, 1999.
D.E. Wilkins & K. Myers, A Multiagent Planning Architecture, in: Proc. 4th Int. Conf. on Artificial Intelligence Planning Systems (AIPS-98), AAAI Press, Menlo Park, CA, 1998, pp. 154-162.
R.A. Wilson & F.C. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences, Bradford Book / MIT Press, Cambridge, MA, 1999.
M.J. Wooldridge, Reasoning about Rational Agents, The MIT Press, Cambridge, MA, 2000.
M.J. Wooldridge, An Introduction to MultiAgent Systems, John Wiley & Sons, Chichester, 2002.
M.J. Wooldridge & N.R. Jennings (eds.), Intelligent Agents, Lecture Notes in Artificial Intelligence 890, Springer, Berlin, 1995.
M.J. Wooldridge, N.R. Jennings & D. Kinny, A Methodology for Agent-Oriented Analysis and Design, in: Proc. Agents ’99, Seattle, WA, 1999, pp. 69-76.
READING LIST
M.N. Huhns & M.P. Singh, Readings in Agents, Morgan Kaufmann, San Francisco, CA, 1998.
(worthwhile collection of the older, seminal papers)
Autonomous Agents & Multi Agent Systems: the yearly proceedings of the international AAMAS conferences, ACM Press, since 2002. These are the continuation of separate conferences and workshops held from the 1990s until 2002, i.e. ICMAS (Int. Conf. on MultiAgent Systems), Autonomous Agents, and ATAL (Agent Theories, Architectures and Languages). There is also a journal with the same name, originally published by Kluwer and currently by Springer.