Agent Based Modeling: What is it?


  1. Agent Based Modeling

    1. What is it?

ABM is a software engineering method whose roots emerged from early attempts at artificial intelligence, and it has much in common with object-oriented software engineering [4]. ABM goes by many names; other terms often used to describe it include Multi Agent Systems, Agent Based Simulation, Individual Based Modelling, Agent-Based Computing, and Social Level Modelling [e.g. 1, 2, 3, 4]. These names are often used interchangeably and occasionally overlap, as to date there is no standard definition of the term Agent Based Modeling.

What is an agent?

–A discrete entity with its own goals and behaviors

–Autonomous, with a capability to adapt and modify its behaviors


ABM rests on the premises that:

–Some key aspect of behaviors can be described.

–Mechanisms by which agents interact can be described.

–Complex social processes and systems can be built “from the bottom up.”


Examples of agents include:

–People, groups, organizations

–Social insects, swarms

–Robots, systems of collaborating robots

Agents are diverse and heterogeneous

An agent-based model consists of:

–A set of agents (part of the user-defined model)

–A set of agent relationships (part of the user-defined model)

–A framework for simulating agent behaviors and interactions (provided by an ABMS toolkit or other implementation)
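The three ingredients above can be sketched in a few lines of Python. This is a hypothetical toy in the spirit of the well-known wealth-exchange example; the `Agent`/`Model` classes, the all-to-all relationship, and the give-one-unit rule are illustrative assumptions, not the API of any real ABM toolkit:

```python
import random

class Agent:
    """A discrete entity with its own state and behavior (user-defined)."""
    def __init__(self, agent_id, wealth=1):
        self.agent_id = agent_id
        self.wealth = wealth

    def step(self, neighbors):
        # Behavior: give one unit of wealth to a random neighbor, if able.
        if self.wealth > 0 and neighbors:
            other = random.choice(neighbors)
            other.wealth += 1
            self.wealth -= 1

class Model:
    """The framework part: holds the agents, their relationships,
    and the schedule that drives behaviors and interactions."""
    def __init__(self, n_agents):
        self.agents = [Agent(i) for i in range(n_agents)]
        # Agent relationships: here, every agent is every other's neighbor.
        self.neighbors = {a.agent_id: [b for b in self.agents if b is not a]
                          for a in self.agents}

    def step(self):
        # Activate agents in random order each tick (a common ABM convention).
        for agent in random.sample(self.agents, len(self.agents)):
            agent.step(self.neighbors[agent.agent_id])

model = Model(n_agents=10)
for _ in range(100):
    model.step()
# Total wealth is conserved, but its distribution across agents
# emerges "from the bottom up" out of the local exchanges.
```

In a real toolkit the `Model`-like framework (scheduler, space, data collection) is provided for you, and you write only the agent and relationship parts.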

Agents can be situated in different kinds of environments:

–Agents can move in free (continuous) space

–Cellular automata have agents interacting in local “neighborhoods”

–Agents can be connected by networks of various types, which may be static or dynamic

–Agents can move over Geographical Information Systems (GIS) tilings

–Sometimes spatial interactions are not important (the “soup” model)
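For the cellular-automaton case, the local "neighborhood" idea is easy to make concrete. The sketch below computes a Moore neighborhood on a wrapping grid and uses it to step Conway's Game of Life, the textbook cellular automaton; the function names are illustrative:

```python
def moore_neighborhood(grid, x, y):
    """Return the states of the eight cells around (x, y),
    wrapping around at the grid edges (a torus)."""
    rows, cols = len(grid), len(grid[0])
    return [grid[(x + dx) % rows][(y + dy) % cols]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def life_step(grid):
    """One step of Conway's Game of Life: a cell is alive next tick if it
    has exactly 3 live neighbors, or is alive now with exactly 2."""
    return [[1 if (n := sum(moore_neighborhood(grid, x, y))) == 3
             or (grid[x][y] == 1 and n == 2) else 0
             for y in range(len(grid[0]))]
            for x in range(len(grid))]

# A vertical "blinker" becomes horizontal after one step:
blinker = [[0] * 5 for _ in range(5)]
for x in (1, 2, 3):
    blinker[x][2] = 1
after = life_step(blinker)
```

Every cell here is a very simple "agent" whose behavior depends only on its local neighborhood, which is what distinguishes this environment from free space or networks.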

Agents Are Autonomous Decision-making Units with Diverse Characteristics (Heterogeneous) [2]

An agent is an encapsulated computer system that is situated in some environment and that is capable of flexible, autonomous action in that environment in order to meet its design objectives.

There are a number of points about this definition that require further explanation. Agents are:

(i) clearly identifiable problem solving entities with well-defined boundaries and interfaces;

(ii) situated (embedded) in a particular environment—they receive inputs related to the state of their environment through sensors and they act on the environment through effectors;

(iii) designed to fulfill a specific purpose—they have particular objectives (goals) to achieve;

(iv) autonomous—they have control both over their internal state and over their own behaviour;

(v) capable of exhibiting flexible problem solving behaviour in pursuit of their design objectives—they need to be both reactive (able to respond in a timely fashion to changes that occur in their environment) and proactive (able to act in anticipation of future goals). [3]

Drawing these points together (Fig. 1), the essential concepts of agent-based computing can be seen to be: agents, high-level interactions and organisational relationships (see [14,19,23] for broadly similar characterisations). [3]

The agent-oriented approach advocates decomposing problems in terms of autonomous agents that can engage in flexible, high-level interactions. [3]

An intelligent agent is generally regarded as an autonomous decision-making system, which senses and acts in some environment [4]

By an agent-based system, we mean one in which the key abstraction used is that of an agent. Agent-based systems may contain a single agent (as in the case of user interface agents or software secretaries [39]), but arguably the greatest potential lies in the application of multi-agent systems [6]. By an agent, we mean a system that enjoys the following properties [66, pp. 116–118]:

(a) Autonomy: agents encapsulate some state (that is not accessible to other agents), and make decisions about what to do based on this state, without the direct intervention of humans or others.

(b) Reactivity: agents are situated in an environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or perhaps many of these combined), are able to perceive this environment (through the use of potentially imperfect sensors), and are able to respond in a timely fashion to changes that occur in it.

(c) Proactiveness: agents do not simply act in response to their environment, they are able to exhibit goal-directed behaviour by taking the initiative.

(d) Social ability: agents interact with other agents (and possibly humans) via some kind of agent-communication language [17], and typically have the ability to engage in social activities (such as cooperative problem solving or negotiation) to achieve their goals. [4]
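As a rough illustration only (not taken from [4]), the four properties can be mapped onto a toy Python class; the class name, message fields, and thermostat scenario are all invented for this sketch:

```python
class ThermostatAgent:
    """Toy agent illustrating autonomy, reactivity, proactiveness and
    social ability. Entirely hypothetical; not from any real framework."""

    def __init__(self, target):
        # Autonomy: state is encapsulated and not directly accessible to others.
        self._target = target
        self._heater_on = False

    def perceive_and_act(self, temperature):
        # Reactivity: sense the environment and respond in a timely fashion.
        self._heater_on = temperature < self._target

    def propose_goal(self):
        # Proactiveness: take the initiative toward a goal, not merely react.
        return {"performative": "request", "content": "maintain",
                "value": self._target}

    def receive(self, message):
        # Social ability: interact via a (very simplified) agent-communication
        # message rather than a direct method call on internal state.
        if (message.get("performative") == "request"
                and message.get("content") == "set-target"):
            self._target = message["value"]
```

The point of the sketch is only that each property shows up as a distinct part of the interface: private state, a sense/act cycle, goal generation, and message handling.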

Agents are simply software components that must be designed and implemented in much the same way that other software components are. However, AI techniques are often the most appropriate way of building agents. [4]

The most obvious difference between the ‘standard’ object model and our view of agent-based systems is that in traditional object-oriented programs there is a single thread of control. In contrast, agents are process-like, concurrently executing entities. However, there have been variants on the basic object model in which objects are more like processes: object-based concurrent programming models such as ACTORS [1] have long been recognised as an elegant model for concurrent computation, and ‘active object’ systems are also quite similar; even comparatively early on in the development of object-oriented programming, it was recognised that something like agents would be a natural next step. [4]

In addition, the object-oriented community has not addressed issues like cooperation, competition, negotiation, computational economies and so on, that form the foundation for multi-agent systems development [4]
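The ‘active object’ idea mentioned above can be sketched in a few lines of Python: the object runs in its own thread of control and serves requests from a mailbox, instead of executing methods on the caller's thread. This is an illustrative toy (the class name and the doubling "behavior" are invented), not the ACTORS model itself:

```python
import threading
import queue

class ActiveObject(threading.Thread):
    """An object with its own thread of control: requests go into a mailbox
    and are processed asynchronously, unlike a passive object whose methods
    run on the caller's thread."""

    def __init__(self):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()   # incoming requests
        self.results = queue.Queue()   # outgoing replies

    def run(self):
        while True:
            request = self.mailbox.get()
            if request is None:        # poison pill: shut down cleanly
                break
            self.results.put(request * 2)  # stand-in for real agent behavior

agent = ActiveObject()
agent.start()
agent.mailbox.put(21)
print(agent.results.get())  # prints 42
agent.mailbox.put(None)
```

The caller never invokes the behavior directly; it only exchanges messages, which is the step from objects toward agents that the quote describes.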

    1. Why use it?

There is greater demand for ABM due to the desire to model complex systems in finance, ecology, biology, physics, chemistry, and social networks. Thanks to new design tools, increased processing power, new design methodologies, and greater access to data, things can now be modelled that previously couldn't be [2].

Agent modelling should be used when:

–When there is a natural representation as agents

–When there are decisions and behaviors that can be defined discretely (with boundaries)

–When it is important that agents adapt and change their behavior

–When it is important that agents learn and engage in dynamic strategic behavior

–When it is important that agents have dynamic relationships with other agents, and agent relationships form and dissolve

–When it is important that agents form organizations, and adaptation and learning are important at the organization level

–When it is important that agents have a spatial component to their behaviors and interactions

–When the past is no predictor of the future

–When scale-up to arbitrary levels is important

–When process structural change needs to be a result of the model, rather than an input to the model [2]

Agents are being espoused as a new theoretical model of computation that more closely reflects current computing reality than Turing Machines [3]

In this article, it is argued that although contemporary methods are a step in the right direction, when it comes to developing complex, distributed systems they fall short in two main ways:

(i) the interactions between the various computational entities are too rigidly defined;

(ii) there are insufficient mechanisms available for representing the system’s inherent organisational structure (see Section 4 for more details of these arguments).

Against this background, the two central arguments of this paper can be expressed:

The Adequacy Hypothesis. Agent-oriented approaches can significantly enhance our ability to model, design and build complex, distributed software systems.

The Establishment Hypothesis. As well as being suitable for designing and building complex systems, the agent-oriented approach will succeed as a mainstream software engineering paradigm. [3]

Decomposing a problem in such a way aids the process of engineering complex systems in two main ways. Firstly, it is simply a natural representation for complex systems that are invariably distributed (“all real systems are distributed” [22]) and that invariably have multiple loci of control (“real systems have no top” [42, p. 47]). This decentralisation, in turn, reduces the system’s control complexity and results in a lower degree of coupling between components. The fact that agents are active means they know for themselves when they should be acting and when they should update their state (cf. passive objects that need to be invoked by some external entity to do either). Such self-awareness reduces control complexity since the system’s control know-how is taken from a centralised repository and localised inside each individual problem solving component. Secondly, since decisions about what actions should be performed are devolved to autonomous entities, selection can be based on the local situation of the problem solver. This enables selection to be responsive to the agent’s actual state of affairs, rather than some external entity’s perception of this state, and means that the agent can attempt to achieve its individual objectives without being forced to perform potentially distracting actions simply because they are requested by some external entity.

Moving on to the flexible nature of interactions: the fact that agents make decisions about the nature and scope of interactions at run-time makes the engineering of complex systems easier for two main reasons. Firstly, the system’s inherent complexity means it is impossible to know a priori about all potential links: interactions will occur at unpredictable times, for unpredictable reasons, between unpredictable components. For this reason, it is futile to try and predict or analyse all the possibilities at design-time. Rather, it is more realistic to endow the components with the ability to make decisions about the nature and scope of their interactions at run-time. From this, it follows that components need the ability to initiate (and respond to) interactions in a flexible manner (see Section 5 for a discussion of the downside of this flexibility). Thus agents are specifically designed to deal with unanticipated requests and they can spontaneously generate requests for assistance whenever appropriate. Secondly, the problem of managing control relationships between the software components is significantly reduced (see above discussion). All agents are continuously active and any coordination that is required is handled bottom-up through inter-agent interaction. Thus, the ordering of the system’s top-level goals is no longer something that has to be rigidly prescribed at design time. Rather, it becomes something that is handled in a context-sensitive manner at run-time. [3]

The success of such agent-oriented systems, both in terms of increased throughput and greater robustness to failure, can be attributed to a number of points. Firstly, representing the components and the machines as agents means the decision making is much more localised. It can, therefore, be more responsive to prevailing circumstances. If unexpected events occur, agents have the autonomy and proactiveness to try alternatives. Secondly, because the schedules are built up dynamically through flexible interactions, they can readily be altered in the event of delays or unexpected contingencies. For example, if one of the constituent parts of a composite item is delayed en route to a synchronisation point, it can inform the remaining team members. Together they can then re-arrange the meeting time and adapt their individual behaviour accordingly. Thirdly, the explicitly defined relationships between the constituent parts of a composite item identify those agents that need to coordinate their actions. Moreover, a composite item team can be treated as a single conceptual entity by machines further on down the manufacturing line. This, in turn, eases the scheduling task by reducing the number of items that need to be considered during decision making. [3]

Agents appear to be a promising approach to developing many complex applications, ranging from Internet-based electronic commerce and information gathering to industrial process control [4]

    1. How do you use it?

When adopting an agent-oriented view of the world, it soon becomes apparent that most problems require or involve multiple agents; to represent the decentralised nature of the problem, the multiple loci of control, the multiple perspectives or the competing interests [3]. Moreover, the agents will need to interact with one another, either to achieve their individual objectives or to manage the dependencies that ensue from being situated in a common environment [9,29]. These interactions can vary from simple information interchanges, to requests for particular actions to be performed and on to cooperation, coordination and negotiation in order to arrange interdependent activities. In all of these cases, however, there are two points that qualitatively differentiate agent interactions from those that occur in other computational models. Firstly, agent-oriented interactions are conceptualised as taking place at the knowledge level [40]. That is, they are conceived in terms of which goals should be followed, at what time, and by whom (cf. method invocation or function calls that operate at a purely syntactic level). Secondly, as agents are flexible problem solvers, operating in an environment over which they have only partial control and observability, interactions need to be handled in a similarly flexible manner. Thus, agents need the computational apparatus to make run-time decisions about the nature and scope of their interactions and to initiate (and respond to) interactions that were not foreseen at design time (cf. the hard-wired engineering of such interactions in extant approaches).

In most cases, agents act to achieve objectives either on behalf of individuals/companies or as part of some wider problem solving initiative. Thus, when agents interact there is typically some underpinning organisational context between them [14,19]. This context defines the nature of the relationship between the agents (e.g., they may be peers working together in a team or one may be the manager of the other agents) and consequently influences their behaviour. Since agents make decisions about the nature and scope of interactions at run time, it is imperative that this key shaping factor is taken into account. Thus organisational relationships need to be represented explicitly. In many cases, these relationships are subject to ongoing change: social interaction means existing relationships evolve and new relations are created. This means the temporal extent of relationships can also vary significantly, from just long enough to deliver a particular service once, to a permanent bond. To cope with this variety and dynamism, agent researchers have: devised protocols that enable organisational groupings to be formed and disbanded; specified mechanisms to ensure groupings act together in a coherent fashion; and developed structures to characterise the macro behaviour of collectives (see [37,60] for an overview). [3]
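A classic example of such a grouping protocol is the contract net, in which a manager announces a task, agents decide locally whether to bid, and the task is awarded to the best bidder. The sketch below is a bare-bones Python illustration; the `Worker` class, task format, and lowest-cost award rule are simplifying assumptions, not part of any cited protocol specification:

```python
class Worker:
    """Hypothetical bidder agent with a name and a cost for doing tasks."""
    def __init__(self, name, cost):
        self.name = name
        self.cost = cost

    def bid(self, task):
        # Run-time decision: each agent decides locally whether to bid at all.
        # Here, workers only handle "transport" tasks (an invented task type).
        return self.cost if task["type"] == "transport" else None

def contract_net(task, workers):
    """Minimal contract-net round: announce the task, collect bids,
    award to the lowest-cost bidder (or None if nobody bids)."""
    bids = {w.name: w.bid(task) for w in workers}
    valid = {name: b for name, b in bids.items() if b is not None}
    return min(valid, key=valid.get) if valid else None

crew = [Worker("a", 3), Worker("b", 1), Worker("c", 2)]
winner = contract_net({"type": "transport"}, crew)  # "b" wins with cost 1
```

Note that the grouping (manager plus winning bidder) is formed at run-time from the agents' own decisions, which is exactly the flexibility the quote emphasises.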

One of the most successful solutions to this problem involves viewing agents as intentional systems whose behaviour can be predicted and explained in terms of attitudes such as belief, desire and intention [4]

This intentional stance, whereby the behaviour of a complex system is understood via the attribution of attitudes such as believing and desiring, is simply an abstraction tool. It is a convenient shorthand for talking about complex systems, which allows us to succinctly predict and explain their behaviour without having to understand how they actually work. [4]

For many researchers in AI, this idea of programming computer systems in terms of ‘mentalistic’ notions such as belief, desire, and intention is the key component of agent-based computing. The concept was articulated most clearly by Yoav Shoham in his agent-oriented programming (AOP) proposal [57]. [4]

In AOP the idea is that, as in declarative programming, we state our goals, and let the built-in control mechanism figure out what to do to achieve them. In this case, however, the control mechanism implements some model of rational agency (such as the Cohen-Levesque theory of intention [8], or the Rao-Georgeff BDI model [47]). Hopefully, this computational model corresponds to our own intuitive understanding of (say) beliefs and desires, and so we need no special training to use it. [4]

Ideally, as AOP programmers, we would not be concerned with how the agent achieves its goals. The reality, as ever, does not quite live up to the ideal. [4]
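To make the "state goals, let the control mechanism figure out what to do" idea concrete, here is a deliberately tiny deliberation step in Python. It is a caricature of BDI (beliefs, desires, plans, commitment), with all names and the plan format invented for the sketch; it is not the Cohen-Levesque or Rao-Georgeff formalism:

```python
def bdi_step(beliefs, desires, plans):
    """One toy deliberation cycle.

    beliefs: set of facts the agent currently holds true.
    desires: set of goals the agent would like to achieve.
    plans:   goal -> list of (preconditions, effects) pairs.

    Picks an unachieved goal with an applicable plan, "executes" it by
    adding its effects to the beliefs, and returns the goal committed to
    (the intention), or None if nothing is applicable.
    """
    for goal in desires:
        if goal in beliefs:
            continue                     # desire already satisfied
        for preconditions, effects in plans.get(goal, []):
            if preconditions <= beliefs:  # plan applicable in current state
                beliefs |= effects        # execute: effects become beliefs
                return goal               # the adopted intention
    return None

# Hypothetical usage: the agent is at home and desires to be at work.
beliefs = {"at_home"}
desires = {"at_work"}
plans = {"at_work": [({"at_home"}, {"at_work"})]}
intention = bdi_step(beliefs, desires, plans)  # commits to "at_work"
```

The programmer only states the goal and the plan library; which plan fires, and when, is decided by the loop, which is the (much-simplified) spirit of AOP.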

  1. Agent Based Modeling Platforms


Repast S

    1. What


    1. Why


    1. How


  1. Network Models

    1. What


    1. Why


    1. How


  1. Epidemiology Models

    1. What


    1. Why


    1. How


  1. Viruses

    1. What


    1. Why


    1. How

