System Identification


CHAPTER 1: Systems and Models






1.4.1 Interpretative models

1.4.2 Predictive models

1.4.3 Models for filtering and state estimation

1.4.4 Models for diagnosis

1.4.5 Models for simulation


1.5.1 Approaches to model building

1.5.2 Deducing models from other models: physical modeling

1.5.3 Inductive Modelling and System Identification Methodology


1.6.1 Basic concepts

1.6.2 The modelling and simulation process

1.6.3 Validity of a model: a system–model relation

1.7 Sources

CHAPTER 1: Systems and Models


The word system (from the Greek συστημα) is used in a diversified range of contexts and has thus assumed different meanings. The most common definition of system concerns a group of parts linked by some form of interaction. In the context of System Theory a system can be defined as a slice of reality whose evolution in time can be described by a certain number of measurable attributes: for example, a machine tool, an electric motor, a computer, an artificial satellite, or the economy of a nation.

A measurable attribute is a characteristic that can be correlated with one or more numbers, either integer, real or complex, or simply a set of symbols. Examples include the rotation of a shaft (a real number), the voltage or impedance between two given points of an electric circuit (a real or complex number), any color belonging to a set of eight well-defined colors (an element of a set of eight symbols; for instance, digits ranging from 1 to 8 or letters from a to h), the position of a push button (a symbol equal to 0 or 1, depending on whether it is released or pressed).
In systems, one is interested on the one hand in their internal functional relationships, and on the other hand in their external relations to the environment. The former is called the structure of the system; the latter is its behaviour. Behaviour is concerned with the dependence of responses on stimuli. Structure is concerned with the manner of arrangement, that is, the organisation of the mutual coupling between the elements of the system and the behaviour of these elements.
A system can be represented as a block and its external relations as connections with the environment or other systems, as shown by the simple diagram of Fig. 1.1.1. When a separation of quantities into those produced by the environment and those produced by the system is given in advance, the former can be distinguished as inputs and the latter as outputs. Such systems are called oriented (or controlled) systems. If a separation into inputs and outputs is not given, one talks about non-oriented (or neutral) systems [Van Welden, 2000].

Figure 1.1.1. Schematic representation of a system [Basile and Marro, 2002]

It is worth noting that the distinction between causes and effects is, in some cases, anything but immediate. Consider, for instance, the simple electric circuit shown in Fig. 1.1.2(a), whose variables are v and i. It can be oriented as in Fig. 1.1.2(b), i.e., with v as input and i as output: this is the most natural choice if the circuit is supplied by a voltage generator. But the same system may be supplied by a current generator, in which case i would be the cause and v the effect, and the corresponding oriented block diagram would be as shown in Fig. 1.1.2(c) [Basile and Marro, 2002].

Figure 1.1.2. An electric system with two possible orientations.

In this course, a first restriction is made to oriented systems (see Figure 1.1.3), which represent relatively closed physical systems.
For the oriented system in Figure 1.1.3 the external quantities are U(t) and Y(t); the internal quantity is X(t). The input is represented by U(t) and the output by Y(t).

Figure 1.1.3: An oriented system

The measurement of system attributes introduces the problem of establishing quantitative relationships between them, i.e. of constructing abstract (mathematical) models.
Since mathematical models are themselves systems, although abstract, it is customary to denote both the object of the study and its mathematical model by the word “system” [Basile and Marro, 2002]. The discipline called System Theory pertains to the derivation of mathematical models for systems, their classification, investigation of their properties, and their use for the solution of engineering problems.


The etymological roots of model come from the Latin word modus and its diminutive modulus, both meaning measure; its initial use in science and technology can be associated with the scale representations used by architects to reproduce the shape of buildings before their actual construction. In this case the models are physical systems that reproduce approximately the aesthetic properties of other physical systems before their realization.

An important advancement was achieved with the introduction of models that were still small-scale reproductions of physical systems but were used to investigate their behavior either before construction or under operating conditions otherwise impossible or too expensive to realize. A further step was taken by reproducing the behavior of a system on another system that could be more easily studied, taking advantage of the fact that different physical laws can be described by formally equal relations. A typical example is given by analog computers, structured as flexible electrical networks that, properly interconnected (programmed), reproduce the behaviors of other systems (mechanical, hydraulic, economic, etc.) less suitable for direct experiments. These models could be defined as analog or, to avoid any confusion with the common use of this term (analog denoting quantities whose measure can be performed with continuity, as opposed to digital, used to denote quantizations), as models based on analogy laws.
It can be observed that these models offer greater flexibility than the previous ones, where the only degree of freedom was the scale factor, since the physical nature of the model is also a matter of choice. However, the use of models based on analogy laws requires knowledge of the laws describing the behavior of the system to be studied, since it is necessary to select, construct or configure another system governed by analogous laws and to study its behavior from suitable initial conditions. What is, on the contrary, not required is the capability of constructing a complete mathematical model and of using it to determine the system behavior.
The last step in the evolution of models is the development of abstract models i.e. mathematical models that describe the links established by the system between its measurable attributes. The important role played currently by mathematical models in science and technology is due to the availability of both the abstract tools offered by System Theory and numerical computers that allow their effective use.


It has already been observed that mathematical models limit their description to the quantitative links established by real systems between their measurable attributes, so that they constitute, in any case, only partial descriptions. The asymptotic evolution of science has, however, also dispelled the Enlightenment illusion that exact descriptions of reality are achievable [Guidorzi, 2003]. Even the relations accepted as laws of nature can be considered, at most, as models not yet falsified. Newton's law of motion is a good example of the extended acceptance of a mathematical relation as an absolute description of a phenomenon before its falsification, but it is also a good example of the excellent accuracy of a simple model in describing a very large range of situations.

Many phenomena are simply too complex to be described in detail by manageable models and/or are not ruled by any definite law of nature (e.g. national economies). The construction of mathematical models should thus be ruled by usefulness criteria more than by (always relative) truth criteria. The inherent approximations associated with models show that different models of the same system can be used for different purposes (interpretation, prediction, filtering, diagnosis, simulation), optimizing their performance for these tasks. The criteria used to compare and select models have, consequently, both philosophical and practical importance. A well-known criterion is Occam's razor, due to William of Occam (1290–1350), which establishes that:

the simpler among the models accounting for the same phenomenon must be preferred.

This principle certainly helped the acceptance of the model proposed for the solar system by Copernicus who, prudently, emphasized that his heliocentric model should be considered only as an exercise to obtain in a simpler way the results of the officially accepted Ptolemaic model.
A different description of the parsimony principle can be found in the work of Popper (1963). According to this author,

among the models that explain the available observations, the model explaining as little else as possible (the most powerful unfalsified model) is to be preferred.

The parsimony principle is supported not only by philosophical arguments (and by common sense) but also by mathematical arguments that show how increasing model complexity leads, when the models are deduced from uncertain data, to corresponding increases in the uncertainty of their parameters.


Mathematical models have been defined as sets of relations among the measurable attributes of a system, describing the links established by the system among these quantities. This limits the descriptive capability of mathematical models to attributes that can be expressed by means of numbers and furthermore shows that these models constitute, in any case, only an approximate description of reality. The intrinsic approximation performed by introducing models can be better evaluated in the context of a classification based on the purposes of modeling [Guidorzi, 2003].

1.4.1 Interpretative models

The rationale of these models lies in satisfying scientific curiosity and rationalizing the behavior of observed processes. They can also be seen as ways to extract essential information from complex experiments or to substitute (large amounts of) data with a data-generating mechanism. The purpose of interpretative models is to increase the understanding of the slice of reality existing behind the observed phenomena; they must thus "interpret" sets of collected data, but they do not necessarily have any capability to generate other (future) sets of data (that will be) generated by the same system.

Most physical laws can be seen as models of this kind. Ptolemy's remark on the possibility of describing the same observations with different models highlights that the interpretation essentially concerns some measurable attributes of phenomena and not (necessarily) their actual nature.
Another important observation concerns the approximations of interpretative models and/or their limited range of validity. Thus Newton's law of motion, giving a simple relation between the force acting on a mass and its acceleration, leads to large errors for speeds approaching the speed of light.
Models of this kind have been developed by Ptolemy, Copernicus, Kepler, Galileo, Newton and Halley to describe the motion of physical objects. Interpretative models are used in a large number of disciplines like econometrics, ecology, life sciences, agriculture, physics.

Example – Sunspot cycle
The plot of Figure 1.4.1 shows the yearly mean sunspot count from 1749 to 1983, computed from daily relative sunspot numbers evaluated on the basis of more than fifty observing stations around the world.

Figure 1.4.1 – Yearly mean sunspot count from 1749 to 1983

Estimating from this sequence (after subtracting its mean value) a second-order autoregressive (AR) model with the least-squares algorithm, we obtain the model

y(t +2) = 1.3873 y(t + 1) − 0.6937 y(t)

whose poles, p1,2 = 0.6936 ± i 0.4611, indicate a periodicity of 10.71 years for the phenomenon. This “law”, obtained by means of a mathematical elaboration of the observations, compares well with the commonly assumed period of 11 years and with the approximate evaluation that can be directly obtained from the plot.
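This computation can be checked numerically. The sketch below, using NumPy, derives the poles of the estimated AR(2) model from its coefficients and converts the pole angle into a period (one sample corresponds to one year).

```python
import numpy as np

# Characteristic polynomial of y(t+2) = 1.3873 y(t+1) - 0.6937 y(t):
#   z^2 - 1.3873 z + 0.6937 = 0
poles = np.roots([1.0, -1.3873, 0.6937])

# The complex-conjugate poles oscillate with angular frequency equal to
# the pole angle (radians per sample); the period is 2*pi over that angle.
theta = abs(np.angle(poles[0]))
period_years = 2 * np.pi / theta

print(poles)          # approx. 0.6936 +/- 0.4611j
print(period_years)   # approx. 10.71 years
```

The recovered period of about 10.71 years agrees with the value quoted above and with the commonly assumed 11-year sunspot cycle.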

1.4.2 Predictive models

The rationale of predictive models is forecasting the future behavior of a system, i.e. extrapolating available observations into the future. This is probably the most frequent use of mathematical models, with applications in many different fields (e.g. forecasting demand for specific products, weather conditions, population growth, the future state of an ecosystem or of a plant).

The predictions obtained in this way are often used to manipulate the inputs of the considered system to achieve specific objectives like the desired attitude of an aircraft or of a missile, the position of a robot arm, the degree of purity of the output of a distillation column or the inflation rate. Other less obvious applications of predictive models concern speech and image processing to reduce bandwidth requirements in transmission and recording. A model can, of course, be at the same time an interpretative model and a predictive model. As observed by Norton (1986), when Halley in 1704 realized that the observations of 1531, 1607 and 1682 referred to the same comet and computed its orbit, he constructed an interpretative model that predicted accurately the subsequent return of 1758.
Example – Forecast of a river flow
Since 1975 the Welsh Water Authority has operated a real-time flow forecasting system on the River Dee as part of extensive water supply and flood control schemes for the catchment. The River Hirnant subcatchment, with an area of 33.9 km², is situated west of Bala Lake, in North Wales. It is composed mainly of rocks, providing very little storage for rainfall. Furthermore, because of its steep slopes, it produces a fast streamflow response to rainfall. Figure 1.4.2 reports a rainfall recording over a period of 60 hours and Figure 1.4.3 the corresponding streamflow measured at Plas Rhiwaedog; the data are sampled at half-hourly intervals.
Figure 1.4.3 also reports (dotted line) the forecast of a model, obtained with identification techniques, whose input is the rainfall. While short-term forecasts (a few hours), useful for early flood warning, can rely on the available rainfall measurements, long-term forecasts, useful for water resources management, must rely on weather forecasts and are, consequently, less accurate.

Figures 1.4.2 and 1.4.3 – Rainfall on the catchment and River Hirnant streamflow

1.4.3 Models for filtering and state estimation

The rationale here is the extraction of some external variables (outputs) from noisy measurements performed on the system and/or the estimation of some internal variables (states) from external measurements affected by errors.

Applications concern the reception and processing of radio signals (e.g. telemetry and pictures sent from a spacecraft), transmission of digital data over noisy channels (e.g. telephone lines), processing of radar signals, analysis of electrocardiographic and electroencephalographic signals, geophysical data processing, monitoring of industrial plants and of natural systems, demography.
An application frequently cited is the Kalman filter used to estimate the state (position and velocity) of the spacecraft (Apollo 11) in the first manned lunar mission; all other space missions to Mercury, Venus, Mars and beyond relied, even more heavily, on these techniques.

Example – Tracking of a maneuvering target
The altitude of a maneuvering target, given every 10 s by a radar system, is affected by an error with a standard deviation σa = 49 m. The actual altitude is estimated by means of a Kalman filter that reduces the error standard deviation to σa = 43 m (Figure 1.4.4). The state of the Kalman filter also gives an estimate of the vertical speed of the target, reported in Figure 1.4.5 against its actual value.

Figures 1.4.4 and 1.4.5 – Altitude error and estimated vertical speed of the target
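A minimal version of such a tracking filter can be sketched as follows. Only the 10 s radar period and the 49 m measurement error come from the example above; the constant-velocity state model, the process noise covariance, the trajectory and the initial conditions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, sigma_r = 10.0, 49.0          # radar period [s], measurement std [m] (from the text)
F = np.array([[1.0, dt], [0.0, 1.0]])  # state: [altitude, vertical speed]
H = np.array([[1.0, 0.0]])             # radar measures altitude only
Q = np.diag([1.0, 1.0])                # process noise covariance (assumed)
R = np.array([[sigma_r**2]])

x_true = np.array([1000.0, 5.0])  # hypothetical initial altitude [m] and speed [m/s]
x_est = np.array([0.0, 0.0])
P = np.diag([1e4, 1e2])

for _ in range(200):
    # simulate the target and a noisy radar measurement
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, sigma_r, 1)
    # prediction step
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update step
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(x_est)  # tracks both the altitude and the (unmeasured) vertical speed
```

Note that, exactly as in the radar example, the filter state yields an estimate of the vertical speed even though only the altitude is measured, and the posterior altitude variance P[0, 0] ends up below the raw measurement variance.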

1.4.4 Models for diagnosis

The computation of the specific model best fitting a set of data collected on a process allows its general behavior to be compared with a class of behaviors established as a reference, revealing abnormal conditions like a sensor fault in an industrial process or a disease in a patient. The sulfobromophthalein (BSP) and the intravenous glucose tolerance (IVGT) tests are routinely used in medical practice as aids in the assessment of hepatobiliary and pancreatic diseases. In both cases the test starts with an intravenous injection of these substances and is followed by measurements of their plasmatic concentrations at specific intervals. Values beyond certain limits indicate a slow metabolism that could be associated with hepatitis or diabetic conditions.

Example – Intravenous glucose tolerance test (IVGT)
The control of blood sugar levels in the human body is carried out by the insulin secreted by the pancreas when the sugar level exceeds the physiological equilibrium value.
The rate of change of blood sugar levels after a glucose injection gives a reliable description of the efficiency of this regulation mechanism, as follows from the comparison of Figure 1.4.6, reporting the response of a normal individual, with Figure 1.4.7, regarding the response of a diabetic. The measurements performed on a patient, compared in Figure 1.4.8 with the standard response, allow one to diagnose the presence of abnormal conditions.

Figures 1.4.6 and 1.4.7 – Response of normal and diabetic patients to a glucose loading

Figure 1.4.8 – Standard response and abnormal measures obtained on a patient

1.4.5 Models for simulation

The rationale here is the substitution of real systems with their models to evaluate their response to assumed control policies (inputs). A substitution of this kind can be very rewarding from an economic point of view and can also allow performing operations that would otherwise be impossible or risky on real systems (e.g. demographic studies, the response of a national economy to changes in interest rates, pilot training, major nuclear reactor incidents, etc.).

Of course the usefulness of simulations depends on the accuracy of the model in reproducing the behavior of the actual system; the etymology of simulation (the Latin simulare = to pretend) seems to suggest the possible ambiguity of this substitution.

Example – Simulation of a sodium heat exchanger
PEC is an LMFBR (Liquid Metal Fast Breeder Reactor) with a thermal power of 120 MW, designed to test experimental fuel elements developing powers up to 3 MW in the thermal and neutron flux conditions that are met in large fast breeder nuclear reactors. The cooling of the core is performed by means of a double sodium primary loop and sodium–sodium heat exchangers, a secondary loop and sodium–air heat exchangers. The dynamical behavior of the reactor in emergency situations (e.g. failure of the pump in one of the primary loops) has been investigated by means of a large simulation package which includes models of every part of the plant. This model is, however, unsuitable for real-time simulation because of its size. A reduced-order model obtained with identification techniques has been developed for real-time simulations regarding both operator training and process control.

Figures 1.4.9 and 1.4.10 – Output temperatures of the PEC sodium heat exchangers

Figures 1.4.9 and 1.4.10 show the output temperatures of the primary and secondary sodium heat exchangers given by the model (dotted line) against the true values for variations of the inputs (primary and secondary sodium flows and input temperatures) of approximately 20%. The limited error given by this model is fully compatible with its planned use.


The unavoidable association of the concepts of model and approximation places modeling in a twilight zone that does not belong to pure science, due to the lack of uniqueness postulates, but must, in any case, rely on results and methodologies offered by the abstract science of mathematics.

Moreover, the limited capability of models to solve any specific problem leads to the opportunity of constructing application-oriented models; the rationales behind the different uses of models described previously thus also become rationales behind the construction of special-purpose models for the same system.

1.5.1 Approaches to model building

Keeping these directives in mind, two main approaches to model building can be devised. They are based on the two principal types of information about a general system, which relate to the extreme ends of the system's greyness spectrum (see Figure 1.5.1):

  • knowledge and insight about the system (white box component)

  • experimental data of system inputs and outputs (black box component).

The two main approaches are depicted in Table 1.5.1 and illustrated in Figure 1.5.1.

Figure 1.5.1 : The two sources of information for model construction

                 knowledge-based approach         data-based approach
                 (top-down modelling)             (bottom-up approach,
                                                   system identification)
does what?       encodes the (inner) structure    encodes the behaviour of the
                 of the system                    system (via experimental data)
problem type     –                                –

Table 1.5.1: The two main approaches to constructing a model
If we depict a model as a box containing the mathematical laws that link the inputs (causes) with the outputs (effects), the three main modelling approaches can be associated with the "colour" of the box.
White box modelling: The model is derived directly from first principles by taking into account the connections between the components of the system. Typical examples are found in mechanical and electrical systems, where the physical laws (F = ma, for instance) can be used to predict the effects given the causes. Rather than white, the box should be termed "transparent", in the sense that we know the internal structure of the system.
Grey box modelling: Sometimes the model obtained by invoking first principles is incomplete because the value of some parameter is missing. For instance, a planet is subject to the law of gravitation but its mass is unknown. In this case, it is necessary to collect experimental data and proceed to a tuning of the unknown parameters until the outputs predicted by the model match the observed data. The internal structure of the box is only partially known (there are grey zones).
Black box modelling: When either the internal structure of the system is unknown or no first principles are available, the only chance is to collect data and use them to guess the links between inputs and outputs. For instance, this is a common situation in economics and physiology. However, black box modelling is also useful for dealing with very complex systems where the white box approach would be time-consuming and expensive (an example: modelling the dynamics of an internal combustion engine in order to develop the idle-speed controller).
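The grey box tuning step described above can be illustrated with a toy example: the structure F = ma is assumed known from first principles, while the unknown mass is estimated from simulated noisy measurements by least squares. All numerical values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Grey-box setting: the structure F = m*a is known, the mass m is not.
m_true = 2.5                                  # hypothetical unknown mass [kg]
a = np.linspace(0.5, 5.0, 50)                 # applied accelerations [m/s^2]
F = m_true * a + rng.normal(0.0, 0.1, a.size) # measured forces, with noise

# Least-squares estimate of the single unknown parameter:
#   m_hat = argmin_m sum (F - m*a)^2  =>  m_hat = sum(a*F) / sum(a*a)
m_hat = np.sum(a * F) / np.sum(a * a)
print(m_hat)   # close to the true mass 2.5
```

Once m_hat is tuned, the grey box is complete: the first-principles structure plus the data-fitted parameter together predict the outputs.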

1.5.2 Deducing models from other models: physical modeling

A model constructed purely deductively can generally be considered a unique solution of the modelling problem. Hence, the top-down approach can and should be used if there is enough a priori knowledge and theory to characterise the mathematical equations completely. In that context, the concept of model structure becomes important.

Physical modeling is based on the partition of a system into subsystems and on their description by means of known laws. The model is then obtained by joining such relations into a whole.
This approach requires a general knowledge of the structure or design of the considered system and of the "laws" describing the behaviors of its parts. Since physical laws are, in turn, models obtained from observations or from unfalsified speculations, physical modeling consists of constructing a whole model by joining together simple and already established models.
The advantages of physical modeling consist in the possibility of using a priori information on the system in the model construction procedure, and in the physical meaning of the model variables. This procedure cannot be applied, however, to systems whose internal structure is not known, whose behaviors are not described by established relations, or whose complexity would lead to unmanageable models in which most parameters would only marginally influence the aspects of the system behavior to be reproduced.
The deductive approach is preferred whenever possible (this is called the physicality principle) because it involves a one-to-one mapping (in fact, no new knowledge is generated), while bottom-up modelling involves a one-to-many mapping (knowledge has to be induced). The latter issue is very important in the field of machine learning.

1.5.3 Inductive Modelling and System Identification Methodology

If the (deductive) modelling route is impossible, one has to treat the system as a black (or grey) box and try to infer a model via data analysis of input and output signal recordings. This is the identification route, which is based on experimentation. The Concise Encyclopaedia of Modelling & Simulation describes identification as "the search for a definition of a model showing the behaviour of a process evolving under given conditions. It is usually realised by means of a set of measurements and by observing the behaviour of a system during a given time interval. The model obtained by identification must enable the evolution of the identified process to be predicted for a given horizon, when the evolution of the inputs and various external influences acting on this system during this time interval are known" [Atherton and Borne, 1992].

The bottom-up approach tries to infer the structural information from experimental data and to come up with a usable model under a given experimental frame. This approach may generate an infinite number of models satisfying the observed input-output relationships. So, there is no straightforward procedure for determining the structure of a model. A set of guiding principles and quantitative procedures for inferring structure parts from data sets is needed. More specifically, it is desirable to have additional assumptions or constraints to help selecting an ‘optimal’ model.
System observations may be obtained either actively or passively. In the former situation the modeller specifies interesting inputs, applies these to the system under study, and observes the outputs. In the latter situation, the modeller cannot specify the inputs and he/she has to accept whatever input-output data is available.
Identification consists in the selection of a specific model in a specified class on the sole basis of observations performed on the system to be described and of a selection criterion. The whole procedure makes reference neither to the physical nature of the modeled system nor to the a priori knowledge of the modeller; only the data speak.
The internal variables of identified models may lack any physical meaning, and the same can be true of the model parameters. Such models are, on the other hand, simple, accurate and able to extract only the relevant aspects from complex frameworks.
It is not difficult to recognize that physical laws have often been obtained as the result of identification procedures; the data collected by Galileo in his experiments on falling bodies, for instance, led him to recognize that a simple model could explain every experiment and could consequently be considered a law.
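As a minimal illustration of this data-driven procedure ("only the data speak"), the sketch below selects, by least squares, a first-order input-output model from simulated records, with no reference to the physics of the system that generated them. The model class, the true parameters and the noise level are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown to the modeller: the data were generated by
#   y(t) = a*y(t-1) + b*u(t-1) + noise
a_true, b_true = 0.8, 1.5
N = 500
u = rng.normal(0.0, 1.0, N)            # persistently exciting input
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t-1] + b_true * u[t-1] + rng.normal(0.0, 0.05)

# Identification: least-squares fit of [a, b] from the regression
# matrix of past outputs and inputs, using only the recorded data.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)   # close to [0.8, 1.5]
```

The estimated parameters reproduce the data-generating mechanism even though nothing about the "physical nature" of the system was used, which is exactly the situation described above.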


A model is a system similar to an original, sometimes called the real system, in the sense that, when solving a problem concerning the original system, the model can solve the problem to better advantage. A model is thus a workable substitute for a system. In the following, an introduction to the basic concepts of modelling and simulation is given.

Figure 1.6.1 presents modelling and simulation concepts as introduced by Zeigler [Vangheluwe, 2001].

1.6.1 Basic concepts

Object is some entity in the Real World. Such an object can exhibit widely varying behaviour depending on the context in which it is studied, as well as the aspects of its behaviour which are under study.
Base Model is a hypothetical, abstract representation of the object's properties, in particular its behaviour, which is valid in all possible contexts and describes all the object's facets. A base model is hypothetical as we will never, in practice, be able to construct/represent such a "total" model.
System is a well defined object in the Real World under specific conditions, only considering specific aspects of its structure and behaviour.

Figure 1.6.1 : Modelling and Simulation [Vangheluwe, 2001]

Experimental Frame When one studies a system in the real world, the experimental frame (EF) describes experimental conditions (context), aspects, . . . within which that system and corresponding models will be used. As such, the Experimental Frame reflects the objectives of the experimenter who performs experiments on a real system or, through simulation, on a model.
In its most basic form (see Figure 1.6.2), an Experimental Frame consists of two sets of variables, the Frame Input Variables and the Frame Output Variables, which match the system or model terminals.
On the input variable side, a generator describes the inputs or stimuli applied to the system or model during an experiment. A generator may, for example, specify a unit-step stimulus. On the output variable side, a transducer describes the transformations to be applied to the system (experiment) or model (simulation) outputs for meaningful interpretation. A transducer may, for example, specify the calculation of the extremal values of some of the output variables. Here, output refers to the physical system output as well as to synthetic outputs in the form of internal model states measured by an observer. In the case of a model, outputs may expose internal information such as state variables or parameters.

Figure 1.6.2: System versus Experimental Frame

Apart from input/output variables, a generator and a transducer, an Experimental Frame may also comprise an acceptor which compares features of the generator inputs with features of the transduced output, and determines whether the system (real or model) “fits” this Experimental Frame, and hence, the experimenter’s objectives.
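The generator/transducer/acceptor decomposition above can be made concrete with a toy sketch; the first-order model, the unit-step stimulus and the acceptance thresholds are purely illustrative.

```python
def generator(n):
    """Generator: the stimulus applied during the (virtual) experiment
    is here a unit step of n samples."""
    return [1.0] * n

def first_order_model(u, a=0.5):
    """A stand-in lumped model: a first-order discrete-time system."""
    y, out = 0.0, []
    for uk in u:
        y = a * y + (1 - a) * uk
        out.append(y)
    return out

def transducer(outputs):
    """Transducer: reduce the raw outputs to a meaningful feature,
    here the extremal (maximum) value."""
    return max(outputs)

def acceptor(feature, lo=0.9, hi=1.1):
    """Acceptor: decide whether the model 'fits' the frame, i.e. whether
    the transduced feature meets the experimenter's objectives."""
    return lo <= feature <= hi

u = generator(50)
feature = transducer(first_order_model(u))
print(acceptor(feature))   # the step response settles near 1, so it fits
```

The same frame (generator, transducer, acceptor) could be applied unchanged to a real experiment or to any other candidate model, which is precisely what makes the Experimental Frame a reusable statement of the experimenter's objectives.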
(Lumped) Model gives an accurate description of a system within the context of a given Experimental Frame. The term “accurate description” needs to be defined precisely. Usually, certain properties of the system’s structure and/or behaviour must be reflected by the model within a certain range of accuracy. Note: a lumped model is not necessarily a lumped parameter model. Due to the diverse applications of modelling and simulation, terminology overlap is very common.
Experimentation is the physical act of carrying out an experiment. An experiment may interfere with system operation (influence its input and parameters) or it may not. As such, the experimentation environment may be seen as a system in its own right (which may in turn be modelled by a lumped model). Also, experimentation involves observation. Observation yields measurements.
Simulation of a lumped model described in a certain formalism (such as Petri Nets, Differential Algebraic Equations (DAE) or Bond Graphs) produces simulation results: the dynamic input/output behaviour. Simulation, which mimics the real-world experiment, can be seen as virtual experimentation, allowing one to answer questions about (the behaviour of) a system.
Crucial to the System–Experiment/Model–Virtual Experiment scheme is that there is a homomorphic relation between model and system: building a model of a real system and subsequently simulating its behaviour should yield the same results as performing a real experiment followed by observation and codifying the experimental results (see Figure 1.6.3).

Figure 1.6.3: Modelling – Simulation Morphism

A simulation model is a tool for achieving a goal (design, analysis, control, optimisation, . . . ). A fundamental prerequisite is therefore some assurance that inferences drawn from modelling and simulation (tools) can be accepted with confidence. The establishment of this confidence is associated with two distinct activities; namely, verification and validation.
Verification is the process of checking the consistency of a simulation program with respect to the lumped model it is derived from. More explicitly, verification is concerned with the correctness of the transformation from some intermediate abstract representation (the conceptual model) to the program code (the simulation model) ensuring that the program code faithfully reflects the behaviour that is implicit in the specification of the conceptual model.
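A minimal verification check can be sketched as follows, under the assumption that the conceptual model is the ODE dx/dt = -x with x(0) = 1, whose exact solution is e^(-t): the simulation program is consistent with the model if its result converges to the analytical solution as the step size shrinks.

```python
import math

def euler_sim(dt, t_end):
    """Forward Euler simulation program for the model dx/dt = -x, x(0) = 1."""
    x = 1.0
    for _ in range(round(t_end / dt)):
        x += -x * dt
    return x

# Verification: the error against the model's exact solution must shrink
# (roughly linearly, for forward Euler) as dt decreases.
for dt in (0.1, 0.01, 0.001):
    err = abs(euler_sim(dt, 1.0) - math.exp(-1.0))
    print(f"dt={dt}: error={err:.6f}")
```

Note that such a convergence check verifies the program against the lumped model only; it says nothing about whether the model is valid with respect to the real system.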
Validation is the process of comparing experiment measurements with simulation results within the context of a certain Experimental Frame.
When the comparison shows differences, the formal model that was built may not correspond to the real system. A large number of matching measurements and simulation results, though increasing confidence, does not however prove the validity of the model. For this reason, Popper introduced the concept of falsification: the enterprise of trying to falsify or disprove a model. Various kinds of validation can be identified, e.g., conceptual model validation, structural validation, and behavioural validation.

  • Conceptual validation is the evaluation of a conceptual model with respect to the system, where the objective is primarily to evaluate the realism of the conceptual model with respect to the goals of the study.

  • Structural validation is the evaluation of the structure of a simulation model with respect to the perceived structure of the system.

  • Behavioural validation is the evaluation of the behaviour of the simulation model with respect to the observed behaviour of the system.

An overview of verification and validation activities is shown in Figure 1.6.4.

Figure 1.6.4: Verification and validation activities

It is noted that the correspondence in generated behaviour between a system and a model will only hold within the limited context of the Experimental Frame. Consequently, when using models to exchange information, a model must always be matched with an Experimental Frame before use. Conversely, a model should never be developed without simultaneously developing its Experimental Frame.

1.6.2 The modelling and simulation process

To understand any enterprise, it is necessary to analyse the process: which activities are performed, what entities are operated on, and what the causal relationships (determining activity order and concurrency) are. The simulation activity is part of the larger model-based systems analysis enterprise. A rudimentary process model for these activities is depicted in Figure 1.6.5.

By means of a simple mass-spring experiment example (see Figure 1.6.6), the process will be explained. In this example, a mass sliding without friction over a horizontal surface is connected to a wall via a spring. The mass is pulled away from the rest position and let go.

A number of Information Sources (either explicit in the form of data/model/knowledge bases or implicit in the user’s mind) are used during the process:

  1. A Priori Knowledge: in deductive modelling, one starts from general principles –such as mass, energy, momentum conservation laws and constraints– and deduces specific information. Deduction is predominantly used during system design.

In the example, the a priori knowledge consists of Newton’s second law of motion, as well as our knowledge about the behaviour of an ideal spring.

Figure 1.6.5: Model-based systems analysis

Figure 1.6.6: Mass-Spring example

  2. Goals and Intentions: the level of abstraction, formalisms used, methods employed, . . . are all determined by the type of questions we want to answer.

In the example, possible questions are: “what is a suitable model for the behaviour of a spring for which we have position measurements?”, “what is the spring constant?”, “given a suitable model and initial conditions, predict the spring’s behaviour”, “how to build an optimal spring given performance criteria?”, . . .

  3. Measurement data: in inductive modelling, we start from data and try to extract structure from it. This structure/model can subsequently be used in a deductive fashion. Such iterative progression is typical in systems analysis.

Figure 1.6.7 plots the noisy measured position of the example’s mass as a function of time.

Figure 1.6.7: Measurement data

The process starts by identifying an Experimental Frame. As mentioned above, the frame represents the experimental conditions under which the modeller wants to investigate the system. As such, it reflects the modeller’s goals and questions. In its most general form, it consists of a generator describing possible inputs to the system, a transducer describing the output processing (e.g., calculating performance measures by integrating over the output), and an acceptor describing the conditions (logical expressions) under which the system (be it real or modelled) matches the frame.
In the example, the experimental frame might specify that the position deviation of the mass from the rest position may never be larger than the rest length of the spring. Environmental factors such as room temperature and humidity could also be specified, if relevant.
Based on a frame, a class of matching models can be identified. Through structure characterization, the appropriate model structure is selected based on a priori knowledge and measurement data.
In the example, a feature of an ideal spring (connected to a frictionless mass) is that the position amplitude stays constant. In a non-ideal spring, or in the presence of friction, the amplitude decreases with time. Since the amplitude in the measured data stays constant, we conclude that this must be an ideal spring.
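This structure characterisation step can be mimicked in code: compare successive peak amplitudes of the position signal and decide between the ideal and the damped spring model. The synthetic data, the simple peak detector, and the 5% tolerance are all assumptions made for the sketch.

```python
import math

def peak_amplitudes(signal):
    """Absolute values of the interior local maxima of a sampled signal."""
    return [abs(signal[i]) for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1]]

def looks_ideal(signal, tolerance=0.05):
    """Ideal spring: the peak amplitude stays (nearly) constant over time."""
    peaks = peak_amplitudes(signal)
    return len(peaks) >= 2 and (peaks[0] - peaks[-1]) / peaks[0] < tolerance

ts = [0.01 * i for i in range(3000)]                           # 30 s of samples
undamped = [math.cos(2.0 * t) for t in ts]                     # ideal spring
damped = [math.exp(-0.2 * t) * math.cos(2.0 * t) for t in ts]  # with friction
print(looks_ideal(undamped), looks_ideal(damped))  # True False
```

With noisy measurements as in Figure 1.6.7, a real study would first smooth the signal or fit envelopes rather than compare raw peaks.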
Subsequently, during model calibration, parameter estimation yields optimal parameter values for reproducing a set of measurement data.
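For the mass-spring example, this calibration step can be sketched as follows: with a unit mass and x(t) = A cos(ωt), the spring constant follows from k = mω². The brute-force grid search and the noise-free synthetic "measurements" are assumptions for illustration, not a recommended estimator.

```python
import math

def estimate_k(ts, xs, m=1.0, amplitude=1.0):
    """Pick the omega on a grid minimising the sum of squared residuals."""
    best_omega, best_sse = None, float("inf")
    for i in range(100, 300):                 # candidate omegas in [1.0, 3.0)
        omega = i / 100.0
        sse = sum((x - amplitude * math.cos(omega * t)) ** 2
                  for t, x in zip(ts, xs))
        if sse < best_sse:
            best_omega, best_sse = omega, sse
    return m * best_omega ** 2                # k = m * omega^2

ts = [0.05 * i for i in range(200)]           # 10 s of samples
xs = [math.cos(2.0 * t) for t in ts]          # 'measurements' with omega = 2
print(estimate_k(ts, xs))                     # 4.0
```

In practice one would use a proper least-squares routine that also estimates the amplitude and phase, but the principle is the same: parameter values are chosen to best reproduce the measurement data.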
The simulation model
From the model, a simulator is built. Due to the conflicting aims of modelling –a meaningful model representation for understanding and re-use– and simulation –accuracy and speed–, a large number of steps may have to be traversed to bridge the gap.
Using the identified model and parameters, simulation allows one to mimic the system behaviour (virtual experimentation), as shown in Figure 1.6.8. The simulator thus obtained can be embedded in, for example, an optimizer, a trainer, or a tutoring tool.
The question remains whether the model has predictive validity: is it capable not only of reproducing the data which were used to choose the model and to identify its parameters, but also of predicting new behaviour? With every use of the simulator, this validity question must be asked.

Figure 1.6.8: Fitted simulation results

1.6.3 Validity of a model: a system – model relation

The validity of a model is about how well a model represents the original system it stands for. In the first instance, validity can be measured by the extent of agreement between the original system and the model. The notion of validity is extended by Zeigler [1976], who distinguishes different degrees of validity:

  1. Replicative Validity concerns the ability of the Lumped Model to replicate the input/output data already acquired from the Real System.

  2. Predictive Validity concerns the ability to identify the state a model should be set into so that it predicts the response of the Real System to any input, not only the inputs used to identify the model.

A model is predictively valid if it can match the data of the original system before these data are acquired from the original system. Predictive validity is stronger than replicative validity.

  3. Structural Validity concerns the structural relationship between the Real System and the Lumped Model.

A Lumped Model is structurally valid if it is not only predictively valid, but also reflects the ways in which the original system operates to produce its behaviour.

A crucial question is whether a model has predictive validity: is it capable not only of reproducing the data which were used to choose the model and its parameters, but also of predicting new behaviour?
The predictive validity of a model is usually verified by comparing new experimental data sets to those produced by simulation, an activity known as model validation. Model validation has received considerable attention in the past few decades, and problems ranging from general validation methodologies to concrete testing technologies have been extensively studied. The comparison of the experimental and simulation data is accomplished either subjectively, such as through graphical comparison or a Turing test, or statistically, such as through analysis of the mean and variance of the residual signal using standard statistics, multivariate analysis of variance, regression analysis, spectral analysis, autoregressive analysis, autocorrelation function testing, error analysis, and some non-parametric methods.
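A statistical comparison of this kind might be sketched as below, using the mean and variance of the residual signal. The acceptance thresholds are arbitrary assumptions; a real study would replace them with proper hypothesis tests.

```python
import math

def residual_stats(measured, simulated):
    """Mean and (population) variance of the residual signal."""
    r = [m - s for m, s in zip(measured, simulated)]
    mean = sum(r) / len(r)
    var = sum((x - mean) ** 2 for x in r) / len(r)
    return mean, var

def behaviourally_valid(measured, simulated, mean_tol=0.05, var_tol=0.05):
    """Accept the model if the residual is small in mean and variance."""
    mean, var = residual_stats(measured, simulated)
    return abs(mean) < mean_tol and var < var_tol

sim = [math.cos(0.1 * i) for i in range(100)]               # simulated output
meas = [x + 0.01 * ((-1) ** i) for i, x in enumerate(sim)]  # noisy 'data'
print(behaviourally_valid(meas, sim))  # True
```

A large residual mean signals a systematic error (e.g., a wrong parameter), while a large residual variance signals unmodelled dynamics or measurement noise beyond what the frame allows.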
As indicated by the feedback arrows in Figure 1.6.5, a model has to be corrected once proven invalid.


1.7 FUENTES

  • Guidorzi, R., Multivariable System Identification: From Observations to Models. Bononia University Press, 2003.

  • Gaines, B. R., “General System Identification—Fundamentals and Results”. In Klir, G. J. (Ed.), Applied General Systems Research, pp. 91–104. New York: Plenum Press, 1978.

  • Van Welden, D. F., Induction of Predictive Models for Dynamical Systems Via Datamining. Ph.D. dissertation, Toegepaste Wiskunde en Biometrie, Universiteit Gent, Belgium, 1999.

  • Basile, G. and Marro, G., Controlled and Conditioned Invariants in Linear Systems Theory. Department of Electronics, Systems and Computer Science, University of Bologna, Italy, 2002.

  • Vangheluwe, H. L., Multi-Formalism Modelling and Simulation. Ph.D. thesis, Ghent University, Ghent, Belgium, 2000.

  • Atherton, D. P. and Borne, P. (Eds.), Concise Encyclopaedia of Modelling & Simulation. Pergamon Press, 1992.

  • Klir, G. J., An Approach to General System Theory. Van Nostrand Reinhold, 1969.

  • Popper, K. R., Conjectures and Refutations: The Growth of Scientific Knowledge. London: Routledge & Kegan Paul, 1963.

  • Zeigler, B. P., Theory of Modelling and Simulation. John Wiley & Sons, 1976.
