
Bibliography & Resources

Formal validation of critical industrial systems

Getting started

Definitions



Formal method (computer science):

“In computer science, formal methods are techniques for reasoning rigorously, using mathematical logic, about computer programs or electronic hardware, in order to demonstrate their validity with respect to a given specification. These methods make it possible to obtain very strong assurance of the absence of bugs in software, that is, to reach high evaluation assurance levels. They are based on program semantics, i.e. on formal mathematical descriptions of the meaning of a program as given by its source code (or sometimes its object code). However, they are generally costly in resources (human and material) and are currently reserved for the most critical software. Improving them and widening their practical scope of application is the motivation of much scientific research in computer science.” < http://fr.wikipedia.org/wiki/M%C3%A9thode_formelle_%28informatique%29 >.
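
To make the definition concrete, here is a minimal Python sketch (not taken from the cited sources; the two-way traffic-light controller and its transition relation are invented for illustration) of the core idea behind model checking: exhaustively exploring the reachable states of a small transition system and checking a safety property in every state.

    # States are (north_south, east_west) light phases; the specification says
    # the two directions must never be green at the same time.
    TRANSITIONS = {
        ("green", "red"):  [("yellow", "red")],
        ("yellow", "red"): [("red", "red")],
        ("red", "red"):    [("red", "green"), ("green", "red")],
        ("red", "green"):  [("red", "yellow")],
        ("red", "yellow"): [("red", "red")],
    }

    def check(init):
        seen, todo = set(), [init]
        while todo:                         # exhaustive search of reachable states
            state = todo.pop()
            if state in seen:
                continue
            seen.add(state)
            assert not (state[0] == "green" and state[1] == "green"), state
            todo.extend(TRANSITIONS.get(state, []))
        return len(seen)

    print(check(("red", "red")), "reachable states, property holds")   # 5 reachable states

Real model checkers work on far richer formalisms and property languages, but the exhaustive-exploration principle is the same.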


Hubert Garavel, Isabelle Bellin. La fiabilité des systèmes devient un défi majeur.

“In 20 years, computer systems (computers, software, networks) have pervaded our everyday lives and are now at the heart of increasingly vital applications. Their technical complexity keeps growing, while they are produced under ever tighter cost and schedule constraints. What are the consequences in terms of quality, safety and security? Avoiding computer failures is a major challenge for the future of industry, building on the scientific progress of recent decades.” < http://www.inria.fr/centre/grenoble/actualites/la-fiabilite-des-systemes-devient-un-defi-majeur >.




Books



Reference works
Jean-Michel Muller, Nicolas Brisebarre, Florent de Dinechin, Claude-Pierre Jeannerod, Vincent Lefèvre, Guillaume Melquiond, Nathalie Revol. Handbook of Floating-Point Arithmetic. Birkhäuser, 2010, 596 p. ISBN-13: 978-0817647049. [Note: see in particular Chapter 13, which deals with certification.]

“Floating-point arithmetic is by far the most widely used way of implementing real-number arithmetic on modern computers. Although the basic principles of floating-point arithmetic can be explained in a short amount of time, making such an arithmetic reliable and portable, yet fast, is a very difficult task. From the 1960s to the early 1980s, many different arithmetics were developed, but their implementation varied widely from one machine to another, making it difficult for nonexperts to design, learn, and use the required algorithms. As a result, floating-point arithmetic is far from being exploited to its full potential. This handbook aims to provide a complete overview of modern floating-point arithmetic, including a detailed treatment of the newly revised (IEEE 754-2008) standard for floating-point arithmetic. Presented throughout are algorithms for implementing floating-point arithmetic as well as algorithms that use floating-point arithmetic. So that the techniques presented can be put directly into practice in actual coding or design, they are illustrated, whenever possible, by a corresponding program.” Disponible au Prêt-entre-Bibliothèque
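
As a small illustration of why such care is needed, the following Python lines (illustrative only, not from the handbook) show that IEEE 754 binary64 arithmetic, which CPython uses for its float type, cannot represent 0.1 exactly and is not associative:

    import math

    a = 0.1 + (0.2 + 0.3)
    b = (0.1 + 0.2) + 0.3
    print(a == b)            # False: the two evaluation orders round differently
    print(b)                 # 0.6000000000000001
    print(f"{0.1:.20f}")     # 0.10000000000000000555 : 0.1 has no exact binary64 encoding
    print(math.ulp(1.0))     # 2.220446049250313e-16, the spacing of doubles just above 1.0 (Python 3.9+)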


Insup Lee, Joseph Y-T. Leung, Sang H. Son. Handbook of Real-Time and Embedded Systems. Chapman and Hall, 2007. 800 p. ISBN-13: 978-1584886785.

“Real-time and embedded systems are essential to our lives, from controlling car engines and regulating traffic lights to monitoring plane takeoffs and landings to providing up-to-the-minute stock quotes. Bringing together researchers from both academia and industry, the Handbook of Real-Time and Embedded Systems provides comprehensive coverage of the most advanced and timely topics in the field. The book focuses on several major areas of real-time and embedded systems. It examines real-time scheduling and resource management issues and explores the programming languages, paradigms, operating systems, and middleware for these systems. The handbook also presents challenges encountered in wireless sensor networks and offers ways to solve these problems. It addresses key matters associated with real-time data services and reviews the formalisms, methods, and tools used in real-time and embedded systems. In addition, the book considers how these systems are applied in various fields, including adaptive cruise control in the automobile industry.”


Halbwachs, Nicolas. Synchronous Programming of Reactive Systems. Kluwer Academic, The Springer International Series in Engineering and Computer Science, vol. 215, 1993, XIII, 174 p. ISBN 978-0-7923-9311-5.

“This book presents a synthesis of recent works concerning reactive system design. The term `reactive system' has been introduced in order to avoid ambiguities often involved with the term `real-time system' which, while being best-known and suggestive, has been assigned so many different meanings that it is almost inevitably misunderstood. Industrial Process control system, transportation control and supervision systems, signal processing systems, etc. are examples of the systems we have in mind. Four programming languages are presented, which share the same underlying synchronous model: based on Robin Milner's pioneering works about synchronous process algebras, this model consists in considering that a program instantaneously reacts to events, or that the machine execution time is negligible with respect to the response delays of its environment. Using this abstract point of view, the time behavior of a system can be formalized in a very simple and elegant way.” Disponible au Prêt-entre-Bibliothèque
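
The following Python sketch is only a loose illustration of the synchronous abstraction described above (real synchronous languages such as Lustre or Esterel are compiled and formally analysed, not emulated like this): a reactive node is a function executed once per tick, producing its output "instantaneously" from the current input and the state carried over from the previous tick.

    def rising_edge():
        prev = False                      # state remembered from the previous tick
        def tick(current: bool) -> bool:
            nonlocal prev
            out = current and not prev    # true exactly when the input rises
            prev = current
            return out
        return tick

    edge = rising_edge()
    inputs = [False, True, True, False, True]
    print([edge(x) for x in inputs])      # [False, True, False, False, True]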


Formal methods and verification tools:
Stephan Merz, Nicolas Navet. Modeling and Verification of Real-Time Systems: Formalisms and Software Tools. London: J. Wiley, 2008, 393 p., ISBN 978-1-84821-013-4.

“This title is devoted to presenting some of the most important concepts and techniques for describing real-time systems and analyzing their behavior in order to enable the designer to achieve guarantees of temporal correctness. Topics addressed include mathematical models of real-time systems and associated formal verification techniques such as model checking, probabilistic modeling and verification, programming and description languages, and validation approaches based on testing. With contributions from authors who are experts in their respective fields, this will provide the reader with the state of the art in formal verification of real-time systems and an overview of available software tools.” Disponible au Prêt-entre-Bibliothèque

The specialists' corner

Books



Stefania Gnesi, Tiziana Margaria. Formal Methods for Industrial Critical Systems: A Survey of Applications. Wiley-Blackwell, March 2013, 292 p., ISBN-13: 978-0470876183.

“Today, formal methods are widely recognized as an essential step in the design process of industrial safety-critical systems. In its more general definition, the term formal methods encompasses all notations having a precise mathematical semantics, together with their associated analysis methods, that allow description and reasoning about the behavior of a system in a formal manner. Growing out of more than a decade of award-winning collaborative work within the European Research Consortium for Informatics and Mathematics, this book presents a number of mainstream formal methods currently used for designing industrial critical systems, with a focus on model checking. The purpose is threefold: to reduce the effort required to learn formal methods, which has been a major drawback for their industrial dissemination; to help designers to adopt the formal methods which are most appropriate for their systems; and to offer a panel of state-of-the-art techniques and tools for analyzing critical systems.”


Bruno Legeard, Fabrice Bouquet, Natacha Pickaert. Industrialiser le test fonctionnel: Pour maîtriser les risques métier et accroître l'efficacité du test - 2e édition, Dunod, 2011, 304 p. ISBN-13: 978-2100566563.

“Software testing has made great progress in recent years, in terms of methods, processes and available tools. This book shows how the design, construction and execution of tests can rely on systematic, 'industrializable' processes. The first part describes the qualification cycle and the associated industrialization needs; the second part describes the techniques to apply to arrive at a systematic approach, presenting a tool-supported process and the key elements of a successful deployment. The third part presents three case studies (a software package, a web application, an embedded system). This second edition reports on the recent integration of business processes into the generation of the test repository.”


Marco Bozzano, Adolfo Villafiorita. Design and Safety Assessment of Critical Systems. Auerbach Publishers, 2011, 279 p., ISBN-13: 978-1-439-80331-8.

“Safety-critical systems, by definition those systems whose failure can cause catastrophic results for people, the environment, and the economy, are becoming increasingly complex both in their functionality and their interactions with the environment. Unfortunately, safety assessments are still largely done manually, a time-consuming and error-prone process. The growing complexity of these systems requires an increase in the skill and efficacy of safety engineers and encourages the adoption of formal and standardized techniques. An introduction to the area of design and verification of safety-critical systems, Design and Safety Assessment of Critical Systems focuses on safety assessment using formal methods. Beginning with an introduction to the fundamental concepts of safety and reliability, it illustrates the pivotal issues of design, development, and safety assessment of critical systems. The core of the book covers traditional notations, techniques, and procedures, including Fault Tree Analysis, FMECA, HAZOP, and Event Tree Analysis, and explains in detail how formal methods can be used to realize such procedures. It looks at the development process of safety-critical systems, and highlights influential management and organizational aspects. […] “
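
To give a flavour of the quantitative side of such procedures, here is a toy fault-tree calculation in Python (illustrative only, not from the book; the basic events and their probabilities are invented and assumed statistically independent):

    # Top event: (sensor_fails AND backup_fails) OR power_loss.
    def p_and(*ps):                       # all independent basic events occur
        out = 1.0
        for p in ps:
            out *= p
        return out

    def p_or(*ps):                        # at least one independent basic event occurs
        none = 1.0
        for p in ps:
            none *= (1.0 - p)
        return 1.0 - none

    p_top = p_or(p_and(1e-3, 1e-2), 1e-5)
    print(f"P(top event) = {p_top:.3e}")  # about 2.000e-05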



Available via inter-library loan
Boulanger, Jean-Louis. Techniques industrielles de modélisation formelle pour le transport. Paris : Hermès, Lavoisier, 2011, 351 p. ISBN 978-2-7462-3230-3.

“Formal techniques produce specification and/or design models and are used for static code analysis, for demonstrating that properties are satisfied, for the sound handling of floating-point computations, and so on. From the very first use of software inside an equipment, RATP applied formal techniques in order to demonstrate that the software meets its safety requirements. This book presents concrete examples of the use of formal techniques (simulation, model checking, proof) and formal methods (the B method, SCADE) on rail transport projects, both metro and main-line (conventional lines, TGV, ERTMS, freight).”



Available via inter-library loan
José Bacelar Almeida, Maria João Frade, Jorge Sousa Pinto [et al.]. Rigorous Software Development: An Introduction to Program Verification. London: Springer, 2011, 263 p. ISBN 978-0-85729-017-5.

« The use of mathematical methods in the development of software is essential when reliable systems are sought; in particular they are now strongly recommended by the official norms adopted in the production of critical software. Program Verification is the area of computer science that studies mathematical methods for checking that a program conforms to its specification. This text is a self-contained introduction to program verification using logic-based methods, presented in the broader context of formal methods for software engineering. The idea of specifying the behaviour of individual software components by attaching contracts to them is now a widely followed approach in program development, which has given rise notably to the development of a number of behavioural interface specification languages and program verification tools. A foundation for the static verification of programs based on contract-annotated routines is laid out in the book. These can be independently verified, which provides a modular approach to the verification of software. The text assumes only basic knowledge of standard mathematical concepts that should be familiar to any computer science student. […].”
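
The contract idea can be sketched in Python as follows (illustrative only; the tools discussed in the book discharge such obligations by static proof, whereas this sketch merely checks them at run time with assertions):

    def binary_search(xs, key):
        # Precondition: xs is sorted in non-decreasing order.
        assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), "pre: xs sorted"
        lo, hi = 0, len(xs)
        while lo < hi:
            mid = (lo + hi) // 2
            if xs[mid] < key:
                lo = mid + 1
            else:
                hi = mid
        result = lo if lo < len(xs) and xs[lo] == key else -1
        # Postcondition: either a valid index holding key, or -1 and key is absent.
        assert (result == -1 and key not in xs) or xs[result] == key, "post"
        return result

    print(binary_search([1, 3, 5, 7], 5), binary_search([1, 3, 5, 7], 4))   # 2 -1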



Available via inter-library loan
van Lamsweerde, Axel. Requirements Engineering: From System Goals to UML Models to Software Specifications. John Wiley, 2009, 682 p. ISBN 978-0-470-01270-3.

“The book presents both the current state of the art in requirements engineering and a systematic method for engineering high-quality requirements, broken down into four parts. The first part introduces fundamental concepts and principles including the aim and scope of requirements engineering, the products and processes involved, requirements qualities to aim at and flaws to avoid, and the critical role of requirements engineering in system and software engineering. The second part of the book is devoted to system modeling in the specific context of engineering requirements. It presents a multi-view modeling framework that integrates complementary techniques for modeling the system-as-is and the system-to-be. The third part of the book reviews goal-based reasoning techniques to support the various steps of the KAOS method. The fourth part of the book goes beyond requirements engineering to discuss the mapping from goal-oriented requirements to software specifications and to software architecture. Online software will accompany the book and will add value to both classroom and self-study by enabling students to build models and specifications involved in the book s exercises and case studies, helping them to discover the latest RE technology solutions. Instructor resources such as slides, solutions, models and animations will be available from an accompanying website.



Available via inter-library loan

Zeller, Andreas. Why Programs Fail: A Guide to Systematic Debugging. 2nd edition. San Francisco, Calif.: Morgan Kaufmann, 2009, 400 p. ISBN 978-0-12-374515-6.

“This fully updated second edition includes 100+ pages of new material, including new chapters on Verifying Code, Predicting Errors, and Preventing Errors. Cutting-edge tools such as FindBUGS and AGITAR are explained, techniques from integrated environments like Jazz.net are highlighted, and all-new demos with ESC/Java and Spec#, Eclipse and Mozilla are included. This complete and pragmatic overview of debugging is authored by Andreas Zeller, the talented researcher who developed the GNU Data Display Debugger(DDD), a tool that over 250,000 professionals use to visualize the data structures of programs while they are running. Unlike other books on debugging, Zeller's text is product agnostic, appropriate for all programming languages and skill levels. […] explains best practices ranging from systematically tracking error reports, to observing symptoms, reproducing errors, and correcting defects. It covers a wide range of tools and techniques from hands-on observation to fully automated diagnoses, and also explores the author's innovative techniques for isolating minimal input to reproduce an error and for tracking cause and effect through a program. It even includes instructions on how to create automated debugging tools.”
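
One of the techniques mentioned above, isolating a minimal input that reproduces a failure, can be sketched as follows (a drastically simplified variant of the idea; Zeller's ddmin algorithm is considerably more refined, and the failing "program" here is just a made-up predicate):

    def simplify(failing_input, fails):
        """Greedily delete chunks of the input while the failure persists."""
        inp, chunk = failing_input, len(failing_input) // 2
        while chunk >= 1:
            i = 0
            while i < len(inp):
                candidate = inp[:i] + inp[i + chunk:]
                if fails(candidate):
                    inp = candidate       # keep the smaller input that still fails
                else:
                    i += chunk            # this chunk is needed, move past it
            chunk //= 2
        return inp

    fails = lambda s: "<x>" in s          # toy failure condition
    print(simplify("padding <x> more padding", fails))   # prints: <x>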



Available via inter-library loan
Mark Utting ; Bruno Legeard. Practical Model-Based Testing - A tools approach. San Francisco : Morgan Kaufmann, 2007, 433 p. ISBN 978-0-12-372501-1.

“This book gives a practical introduction to model-based testing, showing how to write models for testing purposes and how to use model-based testing tools to generate test suites. It is aimed at testers and software developers who wish to use model-based testing, rather than at tool-developers or academics. The book focuses on the mainstream practice of functional black-box testing and covers different styles of models, especially transition-based models (UML state machines) and pre/post models (UML/OCL specifications and B notation). The steps of applying model-based testing are demonstrated on examples and case studies from a variety of software domains, including embedded software and information systems.
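
A minimal sketch of the transition-based flavour of model-based testing follows (the turnstile model is hypothetical and not from the book): input sequences are derived from a small state-machine model so that every transition is exercised at least once; a real tool would also derive the expected outputs and verdicts from the model.

    MODEL = {                             # (state, input) -> next state
        ("locked", "coin"):   "unlocked",
        ("locked", "push"):   "locked",
        ("unlocked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
    }

    def transition_coverage_tests(initial):
        tests = []
        for (src, inp), _dst in MODEL.items():
            # Breadth-first search for the shortest input sequence reaching src.
            frontier, seen = [(initial, [])], {initial}
            while frontier:
                state, path = frontier.pop(0)
                if state == src:
                    tests.append(path + [inp])    # reach src, then fire the transition
                    break
                for (s, i), d in MODEL.items():
                    if s == state and d not in seen:
                        seen.add(d)
                        frontier.append((d, path + [i]))
        return tests

    for t in transition_coverage_tests("locked"):
        print(t)        # ['coin'], ['push'], ['coin', 'push'], ['coin', 'coin']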



Available via inter-library loan
Timothy T.R. Colburn, J.H. Fetzer, R.L. Rankin. Program Verification: Fundamental Issues in Computer Science. Springer; reprint of the original 1st ed. 1993, 471 p. ISBN-13: 978-9401047890.

“Among the most important problems confronting computer science is that of developing a paradigm appropriate to the discipline. Proponents of formal methods - such as John McCarthy, C.A.R. Hoare, and Edgar Dijkstra - have advanced the position that computing is a mathematical activity and that computer science should model itself after mathematics. Opponents of formal methods - by contrast, suggest that programming is the activity which is fundamental to computer science and that there are important differences that distinguish it from mathematics, which therefore cannot provide a suitable paradigm. Disagreement over the place of formal methods in computer science has recently arisen in the form of renewed interest in the nature and capacity of program verification as a method for establishing the reliability of software systems. A paper that appeared in Communications of the ACM entitled, `Program Verification: The Very Idea', by James H. Fetzer triggered an extended debate that has been discussed in several journals and that has endured for several years, engaging the interest of computer scientists (both theoretical and applied) and of other thinkers from a wide range of backgrounds who want to understand computer science as a domain of inquiry.” Disponible au Prêt-entre-Bibliothèque




Articles



Recent verification techniques:
Frederic Lang, Radu Mateescu. Partial Model Checking using Networks of Labelled Transition Systems and Boolean Equation Systems. Tools and Algorithms for the Construction and Analysis of Systems (2012).

“Partial model checking was proposed by Andersen in 1995 to verify a temporal logic formula compositionally on a composition of processes. It consists in incrementally incorporating into the formula the behavioural information taken from one process - an operation called quotienting - to obtain a new formula that can be verified on a smaller composition from which the incorporated process has been removed. Simplifications of the formula must be applied at each step, so as to maintain the formula at a tractable size. In this paper, we revisit partial model checking. First, we extend quotienting to the network of labelled transition systems model, which subsumes most parallel composition operators, including m among n synchronisation and parallel composition using synchronisation interfaces, available in the E-LOTOS standard. Second, we reformulate quotienting in terms of a simple synchronous product between a graph representation of the formula (called formula graph) and a process, thus enabling quotienting to be implemented efficiently and easily, by reusing existing tools dedicated to graph compositions. Third, we propose simplifications of the formula as a combination of bisimulations and reductions using Boolean equation systems applied directly to the formula graph, thus enabling formula simplifications also to be implemented easily and efficiently. Finally, we describe an implementation in the CADP (Construction and Analysis of Distributed Processes) toolbox and present some experimental results in which partial model checking uses hundreds of times less memory than on-the-fly model checking.”< http://hal.inria.fr/hal-00684471 >



Available in open access via HAL
Yoo, J, Jee, E, Cha, S. Formal Modeling and Verification of Safety-Critical Software. IEEE SOFTWARE, 2009, vol. 26, no 3, pp.42-49.

“Rigorous quality demonstration is important when developing safety-critical software such as a reactor protection system (RPS) for a nuclear power plant. Although using formal methods such as formal modeling and verification is strongly recommended, domain experts often reject formal methods for four reasons: there are too many candidate techniques, the notations appear complex, the tools often work only in isolation, and output is often too difficult for domain experts to understand. A formal-methods-based process that supports development, verification and validation, and safety analysis can help domain experts overcome these obstacles. Nuclear engineers can also use CASE tools to apply formal methods without having to know details of the underlying formalism. The authors spent more than seven years working with nuclear engineers in developing RPS software and applying formal methods. The engineers and regulatory personnel found the process effective and easy to apply with the integrated tool support.< DOI : 10.1109/MS.2009.67 > Accessible via l’abonnement Inria


Survey articles:
Patrick Cousot, Radhia Cousot, Laurent Mauborgne. Theories, solvers and static analysis by abstract interpretation. Journal of the ACM, December 2012, vol. 59, no 6, Article No. 3.

“The algebraic/model theoretic design of static analyzers uses abstract domains based on representations of properties and pre-calculated property transformers. It is very efficient. The logical/proof theoretic approach uses SMT solvers/theorem provers and computation of property transformers on-the-fly. It is very expressive. We propose to unify both approaches, so that they can be combined to reach the sweet spot best adapted to a specific application domain in the precision/cost spectrum. We first give a new formalization of the proof theoretic approach in the abstract interpretation framework, introducing a semantics based on multiple interpretations to deal with the soundness of such approaches. Then we describe how to combine them with any other abstract interpretation-based analysis using an iterated reduction to combine abstractions. The key observation is that the Nelson-Oppen procedure, which decides satisfiability in a combination of logical theories by exchanging equalities and disequalities, computes a reduced product (after the state is enhanced with some new “observations” corresponding to alien terms). […]” < http://dx.doi.org/10.1145/2395116.2395120 > Accessible via l’abonnement Inria
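
For readers unfamiliar with the algebraic approach the article starts from, here is a deliberately tiny interval-domain sketch in Python (illustrative only; the combination with SMT solvers described in the paper goes far beyond this):

    def join(a, b):                       # least upper bound of two intervals
        return (min(a[0], b[0]), max(a[1], b[1]))

    def add(a, b):                        # abstract transformer for '+'
        return (a[0] + b[0], a[1] + b[1])

    # Abstract run of:  x := input in [0, 10];  if ...: x := x + 1  else: x := x + 2
    x = (0, 10)
    x_then, x_else = add(x, (1, 1)), add(x, (2, 2))
    x_after = join(x_then, x_else)
    print(x_after)                        # (1, 12): enough to prove the assertion x <= 12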


Robert M. Hierons, Kirill Bogdanov, Jonathan P. Bowen, Rance Cleaveland, John Derrick, Jeremy Dick, Marian Gheorghe, Mark Harman, Kalpesh Kapoor, Paul Krause, Gerald Luttgen, Anthony J. H. Simons, Sergiy Vilkomir, Martin R. Woodward and Hussein Zedan. Using formal specifications to support testing. ACM Computing Surveys, February 2009, vol. 41, no 2, Article 9, 76 p.

“Formal methods and testing are two important approaches that assist in the development of high-quality software. While traditionally these approaches have been seen as rivals, in recent years a new consensus has developed in which they are seen as complementary. This article reviews the state of the art regarding ways in which the presence of a formal specification can be used to assist testing.”< DOI : 10.1145/1459352.1459354 > Accessible via l’abonnement Inria




Conference proceedings



Ireland, A, Grov, G, Llano, MT, Butler, M. Reasoned modelling critics: Turning failed proofs into modelling guidance. Science of Computer Programming, 2013, vol. 78, no 3, Special Issue, pp. 293-309.

“The activities of formal modelling and reasoning are closely related. But while the rigour of building formal models brings significant benefits, formal reasoning remains a major barrier to the wider acceptance of formalism within design. Here we propose reasoned modelling critics - an approach which aims to abstract away from the complexities of low-level proof obligations, and provide high-level modelling guidance to designers when proofs fail. Inspired by proof planning critics, the technique combines proof-failure analysis with modelling heuristics. Here, we present the details of our proposal, implement them in a prototype and outline future plans.”< DOI: 10.1016/j.scico.2011.03.006 > Accessible via l’abonnement Inria


Supratik Chakraborty, Madhavan Mukund. Automated Technology for Verification and Analysis. 10th International Symposium, ATVA 2012, Thiruvananthapuram, India, October 3-6, 2012 : Proceedings. Lecture Notes in Computer Science, vol. 7561, 2012, 436 p.

“This book constitutes the thoroughly refereed proceedings of the 10th International Symposium on Automated Technology for Verification and Analysis, ATVA 2012. The 25 regular papers, 3 invited papers and 4 tool papers presented were carefully selected from numerous submissions. Conference papers are organized in 9 technical sessions, covering the topics of automata theory, logics and proofs, model checking, software verification, synthesis, verification and parallelism, probabilistic verification, constraint solving and applications, and probabilistic systems.”


Mariëlle Stoelinga, Ralf Pinger. Formal Methods for Industrial Critical Systems. 17th International Workshop, FMICS 2012, Paris, France, August 27-28, 2012. Proceedings. Lecture Notes in Computer Science, 2012, vol. 7437. ISBN: 978-3-642-32468-0

“This book constitutes the proceedings of the 17th International Workshop on Formal Methods for Industrial Critical Systems, FMICS 2012, held in Paris, France, in August 2012. The 14 papers presented were carefully reviewed and selected from 37 submissions. The aim of the FMICS workshop series is to provide a forum for researchers who are interested in the development and application of formal methods in industry. It also strives to promote research and development for the improvement of formal methods and tools for industrial applications.”< http://link.springer.com/book/10.1007/978-3-642-32469-7/ > Accessible via l’abonnement Inria



Habilitation à diriger des recherches - HDR
Genet, Thomas. Reachability analysis of rewriting for software verification. Habilitation à diriger des recherches: Computer Science, Université de Rennes 1, 30 November 2009, 168 p. + 1 video.

“This work addresses the proof of safety properties of programs. Proving such properties generally amounts to showing that critical configurations are never reached during program execution. For these properties, we propose a semi-automatic verification technique that tries to combine the advantages of model checking, abstract interpretation and interactive proof. As in model checking, the technique automatically proves safety properties of finite systems, as well as of certain classes of infinite systems having a finite presentation. Outside these classes, and as in abstract interpretation, the technique approximates the infinite behaviour of a system by a sound over-approximation. Finally, when the approximations are too coarse, the definition of the approximations can be refined manually so as to complete the proof when possible. The approach is based on rewriting systems, one of the central tools of automated deduction, used here as a formalism for representing programs: a program configuration is represented by a term, and the transitions between configurations by rewrite rules. […]” < http://tel.archives-ouvertes.fr/docs/00/48/05/37/PDF/Final.pdf > Available in open access via TEL

Tools and applications

Calls for papers / Events (selection)



ICFEM 2013 - 15th International Conference on Formal Engineering Methods, Queenstown, New Zealand, Oct 29 - Nov 1, 2013.

“Since 1997, ICFEM has provided a forum for those interested in the application of formal engineering methods to computer systems. Formal methods for the development of computer systems have been extensively researched and studied. We now have good theoretical understandings of how to describe what programs do, how they do it, and why they work. A range of semantic theories, specification languages, design techniques, and verification methods and tools have been developed and applied to the construction of programs of moderate size that are used in critical applications. The goal of this conference is to bring together industrial, academic, and government experts, from a variety of user domains and software disciplines, to help advance the state of the art. Researchers, practitioners, tool developers and users, and technology transition experts are all welcome. We are interested in work that has been incorporated into real production systems, and in theoretical work that promises to bring practical, tangible as well as engineering benefit.” <https://www.cs.auckland.ac.nz/research/conferences/icfem2013/ >,


ATVA 2013 - 11th International Symposium on Automated Technology for Verification and Analysis, October 15 - 18, 2013, Hanoi, Vietnam.

“The purpose of ATVA is to promote research on theoretical and practical aspects of automated analysis, verification and synthesis by providing a forum for interaction between the regional and the international research communities and industry in the field. The previous events were held in Taiwan - 2003-5, Beijing - 2006, Tokyo - 2007, Seoul - 2008, Macao - 2009, Singapore - 2010, Taiwan - 2011, and Thiruvananthapuram - 2012.”



< http://www.uet.vnu.edu.vn/atva2013/ >.
RV 2013 – 4th International Conference on Runtime Verification, 24-27 September 2013, INRIA Rennes, France.

“Runtime verification is concerned with monitoring and analysis of software and hardware system executions. Runtime verification techniques are crucial for system correctness and reliability; they are significantly more powerful and versatile than conventional testing, and more practical than exhaustive formal verification. Runtime verification can be used prior to deployment, for verification and debugging purposes, and after deployment for ensuring reliability, safety and security, and for providing fault containment and recovery. Topics : specification languages and formalisms for traces, specification mining, program instrumentation, monitor construction techniques, logging, recording, and replay, fault detection, localization, recovery and repair, program steering and adaptation, metrics and statistical information gathering, combination of static and dynamic analyses, program execution visualization.“
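
The core idea can be sketched in a few lines of Python (illustrative only, not a real runtime-verification tool): an automaton-based monitor consumes an execution trace and reports the first event that violates a simple resource-usage property (acquire before use, no use after release).

    def monitor(trace):
        state = "idle"
        for i, event in enumerate(trace):
            if state == "idle" and event == "acquire":
                state = "held"
            elif state == "held" and event == "use":
                state = "held"
            elif state == "held" and event == "release":
                state = "idle"
            else:
                return f"violation at event {i}: {event!r} in state {state}"
        return "trace accepted"

    print(monitor(["acquire", "use", "release"]))    # trace accepted
    print(monitor(["acquire", "release", "use"]))    # violation at event 2: 'use' in state idle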



< http://rv2013.gforge.inria.fr/ >.

FMICS 2013 - 18th International Workshop on Formal Methods for Industrial Critical Systems, 23-24 September 2013, Madrid, Spain.

“The aim of the FMICS workshop series is to provide a forum for researchers who are interested in the development and application of formal methods in industry. In particular, FMICS brings together scientists and engineers that are active in the area of formal methods and interested in exchanging their experiences in the industrial usage of these methods. The FMICS workshop series also strives to promote research and development for the improvement of formal methods and tools for industrial applications.” < http://lvl.info.ucl.ac.be/Fmics2013/Fmics2013 >,




Research structures, teams and laboratories



ABSTRACTION

Inria Rocquencourt research team. “ABSTRACTION develops abstract-interpretation techniques for statically computing an over-approximation of the behaviours of the analysed software. Static analysis by abstract interpretation is sound (when it succeeds, the analysed software is actually proved correct) but incomplete (in some cases the proof search may fail because of undecidability issues and produce false alarms). By specialising the analysis for well-defined families of programs, false alarms can be eliminated. The objectives of ABSTRACTION are: to formalise semantic models of programs and of the properties to be proved; to develop suitable abstractions allowing computations that are both fast and precise, i.e. that effectively prove the desired properties; to implement and validate these techniques on industrial software; and to make the industrialisation of such techniques possible in the medium term.” < http://www.inria.fr/equipes/abstraction >.


ARIC - Arithmetic and Computing

“The overall objective of AriC is, through computer arithmetic, to improve computing at large, in terms of performance, efficiency, and reliability. Our method: working from the high-level specification of a computation to the lower-level details of its implementation; reconciling performance and numerical quality, both when building operators and when using existing operators; developing the mathematical and algorithmic foundations of computing. Our expertise: floating-point computing, elementary functions, numerical validation and formal proof, FPGA arithmetic, algorithms for Euclidean lattices and their applications, cryptography, exact linear algebra, computer algebra, number theory.” < http://www.ens-lyon.fr/LIP/AriC/ >.


CONVECS - Construction of Verified Concurrent Systems

“CONVECS is a research team common between Inria Grenoble - Rhône-Alpes and the LIG laboratory. Its research activities focus on the formal modeling and verification of asynchronous concurrent systems, which are instantiated in various domains (communication protocols, distributed algorithms, GALS, etc.). To this aim, CONVECS proposes new formal languages for specifying the behaviour and the properties of concurrent systems, and devises efficient verification algorithms and tools running on sequential and massively parallel machines.” < http://convecs.inria.fr/ >,


SYNCHRONE

“The present activities of the group are organized into four long-term research topics: Language Design, Formal Verification, Theory, Techniques and Tools, Embedded Software Implementation, Models for the Implementation and the Virtual Prototyping of Embedded Systems.” < http://www-verimag.imag.fr/Synchrone,30.html >,




European & international projects



ERCIM (European Research Consortium for Informatics and Mathematics) Working Group on Formal Methods for Industrial Critical Systems (FMICS).

“Formal methods have been advocated as a means of increasing the reliability of systems, especially those which are safety or business critical, but the industrial uptake of such methods has been slow. This is due to the perceived difficulty of the mathematical nature of these methods, the lack of tool support, and the lack of precedents where formal methods have been proven to be effective. It is even more difficult to develop automatic specification and verification tools due to limitations like state explosion, undecidability, etc. This working group will bring together researchers of the ERCIM consortium in order to promote the use of formal methods within industry.”



< http://fmics.inria.fr >.
Maurice ter Beek, Stefania Gnesi, Fabio Martinelli, Franco Mazzanti, Marinella Petrocchi. Formal Modelling and Verification in Service-Oriented Computing

“Formal methods and tools are a popular means of analysing the correctness properties of computer network protocols, such as safety, liveness and security. First the protocol under scrutiny is described in a formal language, which often results in a more precise definition of its function. Subsequently, the properties to be analysed are specified in a suitable logic. Finally, to decide whether or not the protocol fulfils certain properties, automatic tools are used to analyse it. The outcome either proves the protocol to be correct with respect to the relevant properties or shows there to be a problem.” < http://ercim-news.ercim.eu/en70/special/formal-modelling-and-verification-in-service-oriented-computing >,


Available online.


Tools, technologies and services



CADP :
“CADP (Construction and Analysis of Distributed Processes, formerly known as the "CAESAR/ALDEBARAN Development Package") is a popular toolbox for the design of asynchronous concurrent systems, such as communication protocols, distributed systems, asynchronous circuits, multiprocessor architectures, web services, etc. Since January 2012, CADP has been developed by the CONVECS team (formerly by the Inria VASY team). CADP is connected to various complementary tools. CADP is maintained, regularly improved, and used in many industrial projects.” < http://cadp.inria.fr/ >.

  • See also the CADP presentation page on Wikipedia: <http://en.wikipedia.org/wiki/Construction_and_Analysis_of_Distributed_Processes >.


Hubert Garavel, Frédéric Lang, Radu Mateescu, Wendelin Serwe. CADP 2011: A Toolbox for the Construction and Analysis of Distributed Processes. International Journal on Software Tools for Technology Transfer, 2012.

“CADP (Construction and Analysis of Distributed Processes) is a comprehensive software toolbox that implements the results of concurrency theory. Started in the mid 80s, CADP has been continuously developed by adding new tools and enhancing existing ones. Today, CADP benefits from a worldwide user community, both in academia and industry. This paper presents the latest release, CADP 2011, which is the result of a considerable development effort spanning the last five years. The paper first describes the theoretical principles and the modular architecture of CADP, which has inspired several other recent model checkers. The paper then reviews the main features of CADP 2011, including compilers for various formal specification languages, equivalence checkers, model checkers, compositional verification tools, performance evaluation tools, and parallel verification tools running on clusters and grids. Finally, the paper surveys some significant case studies.” < http://hal.inria.fr/hal-00715056 > Accessible en Open-Access via HAL



CADP: recent applications:
Gwen Salaün, Xavier Etchevers, Noël De Palma, Fabienne Boyer, Thierry Coupaye. Verification of a Self-configuration Protocol for Distributed Applications in the Cloud. 27th Symposium On Applied Computing - SAC 2012, pp. 1278-1283.

“Distributed applications in the cloud are composed of a set of virtual machines running a set of interconnected software components. In this context, the task of automatically configuring distributed applications is a very difficult issue. In this paper, we focus on such a self-configuration protocol, which is able to configure a whole distributed application without requiring any centralized server. The high degree of parallelism involved in this protocol makes its design complicated and error-prone. In order to check that this protocol works as expected, we specify it in LOTOS NT and verify it using the CADP toolbox. The use of these formal techniques and tools helped to detect a bug in the protocol, and served as a workbench to experiment with several possible communication models.”< http://hal.archives-ouvertes.fr/hal-00685394 > Accessible en Open-Access via HAL


Radu Mateescu, Wendelin Serwe. Model Checking and Performance Evaluation with CADP Illustrated on Shared-Memory Mutual Exclusion Protocols. Science of Computer Programming, 2012.

“Mutual exclusion protocols are an essential building block of concurrent shared-memory systems: indeed, such a protocol is required whenever a shared resource has to be protected against concurrent non-atomic accesses. Hence, many variants of mutual exclusion protocols exist, such as Peterson's or Dekker's well-known protocols. Although the functional correctness of these protocols has been studied extensively, relatively little attention has been paid to their non-functional aspects, such as their performance in the long run. In this paper, we report on experiments with the CADP toolbox for model checking and performance evaluation of mutual exclusion protocols using Interactive Markov Chains. Steady-state analysis provides an additional criterion for comparing protocols, which complements the verification of their functional properties. We also carefully re-examined the functional properties of these protocols, whose accurate formulation as temporal logic formulas in the action-based setting turns out to be quite involved.”< http://hal.inria.fr/hal-00671321 >



Available in open access via HAL
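
To give a flavour of what functional correctness means for such protocols, here is a small Python sketch (illustrative only, unrelated to the CADP models used in the paper) that enumerates every interleaving of a two-process Peterson protocol and checks mutual exclusion in each reachable state:

    from collections import deque

    def _set(t, i, v):
        l = list(t)
        l[i] = v
        return tuple(l)

    def successors(state):                                # one atomic step of either process
        pcs, flags, turn = state
        for i in (0, 1):
            pc, j = pcs[i], 1 - i
            if pc == 0:                                   # flag[i] := True
                yield (_set(pcs, i, 1), _set(flags, i, True), turn)
            elif pc == 1:                                 # turn := j (yield priority)
                yield (_set(pcs, i, 2), flags, j)
            elif pc == 2:                                 # busy-wait on the entry guard
                if not flags[j] or turn == i:
                    yield (_set(pcs, i, 3), flags, turn)  # enter the critical section
                else:
                    yield state                           # keep spinning
            elif pc == 3:                                 # exit: flag[i] := False
                yield (_set(pcs, i, 0), _set(flags, i, False), turn)

    init = ((0, 0), (False, False), 0)
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        pcs = s[0]
        # Safety: both processes are never in the critical section (pc == 3) together.
        assert not (pcs[0] == 3 and pcs[1] == 3), s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    print(len(seen), "reachable states explored; mutual exclusion holds")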
The Astrée Static Analyzer.

“Astrée is a static program analyzer aiming at proving the absence of Run Time Errors (RTE) in programs written in the C programming language. On personal computers, such errors, commonly found in programs, usually result in unpleasant error messages and the termination of the application, and sometimes in a system crash. In embedded applications, such errors may have graver consequences. Astrée analyzes structured C programs, with complex memory usages, but without dynamic memory allocation and recursion. This encompasses many embedded programs as found in earth transportation, nuclear energy, medical instrumentation, aeronautic, and aerospace applications, in particular synchronous control/command such as electric flight control [30], [31] or space vessels maneuvers [32].The main applications of Astrée appeared two years after starting the project. Since then, Astrée has achieved the following unprecedented results on the static analysis of synchronous, time-triggered, real-time, safety critical, embedded software written or automatically generated in the C programming language”. < http://www.astree.ens.fr/ >,




  • See also this article, referenced on HAL:

Julien Bertrane, Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, Xavier Rival. L'analyseur statique Astrée. < http://hal.inria.fr/hal-00748084 >
GAPPA :
Florent de Dinechin, Christoph Lauter, Guillaume Melquiond. Certifying the floating-point implementation of an elementary function using Gappa. IEEE Transactions on Computers, 2011, vol. 60, no 2, pp. 242-253.

“High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. This certification may require a time-consuming proof for each line of code, and it is usually broken by the smallest change to the code, e.g. for maintenance or optimization purpose. Certifying floating-point programs by hand is therefore very tedious and error-prone. The Gappa proof assistant is designed to make this task both easier and more secure, thanks to the following novel features. It automates the evaluation and propagation of rounding errors using interval arithmetic. Its input format is very close to the actual code to validate. It can be used incrementally to prove complex mathematical properties pertaining to the code. It generates a formal proof of the results, which can be checked independently by a lower-level proof assistant like Coq. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. […]” < http://hal.inria.fr/inria-00533968 > Accessible en Open-Access via HAL
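
The interval flavour of such reasoning can be sketched in Python as follows (illustrative only; these helpers are hypothetical and this is not Gappa's algorithm, which additionally tracks the rounding error of every operation and emits a machine-checkable proof):

    def add(a, b): return (a[0] + b[0], a[1] + b[1])
    def sub(a, b): return (a[0] - b[1], a[1] - b[0])
    def mul(a, b):
        p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(p), max(p))

    x = (1.0, 2.0)                        # hypothesis: x is known to lie in [1, 2]
    e = add(sub(mul(x, x), mul((2.0, 2.0), x)), (1.0, 1.0))   # enclosure of x*x - 2*x + 1
    print(e)                              # (-2.0, 3.0): sound but coarse, the exact range is [0, 1]

Naive interval evaluation suffers from the dependency problem visible here; Gappa narrows such enclosures with rewriting rules and user hints before bounding the floating-point error.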


AADL:
Erwan Jahier, Nicolas Halbwachs, P. Raymond. Synchronous Modeling and Validation of Priority Inheritance Schedulers. Fundamental Approaches to Software Engineering, March 2009.

Architecture Description Languages (ADLs) allow embedded systems to be described as assemblies of hardware and software components. It is attractive to use such a global modelling as a basis for early system analysis. However, in such descriptions, the applicative software is often abstracted away, and is supposed to be developed in some host programming language. This forbids to take the applicative software into account in such early validation. To overcome this limitation, a solution consists in translating the ADL description into an executable model, which can be simulated and validated together with the software. In a previous paper, we proposed such a translation of Aadl (Architecture Analysis & Design Language) specifications into an executable synchronous model. The present paper is a continuation of this work, and deals with expressing the behavior of complex scheduling policies managing shared resources. We provide a synchronous specification for two shared resource scheduling protocols: the well-known basic priority inheritance protocol (BIP), and the priority ceiling protocol (PCP). This results in an automated translation of Aadl models into a purely Boolean synchronous (Lustre) scheduler, that can be directly model-checked, possibly with the actual software.



< http://hal.archives-ouvertes.fr/hal-00384389/ > Accessible en Open-Access via HAL
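
The basic inheritance rule itself can be sketched in a few lines of Python (a toy fixed-point computation over invented task and lock names; the paper's actual contribution, the encoding of such schedulers as a synchronous Lustre program that can be model-checked, is not attempted here):

    # 'base' gives static priorities (higher number = higher priority), 'holder'
    # maps each lock to the task holding it, 'waits' maps a blocked task to the
    # lock it is waiting for.
    def effective_priorities(base, holder, waits):
        eff = dict(base)
        changed = True
        while changed:                    # propagate donations until a fixed point
            changed = False
            for task, lock in waits.items():
                h = holder[lock]
                if eff[task] > eff[h]:
                    eff[h] = eff[task]    # the holder inherits the waiter's priority
                    changed = True
        return eff

    base = {"low": 1, "mid": 2, "high": 3}
    holder = {"R": "low"}                 # the low-priority task holds resource R
    waits = {"high": "R"}                 # the high-priority task blocks on R
    print(effective_priorities(base, holder, waits))   # {'low': 3, 'mid': 2, 'high': 3}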


Companies / Start-ups



AdaCore

« Founded in 1994, AdaCore is the leading provider of commercial software solutions for Ada, a state-of-the-art programming language designed for large, long-lived applications where safety, security, and reliability are critical. AdaCore’s flagship product is the GNAT Pro development environment, which comes with expert on-line support and is available on more platforms than any other Ada technology.”



< http://www.adacore.com/ >.

ESTEREL Technologies

“Esterel Technologies is the leading provider of critical systems and software development solutions for the aerospace, defense, rail transportation, nuclear, and industrial domains. System and software engineers use Esterel SCADE® solutions to graphically design, verify, and automatically generate critical systems and software applications with high dependability requirements. SCADE solutions easily integrate, allowing for development optimization and increased communication among team members.” < http://www.esterel-technologies.com/ >,


SERMA INGENIERIE

“A subsidiary of GROUPE SERMA that designs and builds embedded electronic systems. Born from the merger of several engineering offices, SERMA INGENIERIE draws on more than 30 years of presence on the electronics engineering market. This experience, combined with a genuine willingness to adapt to and integrate the latest industrial strategies, allows SERMA INGENIERIE to respond effectively to the needs of major industrial customers, whatever their fields of activity.” < http://www.serma-ingenierie.com/ >.


Smartesting

“Smartesting is a leading independent software provider of an innovative solution that guarantees and certifies the alignment of information system applications with the business processes of the organization. The solution provides a certification approach that automates the production of required functional tests and helps reduce the risks of testing the most critical business applications. Smoothly integrated with mainstream test management solutions, the Smartesting solution accelerates the industrialization of current test practices and enables greater agility for the IT teams. […].”



< http://www.smartesting.com/ >.
SYSTEREL - Study & development, validation & evaluation of critical equipment

“An SME whose core business is the development of safe, secure critical systems. It operates across the equipment life cycle, providing: upstream studies before product design (operational safety analyses, system modelling, architecture definition, technology and methodology consulting); specification, development and evaluation of embedded software with strong real-time or safety constraints; and specific tools supporting the design, integration and validation of such systems.” < http://www.systerel.fr/ >.


UXP

“UXP designs, develops and integrates advanced technologies and solutions in the fields of automation, industrial computing, embedded systems and industrial electricity. UXP develops product solutions (software and hardware) for building open business solutions in industry, energy, embedded systems, construction and mountain facilities.

Originally intended for fast industrial automation and production control, UXP technologies now extend to all sectors of activity. They provide the speed and precision needed to obtain the quality and repeatability that guarantee the performance of business solutions.” < http://www.uxp.fr/ >.


Administration / Communities / Associations



CAP’TRONIC – Competitiveness and innovation of SMEs through electronics and embedded software.

“Its mission is to facilitate the innovation and competitiveness of SMEs through electronics. With twenty engineers spread across the country and more than four hundred members, in 2009 it helped more than 1,800 different SMEs from all sectors through awareness-raising via technical seminars (1,100 SMEs), consulting (494 SMEs), technical support contracts in collaboration with electronics competence centres (260 SMEs), and the follow-up of SMEs and their projects (364 SMEs).” < http://www.captronic.fr/ >.



Bibliography/resources compiled by the IST service, Inria Grenoble - Rhône-Alpes, April 2013. For any further information, contact services.ist-gra@inria.fr



