Work package number | 1 | Start date or starting event: | M1
Work package title | Requirements
Participant number |
Participant short name |
Person-months per participant |
Objectives
The purpose of WP1 is to define the requirements for the tools and methodologies to be developed by the technology work packages. Starting from an evaluation of current state-of-the-art and state-of-practice approaches to evolutionary development in the model-based engineering of embedded systems, requirements for tools and methodologies will be defined in order to provide: non-intrusive analysis and monitoring techniques to systematically collect information during system evolutions; decision support for the synthesis of system architectures; and model management and visualization, to automate the management of models and improve comprehension during system evolution. Safety and certification requirements will also be identified. WP1 is also responsible for defining the test methodologies to be applied during the validation phase and for the preliminary definition of candidate validation cases and scenarios.
Description of work (possibly broken down into tasks) and role of partners
T1.1 State of the art
Lead: IKER
Contributors: Atego, IKER, NXP-A, NXP-D, UEF, UoO
This Task will accomplish the following activities:
Evaluation of existing commercial tools and European patents
Evaluation of needs regarding:
- Learning from previous versions of the product
- Hand-over of models to new engineering teams
- Supporting product management in steering the product requirements
Collection of relevant and significant industry control use cases
T1.2 Requirement identification
Lead: AVL
Contributors: Atego, AVL, BHL, CAU, CEA, CISC, CRF, FHG-HHI, IKER, ISYS, LDZ, NXP-D, RTU, UEF, UES, UoO
This Task will accomplish the following activities:
Identification of the monitoring and analysis requirements for DECISIVE applications
Identification of the requirements for the definition of interfaces between DECISIVE models (high-level and analysis models, computation models, and platform-specific execution models)
Identification of the requirements for human-centric tools able to provide a better understanding of the systems
Identification of the requirements for a DECISIVE Human Interface Guideline that supports the definition of unambiguous system requirements
Requirements covering the model development perspectives of model editing, simulation, and analysis (with respect to model and result representation)
Identification of the requirements for indices able to support product management during the decision phase of evolutionary systems
Identification of requirements related to safety and certification features
T1.3 Test methodology definition
Lead: Philips
Contributors: Atego, CRF, IKER, MU, NXP-A, NXP-D, PHILIPS, UES
This Task will accomplish the following activities:
Identification of commonalities to ensure the reusability and cross-domain applicability of the DECISIVE methodology and tools
Preliminary definition of candidate validation cases and scenarios
Definition of the test methodology to be applied during the validation phase (WP6)
Deliverables (brief description) and month of delivery
D1.1 | State-of-the-art |
D1.2 | Business requirements and needs | M6
D1.3 | Preliminary validation case and test methodology | M36
Work package number | 2 | Start date or starting event: | M1
Work package title | Modelling and design frameworks
Participant number | 3 | 4 | 6 | 10 | 15 | 16 | 17
Participant short name | CISC | NXP-A | PAJ | NSN | ATEGO | CEA | EADS
Person-months per participant | 24 | 20 | ? | ? | 40 | ? | ?
Participant number | 22 | 26 | 27 | 43 | ? | |
Participant short name | NXP-D | RTU | ALM | FPK | TEC | |
Person-months per participant | 24 | ? | ? | ? | ? | |
Objectives
Use cases reflecting the DECISIVE themes, provided by the industrial partners, will be used to determine the most important data to be recorded and visualized during system evolutions in support of an evolutionary design methodology. Detailed requirements for this new approach will come from WP1. One of the key innovative elements of WP2 is the extension of the current state-of-the-art modelling and design frameworks used by the partners for the selected use cases so that the information learned during system evolutions can be back-annotated (Tasks 2.1 and 2.4). The second key element of WP2 is the provision of model transformations that use information gained throughout system evolutions to improve the quality of decisions, increase reuse, reduce development time, increase the quality of the final implementation, improve the overall engineering process and support the creation of safety cases (Task 2.2). The third key topic is efficient safety analysis, where WP2 studies the interplay between models and domain experts so that tools (computers) and experts (humans) can cooperate in the most efficient, flexible and cooperative manner (Tasks 2.3 and 2.4).
WP2 thus covers the development and extension of various modelling languages and the associated tools, and includes the following:
Models for back-annotation (e.g. extension of SysML requirements diagrams, possibly extending the results of the ARTEMIS CHESS project)
Models for structured and modular system development, ranging from early stage languages for requirements elicitation to detailed design and implementation models (e.g. extensions to FBK’s languages for requirements and design analysis and validation, hierarchical extension of the CEA PsyC language for OASIS, extensions of SystemC for system descriptions towards VHDL/Verilog for physical implementation)
Models for safety analysis with focus on safety analysis in the early phases of development (e.g., extensions of the FSAP platform for safety analysis).
Techniques for model-driven design of user-interfaces, with a special focus on user interfaces for safety-critical systems
APIs and data interfaces for the provision of the monitoring and analysis results from WP3
System safety must be considered from the very start of the project. It is thus important that all the formalisms and models used in the project are able to support safety analysis in an efficient way. The problems and challenges in this area relate both to the modelling language (what can be expressed) and to the methods used for safety analysis (how can the information presented by the model be used in the most efficient way).
Description of work (possibly broken down into tasks) and role of partners
T2.1 Development of extensions to existing models and languages
Lead: xx
Contributors: CISC, ATEGO, CEA, TEC, NXP-D, FBK
CISC (12 PM): extension of the existing model representation within the System Architect Designer (SyAD®) tool to support the DECISIVE goal of capturing knowledge acquired in former steps of an evolutionary design process. The work will focus on the model-based system design approach and the underlying architecture, with links to variants of subsystems from former designs and to the actual requirements/use cases.
ATEGO (x PM): will work on:
- Integration of safety standards and the corresponding analyses in the infrastructure (focus on the ISO 26262 automotive standard)
- Integration of OASIS to guarantee time and space isolation in the early stages of model-based engineering development
- Provision of the results of the safety analyses and of OASIS at the modelling level
- Study of mechanisms to preserve safety properties under compositionality and composability
- Development of a prototype in Artisan Studio to support the DECISIVE results
CEA (x PM): will work on:
- Development of a hierarchical extension of the PsyC language (the OASIS programming language) in order to allow compositional design directly in PsyC and in connection with higher-level models
- Implementation of the mechanisms for model annotation and extraction of information to be back-annotated into higher-level models and provided to analysis and decision-aid tools
TEC (x PM): Implementation of the mechanisms for semantic model annotation and extraction of information to be back-annotated into higher-level models and provided to analysis and decision-aid tools
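To make the intended back-annotation data flow more concrete, the following minimal C++ sketch (purely illustrative; all names are hypothetical and not part of any partner's existing tooling) shows the kind of record that could carry information extracted from lower-level models or measurements back to higher-level models and decision-aid tools:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical back-annotation record: one observed or derived metric,
    // attached to a named element of a higher-level model.
    struct BackAnnotation {
        std::string model_element;   // e.g. qualified name of a SysML block or PsyC agent
        std::string metric;          // e.g. "wcet_us", "power_mW", "code_coverage"
        double      value;           // measured or estimated value
        std::string source;          // run / tool that produced the value, for traceability
    };

    // Hypothetical store grouping annotations per model element, so that
    // analysis and decision-aid tools can query them by element name.
    class AnnotationStore {
    public:
        void add(const BackAnnotation& a) { by_element_[a.model_element].push_back(a); }

        const std::vector<BackAnnotation>& forElement(const std::string& element) const {
            static const std::vector<BackAnnotation> empty;
            auto it = by_element_.find(element);
            return it == by_element_.end() ? empty : it->second;
        }

    private:
        std::map<std::string, std::vector<BackAnnotation>> by_element_;
    };

    int main() {
        AnnotationStore store;
        store.add({"System.Controller", "wcet_us", 412.0, "profiling_run_7"});
        store.add({"System.Controller", "power_mW", 35.2, "hw_monitor"});
        for (const auto& a : store.forElement("System.Controller"))
            std::cout << a.metric << " = " << a.value << " (" << a.source << ")\n";
        return 0;
    }

In practice, such records would be attached to model elements by the annotation and extraction mechanisms developed in this task.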
NXP-D (12 PM): will use results from earlier projects and product developments to introduce new behavioural and functional aspects in order to improve both the models and the new target architecture. Special focus will be on the modularity and scalability of these models to increase potential re-use and to allow faster adaptation to new architectures. In conjunction with task T2.4, flexibility shall be gained to combine building blocks from already existing systems with new system components.
- Assess existing concepts and their key strengths and weaknesses
- Identify what can be reused for a next-level modelling style and framework, and what extensions are required to gain the above-mentioned flexibility
FBK (x PM): Tool support for checking the consistency of the system properties specified at design time with those obtained by monitoring the actual system.
T2.2 Development of modelling techniques for early-stage safety analysis
Lead: FBK
Contributors: ATEGO, FBK
ATEGO (x PM): will work on:
- Implementation of the engineering techniques to automate safety analyses
- Contribution on methodological aspects
- Contribution on coherence between high-level and low-level specification with respect to safety properties
- Development of a prototype in Artisan Studio to support the DECISIVE results
FBK (x PM): Extension and integration of FSAP platform for model-based safety and dependability analysis.
T2.3 Techniques for specifying safety requirements
Lead: xx
Contributors: TEC, FBK
This task is based on the output of T2.2, and will provide:
- Templates and techniques for specifying safety requirements; here we will also build on and extend the work on requirements boilerplates from the ARTEMIS project CESAR
- Ontology-based support for analyzing the completeness, consistency and standard compliance of safety requirements
TEC (x PM): In this task, TEC will link variability management techniques to the safety-related requirements, addressing how variability can be identified and managed during early safety requirements specification. This will provide information for decision making at early stages.
FBK (x PM): Extension and integration of FBK’s tools and techniques for early validation of functional requirements analyzing their consistency, completeness and correctness.
T2.4 Model-based development of user interfaces
Lead: CISC
Contributors: CISC, NXP-D
CISC (12 PM): extension of the existing user interface of the System Architect Designer (SyAD®) tool to broaden the automatic generation of test benches for black-box and white-box testing based on SystemC (TLM), with a focus on DECISIVE applications. In particular, this will include a "Use Case" editor, support for design variants/configurations, and support for analogue modelling with SystemC-AMS.
NXP-D (12 PM): will investigate the possibilities of extending the abstraction levels to allow modelling and simulation of building blocks of different complexity. To better represent the performance parameters of new models, the simulation should be extendable with real-time emulation input. The hierarchy of models to be used will span from re-used functional blocks, for which complete characterization results will be available, up to high-level models described at a purely algorithmic level (e.g. SystemC, FPGA-based real-time simulation), totally independent of the technology used later. Furthermore, the heterogeneity of the architecture, consisting of digital, analog, sensory and other elements, must be fully reflected by the tools and modelling framework used.
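As a purely illustrative sketch of such a multi-level model hierarchy (hypothetical C++ interface, not NXP-D's actual framework), a building block could expose one functional interface behind which an algorithmic model and a characterized, back-annotated model are interchangeable:

    #include <iostream>
    #include <memory>

    // Common functional interface of a building block, independent of abstraction level.
    class FilterBlock {
    public:
        virtual ~FilterBlock() = default;
        virtual double process(double sample) = 0;  // functional behaviour
        virtual double latency_us() const = 0;      // timing view used by the system model
    };

    // High-level, purely algorithmic model (technology independent).
    class AlgorithmicFilter : public FilterBlock {
    public:
        double process(double sample) override { return 0.5 * sample; }  // idealized behaviour
        double latency_us() const override { return 0.0; }               // timing not yet known
    };

    // Re-used block carrying back-annotated characterization results from an earlier design.
    class CharacterizedFilter : public FilterBlock {
    public:
        double process(double sample) override { return 0.5 * sample; }
        double latency_us() const override { return 3.7; }  // measured on the previous product
    };

    int main() {
        std::unique_ptr<FilterBlock> block = std::make_unique<CharacterizedFilter>();
        std::cout << "y=" << block->process(1.0)
                  << " latency=" << block->latency_us() << " us\n";
        return 0;
    }

The same system model can thus be simulated with either variant, depending on which characterization data are available for a given block.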
Deliverables (brief description) and month of delivery
D2.1 | t.b.d | M9 | Lead: xx
D2.2 | Prototype | M21 | Lead: xx
D2.3 | DECISIVE Modelling and Design Framework applied to DECISIVE use cases (Report): final realization of the Modelling and Design Frameworks used by industrial partners, applied to the proposed validators. | M33 | Lead: xx
Work package number | 3 | Start date or starting event: | M1
Work package title | Monitoring and analysis infrastructure
Participant number | 1 | 2 | 4 | 12 | 16 | 21 | 22
Participant short name | PHILIPS | AVL | NXP-A | UEF | CEA | FHG | NXP-D
Person-months per participant | 72 | 8 | 28 | 46 | 16 | 36 | 32
Participant number | 28 | 29 | | | | |
Participant short name | OCE | TUE | | | | |
Person-months per participant | 6 | 42 | | | | |
Objectives
WP3 plays an important role in the overall DECISIVE workflow formed by the technical work packages.
Within this work package, the required evaluation, simulation, mining and profiling techniques will be developed to generate and analyse data for the characterization of software/hardware components with respect to their runtime behaviour, memory footprint, power consumption, etc. These data will be used both at design time and at runtime of a software component. To enable reuse of this information during the evolution of components, it will be back-annotated to the high-level model description (cf. WP2).
In that sense, WP3 covers the development of techniques that can be distinguished by the time at which they are applied:
Design Time Methodologies:
At design time, the developer of a software component typically uses simulation and profiling techniques to debug, validate and optimise a given component. Here it will be necessary to implement a framework for the model-based analysis and design of distributed systems. This framework can also incorporate middleware architectures with respect to their real-time behaviour, which will be the high-level simulation approach within the project. To back the model-based analysis techniques with exact results, a low-level profiling methodology needs to be implemented. This methodology must be able to profile a complete system platform constructed of software and hardware components; the hardware components typically represent the execution platform of a software component. The methodology needs to be completely independent of the underlying architecture of the execution platform in order to support a wide variety of systems, e.g. simple single-processor embedded systems as well as network-on-chip architectures.
Runtime Methodologies:
To gather timing information about the runtime behaviour of a software component, monitoring infrastructures need to be developed, ideally integrated into the kernel of the underlying operating system/runtime system. This information can be used, for example, to optimise real-time scheduling mechanisms. To support software monitoring mechanisms integrated into the operating/runtime system, hardware monitors are needed that allow non-intrusive gathering of the profiling data. The monitoring infrastructures need to be as generic as possible in order to guarantee a unified approach across the different applications and measurements. In particular, we want to achieve better usage of performance counters during execution. Simulation and process mining techniques are used to provide detailed information about the dynamic behaviour of the platform and its performance. Several different aspects of performance metrics will be important: both those traditionally focused on (e.g. speed and memory consumption) and additional metrics such as power consumption, which have recently seen increasing interest in response to energy and climate challenges as well as in the field of mobile devices. Hence, it is interesting to look into the use of performance counters and other HW-assisted monitors to estimate energy consumption.
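As one concrete, Linux-specific illustration of using hardware performance counters, the following sketch counts retired instructions around a code region via the perf_event_open system call; the choice of counter, and any mapping from counter values to energy estimates, are assumptions made for illustration rather than project results:

    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Minimal wrapper around the perf_event_open syscall (Linux only, no libc wrapper exists).
    static int perf_event_open(perf_event_attr* attr, pid_t pid, int cpu,
                               int group_fd, unsigned long flags) {
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
    }

    int main() {
        perf_event_attr attr;
        std::memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;  // could also be cycles, cache misses, ...
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        int fd = perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
        if (fd < 0) { std::perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile double x = 0;                     // workload under observation
        for (int i = 0; i < 1000000; ++i) x += i * 0.5;

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        uint64_t count = 0;
        read(fd, &count, sizeof(count));
        std::printf("instructions retired: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }

The same mechanism can be extended to multiple grouped counters; relating such counts to energy consumption would require a platform-specific power model, which is part of the work described here.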
Analysis Methodologies:
Design-time and runtime methodologies generate massive amounts of data, which need to be managed and processed so that the designer can use the information quickly and efficiently. For that purpose, WP3 will not only focus on the generation of analysis information, but also on post-processing and database concepts for this special purpose. Note that we will analyze embedded systems deployed in the field. For the analysis of the enormous amount of data we will use and develop process mining techniques. These techniques automatically construct models that can be used to understand the dynamic behavior and to suggest improvements. Note that an increasing number of embedded systems are connected to the internet; see, for example, the medical equipment of Philips Healthcare and the lithography systems of ASML, which collect detailed event logs. These logs can be used for remote diagnostics based on process mining. For example, it can be predicted whether a machine will fail, why it fails, and how it can be repaired.
Description of work (possibly broken down into tasks) and role of partners
T3.1 Runtime and simulation monitoring
Lead: Philips
Contributors: CEA, Philips, TUE
CEA: will develop and implement the monitoring support for its safety-oriented real-time operating platform OASIS. This platform comprises the following elements:
A programming language, PsyC – an extension of C – that allows applications to be designed with an explicit parallel architecture, structured as a collection of agents communicating through dataflow and message passing. Furthermore, all elements of an OASIS application – agent behaviour and communication – have explicit temporal properties.
Several implementations of the OASIS kernel responsible for managing the temporal behavior of the application and the communications between the agents.
Kernel implementations exist for various environments: POSIX execution and simulation on Linux platforms, and native execution on several bare-metal architectures (IA32, ARM7, ARM9 and several others). Typical OASIS applications target embedded execution and, therefore, information such as execution time, power consumption, frequency of inter-core migrations or code coverage is very important for system sizing and optimization. We expect the DECISIVE results to allow such information to be collected through application monitoring without considerable impact on system performance. System support for such monitoring will provide means (e.g. an API, but possibly something more complete enabling periodic – off-line? – information collection) to integrate the collected information into the models used at design time for analysis. This latter functionality provides a link with WP2.
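As a rough indication of what such system support could look like, the following C++ sketch outlines a hypothetical per-agent monitoring interface; all names, structures and values are invented for illustration only and are not part of the existing OASIS API:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical per-agent counters an OASIS-like kernel could expose
    // (illustrative only; not the actual OASIS interface).
    struct AgentStats {
        uint64_t activations;      // temporal-window activations
        uint64_t exec_time_us;     // accumulated execution time
        uint32_t core_migrations;  // observed inter-core migrations
        uint32_t deadline_misses;  // temporal-property violations
    };

    // Stub standing in for kernel support: fill the statistics of one agent
    // without perturbing the application's timing.
    int monitor_read_agent(unsigned agent_id, AgentStats* out) {
        if (out == nullptr) return -1;
        *out = {1000 + agent_id, 123456, 3, 0};   // placeholder values
        return 0;
    }

    // Stub standing in for a periodic (possibly off-line) export of all counters
    // in a format that design-time analysis tools can import (link to WP2).
    int monitor_export_all(const char* path) {
        return std::printf("would export counters to %s\n", path) > 0 ? 0 : -1;
    }

    int main() {
        AgentStats s{};
        if (monitor_read_agent(0, &s) == 0)
            std::printf("agent 0: %llu us, %u migrations\n",
                        (unsigned long long)s.exec_time_us, s.core_migrations);
        monitor_export_all("/tmp/oasis_monitor.csv");
        return 0;
    }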
Philips: Philips Healthcare iXR will contribute via the design of a diagnostic infrastructure. This infrastructure consists of monitoring software for both the X-ray acquisition hardware and the PC-based infrastructure that controls it. The infrastructure collects this information, provides mechanisms to draw conclusions, and makes it available remotely.
TUE: TUE will focus on the analysis of deployed systems in the field using process mining techniques. A multitude of events are generated and/or recorded by today's embedded systems. An example is the "CUSTOMerCARE Remote Services Network" of Philips Healthcare (PH), a worldwide internet-based private network that links PH equipment to remote service centers. Any event that occurs within an X-ray machine (e.g., moving the table, setting the deflector, etc.) is recorded and can be analyzed. Process mining techniques attempt to extract non-trivial and useful information from the event logs generated by such machines. One aspect of process mining is control-flow discovery, i.e., automatically constructing a process model (e.g., a Petri net or BPMN model) describing the causal dependencies between activities. Process mining is not limited to control-flow discovery; in fact, in this project we will work on the further development of three types of process mining: (a) discovery, (b) conformance, and (c) extension. We will use the ProM platform for experimentation and case studies.
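To indicate what control-flow discovery operates on, the following small C++ sketch (illustrative only; the project will use the ProM platform rather than this code) derives a directly-follows relation from a toy event log, which is the typical starting point for discovery algorithms:

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        // Toy event log: one trace of activities per case (e.g. per X-ray examination).
        std::vector<std::vector<std::string>> log = {
            {"move table", "set deflector", "acquire image", "store image"},
            {"move table", "acquire image", "store image"},
            {"move table", "set deflector", "acquire image", "acquire image", "store image"},
        };

        // Count how often activity a is directly followed by activity b.
        std::map<std::pair<std::string, std::string>, int> directly_follows;
        for (const auto& trace : log)
            for (size_t i = 0; i + 1 < trace.size(); ++i)
                ++directly_follows[{trace[i], trace[i + 1]}];

        // The resulting relation is the raw material for control-flow discovery,
        // e.g. constructing a Petri net or BPMN model of the observed behaviour.
        for (const auto& [edge, count] : directly_follows)
            std::cout << edge.first << " -> " << edge.second << " : " << count << "\n";
        return 0;
    }

Conformance checking and model extension then compare or enrich a given model using the same kind of log data.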
T3.2 Model based analysis and techniques
Lead: Philips
Contributors: Philips, NXP-D, OCE, TUE, UEF
Philips: Philips Healthcare iXR will define and implement models to analyze the data acquired by the diagnostics infrastructure to provide just-in-time support to products in the field and speed-up the diagnostics process.
NXP-D: NXP-D will investigate novel work flows to advance automation in the validation of new developments in analog mixed-signal design. Different abstraction levels of digital and analog blocks, from functional to detailed back-annotated behavioural descriptions, shall be supported by a joint concept. Therefore, NXP-D will define and implement regression test suites including automated pass/fail recognition, based on which novel work flows to advance automation will be investigated. NXP-D will standardize the extensions to be implemented in the models to support monitoring and automated pass/fail recognition. Furthermore, simulation and model-based analysis for evolving systems will be carried out, based on different hierarchy levels. This will require, e.g., the proper interfacing of models from different domains (analog, digital, etc.) as well as of models with different parameter sets and depths. All models used will have to be compatible with the corresponding test suite.
Deliverable: Definition of regression test suite / simulation & analysis for evolving systems (from Tasks 3.2 and 3.3)
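A minimal sketch of automated pass/fail recognition (illustrative only; the actual signal formats and tolerances are to be defined in this task) could compare a simulated trace against a golden reference within a per-sample tolerance:

    #include <cmath>
    #include <iostream>
    #include <vector>

    struct Sample { double time; double value; };

    // Pass/fail check: every reference sample must be matched by the simulated
    // trace within the given absolute tolerance (identical sampling assumed).
    bool passes(const std::vector<Sample>& reference,
                const std::vector<Sample>& simulated,
                double tolerance) {
        if (reference.size() != simulated.size()) return false;
        for (size_t i = 0; i < reference.size(); ++i)
            if (std::fabs(reference[i].value - simulated[i].value) > tolerance)
                return false;
        return true;
    }

    int main() {
        std::vector<Sample> golden    = {{0.0, 1.00}, {1.0, 0.50}, {2.0, 0.25}};
        std::vector<Sample> simulated = {{0.0, 1.01}, {1.0, 0.49}, {2.0, 0.26}};
        std::cout << (passes(golden, simulated, 0.05) ? "PASS" : "FAIL") << "\n";
        return 0;
    }

In the envisaged regression suite, such checks would be applied automatically to every abstraction level of a block after each simulation run.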
OCE: will adapt the model-based development and simulation environment for print systems to handle product modularity, which means that multiple models can be combined.
Furthermore, OCE will relate productivity modelling and runtime monitoring information. Suitable information written in log files will be used to check the realized productivity (such as the time it takes to print a number of pages for specific jobs) against the models that were used to design the system. In addition, runtime trace information will be used to relate the execution flow to the specification models, for example for diagnosis purposes.
UEF: UEF will focus on the application of intelligent and learning methods, mainly segmentation and classification algorithms. The research will focus on analyzing run-time behavior using automatic classification methods that are able to extract the relevant information from the software processes using as little a priori information as possible. UEF will also utilize pattern recognition and neural network techniques in the analysis of different models of software behavior.
TUE: TUE will combine simulation and process mining techniques to seamlessly combine model-based and data-based analysis. Process mining techniques are used to analyze both simulated and real data. Moreover, TUE will develop techniques such that the real system can interact easily with a simulated system. The CPN Tools and ProM platforms are used to create a testbed for model-based analysis and simulation. This testbed will be used to investigate predictive process mining techniques.
Deliverables:
D3.2.1 Use case analysis …
D3.2.2 Tool chain …
T3.3 Profiling based analysis and simulation techniques
Focusing on the run-time characterization of system platforms with respect to power consumption, processing performance and real-time behavior, it is expected that conventional or state-of-the-art profiling and simulation techniques cannot be used for the task of analyzing and profiling. Especially profiling memory accesses on a cycle-accurate basis is not, or not sufficiently, supported by available profiling tools and methodologies, because in most cases only shared-memory architectures have been modeled so far. With the emerging trend of building embedded systems upon multi-core and many-core architectures with completely different interconnection topologies, it is foreseen that new profiling techniques will be required that also take the actual interconnection topology into account. Based on the profiling methodologies it will be possible to get an in-depth view of how programming models and runtime systems behave on enhanced embedded processor platforms, which can consist of more than one processor core. The results can be used to co-optimize the runtime system, the programming model and the architecture of the computing platform.
FHG-HHI: HHI will work on system-level profiling methodologies using SystemC platform descriptions, which can be used for the run-time characterization of system platforms with respect to power consumption, processing performance and real-time behavior.
To achieve precise results with respect to the real-time behavior of a given system, FHG-HHI will develop a non-intrusive profiling and exploration methodology suited for platform models implemented in the modeling language SystemC. Instead of manually instrumenting the SystemC code of a multi-core platform or a similar architecture, the methodology relies on the architecture elaboration phase of a SystemC run. After this elaboration phase, the actual architecture under investigation will be visualized and can be prepared for a simulation run by adding probes for data types, busses, program counters, etc. that should be observed by the profiler. The methodology will support simple SystemC constructs as well as complex TLM-2.0-based architectures. The data collected by the so-called back-end tool will be visualized by a corresponding front end, which will be integrated into an existing state-of-the-art design flow. The same front end could also collect and visualize data gathered by the hardware monitoring service described in task T3.4.
According to the given terminology, the tool will be used during the design time of an embedded system for simulation, profiling and HW/SW co-exploration.
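The following minimal SystemC sketch illustrates the kind of hierarchy inspection that becomes possible once elaboration has built the design hierarchy; it merely lists the objects of a toy design using standard SystemC calls, whereas the actual probe insertion and data collection mechanisms are what will be developed in this task:

    #include <systemc>
    #include <iostream>
    #include <string>
    #include <vector>

    // Toy design: one module with a child signal, standing in for a platform model.
    SC_MODULE(Core) {
        sc_core::sc_signal<int> bus_addr{"bus_addr"};
        SC_CTOR(Core) {}
    };

    // Walk the object hierarchy that SystemC builds during elaboration and list
    // every object; a profiler could decide here which objects to attach probes to.
    static void walk(const std::vector<sc_core::sc_object*>& objs, int depth = 0) {
        for (sc_core::sc_object* obj : objs) {
            std::cout << std::string(depth * 2, ' ')
                      << obj->name() << " (" << obj->kind() << ")\n";
            walk(obj->get_child_objects(), depth + 1);
        }
    }

    int sc_main(int, char**) {
        Core core("core");                          // elaboration: hierarchy is built
        walk(sc_core::sc_get_top_level_objects());  // inspect it before simulation
        sc_core::sc_start(sc_core::SC_ZERO_TIME);   // no behaviour is simulated here
        return 0;
    }

Attaching observers to the discovered signals and modules, rather than editing their source code, is what makes the approach non-intrusive.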
NXP-A: NXP-A will define a laboratory test concept and equipment for automated verification supporting the integral concept, and will work on an automated data post-processing engine and report generators.
NXP-D: NXP-D will work on a database approach that allows regression test results to be stored at different abstraction levels. Data formats from different sources have to be adapted for automatic evaluation, and potential synergies between those formats have to be identified. Furthermore, a method has to be developed to allow probing of data during simulations as well as pre-processing and compression of such data.
The design blocks used will represent data sources that deliver data asynchronously to each other, so effective sampling of the data will require smarter methods. This method should preferably allow the combination of analog (real-valued) and digital data in the same step.
Deliverables:
D3.3.1 Use case analysis and specification of analysis and profiling methodologies
The document will review state-of-the-art profiling methodologies and will highlight missing features with respect to specific use cases. Based on the results of the use case analysis, a detailed specification will be defined, which will be the basis for the implementation of the tools delivered in D3.3.2.
D3.3.2 Tool chain consisting of profiling and post-processing tools (Type: Prototype)
The deliverable will consist of running prototypes of the tools as defined in D3.3.1, including appropriate written documentation.
T3.4 HW monitoring techniques
Lead: FHG-HHI
Contributors: FHG-HHI, NXP-A, NXP-D
For system-wide tuning and observation of runtime metrics (e.g. core clock frequency, memory allocation, heap size, real-time behavior and real-time violations) it is essential to collect the corresponding information from the system at runtime. Because monitoring, for example, memory access transactions can result in very high computational demands, specialized hardware monitoring features are needed in the system architecture, accessible both by external debugging tools and by the system itself for automatic runtime optimization.
FHG-HHI: HHI will work on efficient concepts of hardware support for performance monitoring. Especially for state-of-the-art processor interconnection concepts such as network-on-chip architectures, it is necessary to observe memory transactions and their parameters, e.g. access time, latency, etc. To collect information regarding the behavior of these advanced hardware architectures, FHG-HHI will develop a generic hardware concept for monitoring the performance of network-oriented interconnection architectures. The system will consist of hardware components to gather (sniffers), transport (monitoring network) and process (monitoring service) monitoring information.
In contrast to the methodology described in task T3.3, the monitoring system will be used at runtime, but it will be compatible with the data visualization front end developed in task T3.3.
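As a software-level illustration of the sniffer/monitoring-service split (hypothetical names; the actual result of this task will be a hardware concept), the following C++ sketch aggregates transaction events so that only compact statistics need to leave the monitored system:

    #include <array>
    #include <cstdint>
    #include <iostream>

    // One observed NoC transaction as a sniffer could emit it onto the monitoring network.
    struct TransactionEvent {
        uint16_t src_node;
        uint16_t dst_node;
        uint32_t latency_cycles;
    };

    // Monitoring service: aggregates events so only compact statistics are exported.
    class MonitoringService {
    public:
        void record(const TransactionEvent& e) {
            ++count_;
            total_latency_ += e.latency_cycles;
            if (e.latency_cycles > max_latency_) max_latency_ = e.latency_cycles;
        }
        void report() const {
            std::cout << count_ << " transactions, avg latency "
                      << (count_ ? double(total_latency_) / count_ : 0.0)
                      << " cycles, max " << max_latency_ << "\n";
        }
    private:
        uint64_t count_ = 0, total_latency_ = 0;
        uint32_t max_latency_ = 0;
    };

    int main() {
        MonitoringService service;
        // Stand-in for events delivered by sniffers over the monitoring network.
        std::array<TransactionEvent, 3> events{{{0, 3, 12}, {1, 3, 45}, {0, 2, 9}}};
        for (const auto& e : events) service.record(e);
        service.report();
        return 0;
    }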
NXP-D: will develop a concept for the automated extraction of pass/fail information. Special focus will have to be placed on the handling of large amounts of test data (terabytes). The new concept will also be implemented in connection with a corresponding test case.