The CPS Demonstrator is specified to cover a typical workflow in Model-driven Engineering: a system is (1) first described in a source domain model, then (2) automated model-to-model transformations derive a target domain model, and finally (3) a model-to-text transformation generates code from the target domain model. In addition, a model generator that automatically creates source domain models supports the correctness testing and performance evaluation of the components.
In this example (illustrated in Figure 1), the source domain represents a Cyber-Physical System, where applications with dynamic behaviour are allocated to connected hosts. The target domain represents the system deployment configuration, with stateful applications contained by hosts. Instance models of the two domains are connected by a traceability model that maintains the correspondence between related elements. These domains are modelled in Ecore, and instance models are handled using code generated by EMF.
The CPS domain specifies application and host types and their instances, requests and requirements on applications, and resource requirements of applications towards hosts. Application types have a state machine that describes their behaviour through states and transitions. Finally, application instances can be allocated to host instances, which can communicate with each other.
In the deployment model, hosts contain the applications that are running on them, while each application has a behaviour with states and transitions. The behaviour has a current state and transitions may trigger other transitions. This triggering represents the message passing between behaviours of different applications.
The traceability model describes the correspondence between a CPS and a deployment model. The traceability information is stored in a set of traces that refer to zero, one or multiple CPS and deployment elements. Such a trace represents that the deployment elements were created by the model-to-model transformation based on the given CPS elements.
Since manually creating large instance models requires significant effort, we developed a CPS Model Generator that executes a number of generation phases defined in a plan and, based on a simple configuration, can output arbitrarily large CPS models. The model generator is built in Xtend and uses VIATRA Query patterns to gather elements for complex operations.
The model generator aims to output models that are similar in fine structure but differ in the number of elements (to generate scaled-up models), while allowing some randomization (e.g. to create state machines with a different number of states for each application type).
Randomization is controlled by min-max parameters, percentage parameters and ratio maps:

- A min-max parameter specifies a range with a minimum and a maximum value; operations depending on the parameter obtain a random number within the range (e.g. create a state machine with 5 to 10 states).
- A percentage parameter specifies a fraction of a total; operations may use it to decide how to distribute choices among the possible elements (e.g. 35% of the transitions in a state machine should have actions).
- A ratio map parameter assigns integer weights to classes; operations may use it to distribute choices (e.g. how application instances are allocated among instances of different host classes).
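A minimal sketch of how these three parameter kinds might be evaluated — the class and method names below are assumptions for illustration, not the generator's actual API:

```java
import java.util.Map;
import java.util.Random;

// Hypothetical sketch of the three randomization parameter kinds;
// names are illustrative, not the CPS Model Generator's real classes.
class GeneratorParams {
    private final Random random;

    GeneratorParams(long seed) { this.random = new Random(seed); }

    // Min-max parameter: draw a value from [min, max], inclusive.
    int fromRange(int min, int max) {
        return min + random.nextInt(max - min + 1);
    }

    // Percentage parameter: decide for one element whether it is selected,
    // so that about the given fraction of all elements ends up selected.
    boolean withProbability(double percentage) {
        return random.nextDouble() < percentage / 100.0;
    }

    // Ratio map parameter: pick a key with probability proportional to its weight.
    <T> T weightedChoice(Map<T, Integer> ratios) {
        int total = ratios.values().stream().mapToInt(Integer::intValue).sum();
        int pick = random.nextInt(total);
        for (Map.Entry<T, Integer> e : ratios.entrySet()) {
            pick -= e.getValue();
            if (pick < 0) return e.getKey();
        }
        throw new IllegalStateException("empty ratio map");
    }
}
```

For example, `fromRange(5, 10)` would realize "create a state machine with 5 to 10 states", and a ratio map `{server=3, desktop=1}` would send roughly three quarters of the choices to the `server` class.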
The fine structure is specified with host and application classes:

- A host class determines how many host types are created (min-max), how many instances are created for each type in the class (min-max), how many other host instances each host instance communicates with (min-max) and how the communications are distributed among instances of different host classes (ratio map).
- An application class determines how many application types are created (min-max), how many instances are created for each type in the class (min-max), how many states and transitions the state machine of each type should contain (both min-max), how many of the instances are allocated (percentage), how the allocation is distributed among instances of different host classes (ratio map), how many of the transitions should define actions (percentage) and, of those, what fraction are message sends (percentage).
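The two class descriptions above could be captured as simple value objects — a sketch under assumed names only, since the actual generator configuration is defined in Xtend:

```java
import java.util.Map;

// Illustrative sketch of the generator input; record and field names are
// assumptions, not the generator's real configuration classes.
record Range(int min, int max) {}

// A host class: how many types, how many instances per type, how many
// communications per instance, and how communications are distributed
// among instances of different host classes.
record HostClass(String name, Range types, Range instancesPerType,
                 Range communications, Map<String, Integer> communicationRatios) {}

// An application class: type and instance counts, state machine size,
// allocation percentage and distribution, and action/send percentages.
record AppClass(String name, Range types, Range instancesPerType,
                Range states, Range transitions,
                int allocationPercent, Map<String, Integer> allocationRatios,
                int actionPercent, int sendPercent) {}
```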
Based on a list of host and application classes as input, the CPS model generator outputs an instance model that satisfies the constraints of the classes. While min-max parameters are always satisfied, percentage and ratio map parameters may not be followed precisely (e.g. allocating 35% of 10 applications may result in 3 or 4 allocations). In general, however, for larger sizes the generated model will have the structure specified in the classes.
The CPS-to-Deployment model-to-model transformation derives a deployment model from a CPS model and records the correspondence in a traceability model. In addition, model-to-model transformations should be able to synchronize changes in the CPS model to the deployment and traceability models. Alternative transformation methods can be implemented: some variants offer only batch execution (recreating the deployment and traceability models every time), while others are capable of incremental execution, where only changes are propagated.
The transformation creates a deployment host for each CPS host instance, then creates deployment applications in these hosts for all application instances allocated to the corresponding CPS host. Next, the state machine of the application type of each mapped application instance is mapped to a deployment behaviour of the deployment application. This includes creating states and transitions as well, although the two metamodels represent state machines and behaviours in slightly different ways. Finally, transition actions are processed, and trigger references are created between behaviour transitions if the model structure and the actions are set up in a given way (step 6 in the specification).
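The first two mapping steps can be sketched with plain in-memory objects; the real transformation operates on EMF models, so all names below are illustrative:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory sketch of the batch mapping; the real transformation
// works on EMF models, and these classes and names are illustrative only.
class BatchM2MSketch {
    record CpsHost(String id, List<String> allocatedApps) {}
    record DeploymentApp(String id) {}
    record DeploymentHost(String id, List<DeploymentApp> apps) {}

    // Creates a deployment host per CPS host instance and a deployment
    // application per allocated application instance; the returned map plays
    // the role of the traceability model (CPS element id -> deployment element).
    static Map<String, DeploymentHost> transform(List<CpsHost> cpsHosts) {
        Map<String, DeploymentHost> trace = new LinkedHashMap<>();
        for (CpsHost host : cpsHosts) {
            List<DeploymentApp> apps = new ArrayList<>();
            for (String appId : host.allocatedApps()) {
                apps.add(new DeploymentApp(appId));
            }
            trace.put(host.id(), new DeploymentHost(host.id(), apps));
        }
        return trace;
    }
}
```

A batch variant would simply rerun `transform` on the whole model; an incremental variant would instead update only the entries of the trace map affected by a change.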
Implemented batch alternatives:
Implemented incremental alternatives:
The Deployment Code Generation model-to-text transformation takes a deployment model and outputs Java code that can simulate the dynamic behaviour of the system. Each host is executed on a separate thread by a host runner, while inter-host triggers are implemented by message passing through a simple communication network object. Once again, based on a single specification, multiple model-to-text alternatives can be provided, with incremental approaches re-generating only the parts of the source code that are affected by deployment model changes. These changes are collected by a deployment change monitor, which uses VIATRA Query to aggregate low-level deployment model modifications and provide deltas that identify the changed elements (hosts, applications, behaviours).
The change monitor aggregates modifications into a delta between checkpoints. Creating a new checkpoint through the API returns the delta between the previous and the new checkpoint, and recording continues from the new checkpoint. The delta contains a boolean flag signalling that the top-level configuration has to be re-generated, and three sets of elements:

- Appeared since the last checkpoint: source code related to these elements has to be generated; no clean-up is required.
- Disappeared since the last checkpoint: source code related to these elements should be cleaned up.
- Updated since the last checkpoint: source code related to these elements has to be re-generated, and clean-up may be needed (e.g. when the file name changes).
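The checkpoint/delta bookkeeping described above might look like the following sketch; the names are assumptions, and the real monitor aggregates changes with VIATRA Query rather than explicit notification calls:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of the checkpoint/delta bookkeeping; class and method
// names are illustrative, not the deployment change monitor's actual API.
class DeploymentChangeMonitorSketch {
    record Delta(boolean configurationChanged,
                 Set<String> appeared, Set<String> disappeared, Set<String> updated) {}

    private final Set<String> appeared = new LinkedHashSet<>();
    private final Set<String> disappeared = new LinkedHashSet<>();
    private final Set<String> updated = new LinkedHashSet<>();
    private boolean configurationChanged = false;

    void elementAppeared(String id) { appeared.add(id); }

    void elementDisappeared(String id) {
        updated.remove(id);
        // an element created and deleted within the same delta cancels out
        if (!appeared.remove(id)) {
            disappeared.add(id);
        }
    }

    void elementUpdated(String id) {
        // updates to a freshly appeared element are already covered by "appeared"
        if (!appeared.contains(id)) {
            updated.add(id);
        }
    }

    void markConfigurationChanged() { configurationChanged = true; }

    // Creates a new checkpoint: returns the delta recorded since the previous
    // checkpoint and starts recording from the new one.
    Delta createCheckpoint() {
        Delta delta = new Delta(configurationChanged,
                Set.copyOf(appeared), Set.copyOf(disappeared), Set.copyOf(updated));
        appeared.clear(); disappeared.clear(); updated.clear();
        configurationChanged = false;
        return delta;
    }
}
```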
The code generator creates source code fragments from elements of the deployment model; these include:

- Deployment, for creating the top-level configuration that sets up the host objects
- Host, for creating the code that sets up the applications of a host
- Application, for creating the code that sets up an application with its current state
- Behavior, for creating the code of a deployment behaviour, including states and transitions (with triggers).
The generated code uses base classes that contain model-independent code and some common classes that are used for the execution (e.g. communication network).
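As a rough illustration, a per-application fragment could be assembled like this; the real generator is written with Xtend templates, and `BaseApplication` and `setCurrentState` are assumed names, not the demonstrator's actual base-class API:

```java
// Illustrative sketch of assembling one per-application source fragment;
// BaseApplication and setCurrentState are assumed names.
class AppFragmentSketch {
    static String generateApplication(String className, String currentState) {
        return String.join("\n",
            "public class " + className + " extends BaseApplication {",
            "    public " + className + "() {",
            "        // set the initial current state taken from the deployment model",
            "        setCurrentState(\"" + currentState + "\");",
            "    }",
            "}");
    }
}
```

An incremental generator would call such a fragment builder only for the elements listed in the delta's appeared and updated sets.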
Each component is tested independently with its own set of unit tests, while the complete workflow is covered by integration tests, both implemented in JUnit. For components that are planned to have multiple implementations, the unit tests are developed from the specification of the component, and each implementation is tested with the same set of tests.
For details on benchmarking with the demonstrator, see the VIATRA CPS Benchmark wiki.