

Bill Majurski edited this page Jun 21, 2016 · 1 revision

Test orchestration

Currently the toolkit has the ability to stage test data in an actor as a prerequisite for a test. This is done by having an initial test/section/step establish the test data in an actor implementation under test or in the Public Registry server. A later section/step can then depend on that initial step. The dependency takes the form of the initial step publishing a detail to its log file; the dependent step then reads that log file and uses the published information. This feature is used in many tests. A good example is the pair of tests 12346 and 11897: 12346 executes a collection of Register transactions to load a Document Registry with objects (the test data), and test 11897, which is focused on the FindDocuments query, executes various combinations of FindDocuments queries against that test data set.
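The publish-then-read dependency between steps can be sketched roughly as follows. This is a minimal illustration only; the function names, the JSON log format, and the `documentUniqueId` detail are all assumptions for the sketch, not the toolkit's actual log format or API.

```python
import json
import tempfile
from pathlib import Path


def run_initial_step(log_dir: Path) -> None:
    """Initial step: stage test data, then publish a detail to its log file."""
    log_dir.mkdir(parents=True, exist_ok=True)
    # Hypothetical detail recorded after a Register transaction succeeds.
    detail = {"documentUniqueId": "urn:uuid:example-1234"}
    (log_dir / "initial_step.json").write_text(json.dumps(detail))


def run_dependent_step(log_dir: Path) -> dict:
    """Dependent step: read the earlier step's log and use the published value."""
    detail = json.loads((log_dir / "initial_step.json").read_text())
    # ...here a real test would issue, e.g., a FindDocuments query
    # parameterized by detail["documentUniqueId"]...
    return detail


with tempfile.TemporaryDirectory() as tmp:
    log_dir = Path(tmp) / "test12346"
    run_initial_step(log_dir)
    detail = run_dependent_step(log_dir)

assert detail["documentUniqueId"] == "urn:uuid:example-1234"
```

The key point is that the coupling between steps is indirect: the only contract is the published log entry, so the dependent test can run in a later session as long as the log (and the staged data it describes) is still present.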

A second example is the collection of Repository tests. To test a Repository, we instruct the user to configure their Repository, which is the system under test, to forward its Register transactions to the Registry actor in the Public Registry implementation. Given this arrangement, we can probe the Public Registry to verify that the Repository performed its duties correctly.

All of this is done in the context of conformance testing.

There is now a need to extend this practice to the use of simulators in the toolkit. Going back to the Repository test, I would like to be able to support the following scenario. The Repository actor implementation is the system under test. To test the Provide and Register transaction and the subsequent Register transaction, the toolkit needs to provide the Document Source and Document Registry actor implementations in the form of simulators. (Actually, the Document Source comes out of the test client section of the toolkit and is not a formal simulator.)

One way to look at this is as a series of nested contexts. The top context is the test session selection, which selects the storage area for the test results. Inside that would be the simulator context: set up before, tear down after. Inside that would be the test execution that exists now. The test execution context would verify that a suitable simulator context is in place. For some tests, like the existing ones, the simulator context would be empty.

There are exceptions to this pattern. A test run with a simulator context may fail, and one step in debugging the system under test is to examine the state of one of the simulators, which means the simulator context cannot always be torn down immediately.
