Enhancement of the Testing Guidelines #158
Comments
@jlurien Below are a few thoughts from our side on: Request setup, Open points, Scope of the Test plan.
Hello. First, thanks @jlurien for the proposal. Comments below on: Configuration variables, Request set-up, Request sending, Response validation, Open points, Scope/reach of the testing plan.
Thanks for the feedback @mdomale. Some comments/questions below:
Do you mean using ? The advantage of this is that Gherkin formatters work better with it, but, as it is reserved for Scenario Outlines, some tools may expect an Examples section with values to be substituted, which will not be available while defining the test plan.
In general, values for testing variables will be provided by the implementation to the tester for a certain environment. We may consider defining a template with the set of variables that have to be provided.
Moving the request bodies to separate files is a possibility. The advantage is that they can be used as values for Scenario Outlines. But it will require maintaining many independent files for a test plan and defining clear file naming rules.
Agree. We can only set up a generic request body in Background, which may work for some generic scenarios testing generic errors, e.g. 401 without Authorization header, etc. We may define a default request body and then allow scenarios to overwrite the request body set in the Background.
Testing of the device is particularly complex and will have to be aligned with the outcome of the discussion in #127. We may need specific scenarios to test the behaviour agreed in #127, and for other scenarios which do not test device support, just assume that a valid device object is provided as a config variable.
In the test plan we can provide different inputs to test different behaviours, but this would not test that the implementation is right or that the implementation is able to provide all possible responses. For example, in location-verification, if an implementation always answers with verificationResult: TRUE or FALSE, but never answers with PARTIAL or UNKNOWN, that would be difficult to test, unless we add some precondition to a test scenario requiring input values that force that response.
Thanks @bigludo7, please see my comments inline:
Commented above. Happy to keep it standard, but we'll have to figure out a way to express in our Gherkin files that the value for a certain variable will be provided separately. We may use a separate template file and refer to it, or maybe write Scenario Outlines with placeholders. Tools that automate the execution of feature files may have problems with either approach. Any feedback on how to handle this is welcome.
Agree. It is key to close #127, and I would isolate testing transversal device particularities from other more API-specific logic.
In the first iterations, I think that this is enough. In more mature phases we may try to test that service implementations follow the agreed implementation guidelines.
To move the discussion further with some examples: one of the main decisions to make is the level of detail of each scenario, especially regarding the preconditions to set up the scenario. For example, for a given API, is it enough to design something like option 1, or should we try to achieve something more similar to option 2?
Please take a look at the example in camaraproject/DeviceLocation#189, which illustrates the proposal here.
Hello @jlurien - From the Orange side (checked with @patrice-conil) we're OK with your proposal provided in the Device Location project. Thanks.
Option 1 is preferable for us, to give flexibility to the test implementer and not restrict the set of static input values. @akoshunyadi @shilpa-padgaonkar
Problem description
The first version of the API testing guidelines has to be enhanced with more detailed instructions to have consistent Test Plans across the WGs.
Possible evolution
Enhance the API testing guidelines with more detailed instructions.
Build the necessary artifacts to support the test plans.
Additional context
We include here a draft of the proposal, to trigger the discussion. When we reach enough consensus on the approach we can create a PR with the modifications.
Proposal
Testing implementations can use the Gherkin feature files in two different ways:
For test automation with a framework that takes Gherkin as input, such as Cucumber or Behave. For those tools, the steps have to be written in a way that allows them to be mapped to a function with some optional arguments.
To be interpreted by a human as input for codification with another tool that does not support Gherkin natively, such as Postman, SoapUI, etc. In these cases, it is important that the scenario as a whole is unambiguous and states how to build the request and validate the response.
Design principles
Feature structure
A feature file will typically test an API operation and will consist of several scenarios testing the behaviour of the API operation under different conditions or with different input content, validating that the response complies with the expected HTTP status, that the response body meets the expected JSON schema, and that some properties have the expected values.
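As a purely illustrative sketch of this structure (the API name, resource path and step wording below are hypothetical, not part of the guidelines), a feature file could be organised like this:

```gherkin
Feature: CAMARA location-verification API, verify operation

  Background:
    # Common setup shared by all scenarios of this feature
    Given the resource "/location-verification/v0/verify"

  Scenario: Valid request returns 200 and a schema-compliant body
    Given a valid request body for the operation
    When the request is sent
    Then the response status code is 200
    And the response body complies with the expected JSON schema

  Scenario: Request without Authorization header returns 401
    Given a valid request body for the operation
    And the request does not include an Authorization header
    When the request is sent
    Then the response status code is 401
```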
Configuration variables
Most scenarios will test a request and its response. Commonly, the values to fill request bodies will not be known in advance, as they will be specific to the test environment, and will have to be provided as a separate set of configuration variables.
A first stage is to identify those variables, e.g. device identifiers (phone numbers, IP addresses), the status of a testing device, etc. How those variables are set and fed into the test execution will depend on the testing tool (Postman environment, context in Behave, etc.).
In order to pass information between steps and make use of the variables, we have to agree on some syntax to refer to those variables. For example, Postman uses {{variable}} and Gherkin uses <variable> for Scenario Outlines, but this is not properly a Scenario Outline case. As a proposal, we may use something like [CONFIG:var]. Example:
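A possible illustration of this notation, with hypothetical step wording and variable names (nothing below is an agreed syntax):

```gherkin
Scenario: Location verification for a provisioned test device
  Given the request body property "$.device" is set to [CONFIG:testDevice]
  And the request body property "$.maxAge" is set to [CONFIG:maxAge]
  When the request is sent
  Then the response status code is 200
```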
A background section at the beginning of the feature file may set common configuration variables and attributes for all scenarios, e.g. apiServer, baseUrl, resource, etc.
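A minimal sketch of such a Background, assuming the [CONFIG:var] notation proposed above; the variable names and step wording are placeholders:

```gherkin
Background:
  # Values provided by the implementation for the test environment
  Given the API server at [CONFIG:apiServer]
  And the base URL [CONFIG:baseUrl]
  And the resource "/location-verification/v0/verify"
  And a default request body [CONFIG:defaultRequestBody]
```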
Request setup
Typically the scenario will have to set up the request as part of the Given steps, filling the necessary path, query, header or body parameters. The guidelines can define a set of reusable steps for this, e.g.:
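Some possible shapes for these reusable steps (the exact wording would have to be agreed; the parameter and variable names are only examples):

```gherkin
Given the request path parameter "sessionId" is set to [CONFIG:sessionId]
And the request query parameter "maxAge" is set to "60"
And the request header "Authorization" is set to a valid access token
And the request body is set to:
  """
  { "device": [CONFIG:testDevice] }
  """
```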
Request sending
Usually only one When step will be necessary:
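For instance, a single generic step could trigger the request (wording illustrative):

```gherkin
When the HTTP "POST" request is sent to the resource
```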
For complex scenarios concatenating several requests, subsequent requests will usually be included in Then steps after the response to the first one is validated.
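A sketch of such a concatenation, assuming a hypothetical session-based operation (resource paths and step wording are illustrative):

```gherkin
When the HTTP "POST" request is sent to "/sessions"
Then the response status code is 201
And the response body property "$.sessionId" is stored as "sessionId"
And the HTTP "GET" request is sent to "/sessions/{sessionId}"
And the response status code is 200
```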
Response validation
Several Then steps can validate the response. Some may be quite common, e.g.:
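Some possible common validation steps (wording illustrative):

```gherkin
Then the response status code is 200
And the response header "Content-Type" is "application/json"
And the response body complies with the JSON schema of the operation response
```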
Others may be quite specific to the API logic and will have to be designed ad hoc, for example:
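As an example of an API-specific validation, for the location-verification case mentioned earlier (the property name and values follow that discussion; the step wording is only a sketch):

```gherkin
And the response body property "$.verificationResult" is one of "TRUE", "FALSE", "PARTIAL", "UNKNOWN"
```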
Open points
Some to start with...
Device identification
Identification of the device. There are many possible combinations that comply with the Device schema, and there are no clear guidelines about which ones are required in order to pass the certification.
Do we have to test many combinations and validate which ones are accepted in a certain environment?
Do we have to assume that device as an object is a configuration variable and that each environment will provide a valid one?
This topic links with the discussion in #127.
Scope/reach of the testing plan
Does the Test plan have to test only that the interface complies with the spec, or does it have to test that the service is correctly provided? For example:
Test that a device location is correctly verified, or just that a valid request gets a 200, even if the verificationResult value is not correct?
Test that a device status for roaming is correct?