Enhancement of the Testing Guidelines #158

Closed
jlurien opened this issue Mar 7, 2024 · 8 comments · Fixed by #203
Labels
enhancement New feature or request

Comments

@jlurien
Contributor

jlurien commented Mar 7, 2024

Problem description
The first version of the Testing Guidelines has to be enhanced with more detailed instructions, so that Test Plans are consistent across the WGs.

Possible evolution

  • Enhance the API testing guidelines with more detailed instructions.

  • Build the necessary artifacts to support testing plans.

Additional context

We include here a draft of the proposal, to trigger the discussion. When we reach enough consensus on the approach we can create a PR with the modifications.


Proposal

Testing implementations can use the Gherkin feature files with two different approaches:

  • For testing automation with a framework that takes Gherkin as input, such as Cucumber or Behave. For those tools, the steps have to be written in a way that allows mapping them to a function with some optional arguments (see the sketch after this list).

  • To be interpreted by a human as input for codification with another tool that does not support Gherkin natively, such as Postman, SoapUI, etc. In these cases, it is important that the scenario as a whole is unambiguous, and states how to build the request and validate the response.
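To illustrate the first approach, here is a minimal Behave (Python) sketch of how such steps could map to functions. The step texts follow the proposal below; the context attributes (request_body, session, url) are assumptions for illustration only:

```python
# Minimal Behave sketch: each Gherkin step maps to a Python function, and the
# quoted placeholders in the step text are parsed and passed as arguments.
# The context attributes (request_body, session, url) are illustrative only.
from behave import given, when


@given('the request body field "{json_path}" is set as "{value}"')
def step_set_body_field(context, json_path, value):
    # Behave extracts {json_path} and {value} from the step text.
    context.request_body[json_path] = value


@when('the HTTP "{method}" request is sent')
def step_send_request(context, method):
    # A real implementation would build and send the request, e.g. with the
    # `requests` library, and keep the response for the Then steps.
    context.response = context.session.request(
        method, context.url, json=context.request_body
    )
```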

Design principles

Feature structure

A feature file will typically test an API operation and will consist of several scenarios testing the behaviour of the API operation under different conditions or input content, validating that the response complies with the expected HTTP status, that the response body meets the expected JSON schema, and that some properties have the expected values.

Configuration variables

Most scenarios will test a request and its response. Commonly, the values to fill request bodies will not be known in advance, as they will be specific to the test environment, and will have to be provided as a separate set of configuration variables.

A first stage is to identify those variables, e.g. device identifiers (phone numbers, IP addresses), the status of a testing device, etc. How those variables are set and fed into the test execution will depend on the testing tool (Postman environment, context in Behave, etc.).
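For instance, with Behave this could be done in environment.py, loading the environment-specific values into the context before the features run (the file name and structure are assumptions):

```python
# environment.py (Behave hook file) -- a possible way to feed the
# test-environment configuration variables into the execution context.
# The file name "test_config.json" and its structure are assumptions.
import json


def before_all(context):
    # Behave calls this hook once, before any feature runs.
    with open("test_config.json") as f:
        context.config_vars = json.load(f)
```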

In order to pass information between steps and make use of the variables, we have to agree on some syntax to refer to those variables. For example, Postman uses {{variable}}, and Gherkin uses <variable> for Scenario Outlines, but this is not properly a Scenario Outline case. As a proposal, we may use something like [CONFIG:var].

Example:

  Scenario: Description
    Given the configuration variables:
      | variable  | description                           |
      | device    | Object identifying a device           |
    And the request body:
      """
      {
        "device": [CONFIG:device],
        ...
      }
      """
    When the HTTP "POST" request is sent
    Then the response status code is "200"
    And the response JSON field "$.device" is "[CONFIG:device]"

A Background section at the beginning of the feature file may set common configuration variables and attributes for all scenarios, e.g. apiServer, baseUrl, resource, etc.
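As a sketch of how the proposed [CONFIG:var] syntax could be resolved by a step implementation (Behave, assuming the variables were loaded into context.config_vars as in the hook above):

```python
# Sketch of a Behave step that resolves [CONFIG:var] placeholders in a
# docstring request body. Assumes context.config_vars was populated in a
# before_all hook; the step wording follows the proposal above.
import json
import re

from behave import given


@given("the request body:")
def step_request_body(context):
    def substitute(match):
        # Replace [CONFIG:name] with the JSON-encoded configured value,
        # so that objects like "device" expand into valid JSON.
        return json.dumps(context.config_vars[match.group(1)])

    # context.text holds the docstring that follows the step.
    body = re.sub(r"\[CONFIG:(\w+)\]", substitute, context.text)
    context.request_body = json.loads(body)
```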

Request setup

Typically, the scenario will have to set up the request as part of the Given steps, filling the necessary path, query, header or body parameters. The guidelines can define a set of reusable steps for this, e.g.:

- the resource "{url_path}"
- the resource "{url_path_str_format}" is parameterized with values:
    | param_name | param_value |
    | ---------- | ----------- |
    | xxx        | yyy         |
- the header "{name}" is set as "{value}"
- the headers:
    | param_name | param_value |
    | ---------- | ----------- |
    | xxx        | yyy         |
- the query parameter "{name}" is set as "{value}"
- the query parameters:
    | param_name | param_value |
    | ---------- | ----------- |
    | xxx        | yyy         |
- the request body field "{json_path}" is set as "{value}"
- the request body:
    ```
    {
      "xxx": "yyy"
    }
    ```

Request sending

Usually, only one When step will be necessary:

When the HTTP "{method}" request is sent

For complex scenarios concatenating several requests, subsequent requests will usually be included in Then steps after the response to the first one is validated.

Response validation

Several Then steps can validate the response. Some may be quite common, e.g.:

- the response status code is "200"
- the response complies with the JSON schema at "{OAS_schema_path}"
- the response JSON field "{json_path}" is "{value}"

Others may be quite specific for the API logic and have to be designed ad-hoc.
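As an illustration, the common Then steps above could be implemented in Behave like this (using the jsonpath-ng package for JSONPath evaluation; the library choice and response handling are assumptions):

```python
# Hedged Behave sketch of the common response-validation steps. Assumes
# context.response is a `requests`-style response object and uses the
# jsonpath-ng package to evaluate the "$.field" expressions.
from behave import then
from jsonpath_ng import parse


@then('the response status code is "{status:d}"')
def step_status_code(context, status):
    assert context.response.status_code == status


@then('the response JSON field "{json_path}" is "{value}"')
def step_json_field(context, json_path, value):
    matches = parse(json_path).find(context.response.json())
    assert matches, f"no match for {json_path}"
    assert str(matches[0].value) == value
```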

Open points

Some to start with...

Device identification

Identification of the device. There are many possible combinations that comply with the Device schema, and there are no clear guidelines about which ones are required in order to pass the certification.

  • Do we have to test many combinations and validate which ones pass in a certain environment?

  • Do we have to assume that device as an object is a configuration variable and each environment will provide a valid one?

This topic links with the discussion in #127

Scope/reach of the testing plan

Does the Test Plan have to test only that the interface complies with the spec, or does it have to test that the service is correctly provided? For example:

  • Test that a device location is correctly verified, or just that a valid request gets a 200, even if the verificationResult value is not correct?

  • Test that the device status for roaming is correct?

@jlurien jlurien added the enhancement New feature or request label Mar 7, 2024
@mdomale

mdomale commented Apr 2, 2024

@jlurien Below are a few thoughts from our side.
Configuration variables
- We recommend using Gherkin syntax for variables; common setup can be achieved using a Background section.
- We can consider using a separate configuration file with values defined for the variables of multiple feature files.
- For payloads we can consider separate JSON files to be used for configuration. We can use a Scenario Outline with Examples for different variable values.

Request setup
It can be achieved as part of a Background step, but we cannot guarantee the same setup (payload) for different methods of the same API.

For the open points
We have to assume that device as an object is a configuration variable and each environment will provide a valid one.

Scope of Test Plan
We can have a recommendation such as validating the response code and the mandatory response parameters. Here a decision has to be taken: is validating one possible case enough, or all possible sets of values?

@akoshunyadi @shilpa-padgaonkar

@bigludo7
Collaborator

bigludo7 commented Apr 3, 2024

Hello
Compiled with @patrice-conil and @GuyVidal for the Orange perspective.

First thanks @jlurien for the proposal.

Configuration variables
+1 to @mdomale points

  • about keeping standard Gherkin syntax: no use of the specific [CONFIG:var] grammar, but use <variable> instead;
  • environment values must be set in the Background.
    Let's keep it standard.

Request set-up
We're fine with @jlurien's proposal globally. Perhaps we could make some adjustments after first use/examples.

Request Sending
OK for us

Response validation
OK for us

Open Points
Device Identification
For us, we must follow what has been described in #127.
We can probably contribute a Gherkin file to describe this.

Scope/reach of the testing plan
We test only the interface and not the service itself.

@jlurien
Contributor Author

jlurien commented Apr 3, 2024

Thanks for the feedback @mdomale. Some comments/questions below:

> @jlurien Below are a few thoughts from our side. Configuration variables: We recommend using Gherkin syntax for variables; common setup can be achieved using a Background section.

Do you mean using <variable>? The advantage of this is that Gherkin formatters work better with it, but as it is reserved for Scenario Outlines, some tools may expect an Examples section with values to be substituted, which will not be available while defining the test plan.

> We can consider using a separate configuration file with values defined for the variables of multiple feature files.

In general, values for testing variables will be provided by the implementation to the tester for a certain environment. We may consider defining a template with the set of variables that have to be provided.

> For payloads we can consider separate JSON files to be used for configuration. We can use a Scenario Outline with Examples for different variable values.

It is a possibility to move the request bodies to separate files. The advantage is that they can be used as values for Scenario Outlines, but it will require maintaining many independent files for a test plan and defining clear file-naming rules. A sketch of a step loading such a file is below.
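A possible Behave step for that approach (the step wording and the payloads/ directory layout are assumptions for illustration):

```python
# Sketch: load a request body from a separate JSON file, following the idea
# of keeping payloads outside the feature files. The "payloads/" directory
# and the step wording are assumptions.
import json

from behave import given


@given('the request body is loaded from "{file_name}"')
def step_body_from_file(context, file_name):
    with open(f"payloads/{file_name}") as f:
        context.request_body = json.load(f)
```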

> Request setup: It can be achieved as part of a Background step, but we cannot guarantee the same setup (payload) for different methods of the same API.

Agree. We can only set up a generic request body in the Background, which may work for some generic scenarios testing generic errors, e.g. 401 without an Authorization header, etc. We may define a default request body and then allow scenarios to overwrite it.

> For the open points: We have to assume that device as an object is a configuration variable and each environment will provide a valid one.

Testing of device is particularly complex and will have to be aligned with the outcome of the discussion in #127. We may need specific scenarios to test the behaviour agreed in #127, and for other scenarios which do not test device support, just assume that a valid device object is provided as a config variable.

> Scope of Test Plan: We can have a recommendation such as validating the response code and the mandatory response parameters. Here a decision has to be taken: is validating one possible case enough, or all possible sets of values?

In the test plan we can provide different inputs to test different behaviours, but this would not test that the implementation is right or that it is able to provide all possible responses. For example, in location-verification, if an implementation always answers with verificationResult TRUE or FALSE, but never with PARTIAL or UNKNOWN, that would be difficult to test, unless we add some precondition to a test scenario requiring input values that force that response.

@akoshunyadi @shilpa-padgaonkar

@jlurien
Contributor Author

jlurien commented Apr 3, 2024

Thanks @bigludo7, please see my comments inline:

> Hello. Compiled with @patrice-conil and @GuyVidal for the Orange perspective.
>
> First thanks @jlurien for the proposal.
>
> Configuration variables: +1 to @mdomale points
>
>   • about keeping standard Gherkin syntax: no use of the specific [CONFIG:var] grammar, but use <variable> instead;
>   • environment values must be set in the Background. Let's keep it standard.

Commented above. Happy to keep it standard, but we'll have to figure out a way to express in our Gherkin files that the value for a certain variable will be provided separately. We may use a separate template file and refer to it, or maybe write Scenario Outlines with placeholders. Tools that automate the execution of feature files may have problems with certain approaches. Any feedback on how to handle this is welcome.

> Request set-up: We're fine with @jlurien's proposal globally. Perhaps we could make some adjustments after first use/examples.

> Request sending: OK for us.

> Response validation: OK for us.

> Open points. Device identification: For us, we must follow what has been described in #127. We can probably contribute a Gherkin file to describe this.

Agree. It is key to close #127, and I would isolate the testing of transversal device particularities from other, more API-specific logic.

> Scope/reach of the testing plan: We test only the interface and not the service itself.

In the first iterations, I think that this is enough. In more mature phases we may try to test that service implementations follow the agreed implementation guidelines.

@jlurien
Contributor Author

jlurien commented Apr 12, 2024

To move the discussion further with some examples:

One of the main decisions to make is the level of detail for each scenario, especially regarding the preconditions to set up the scenario. For example, for an API with device in the input, there should be a case to test that there is compliance with the schema, but there are many possible variations for a wrong request body.

Is it enough to design something like option 1, or should we try to achieve something more similar to option 2?


  # Assuming that a default valid request body is setup in the Background 
  
  # Option 1: High level step, not indicating how to write the test.
  # Tester can decide how to build the request body and how many cases to test
  Scenario: Validate that device complies with the schema
    Given the request body property "$.device" does not comply with the schema
    When the HTTP "POST" request is sent
    Then the response status code is 400
    And the response property "$.status" is 400
    And the response property "$.code" is "INVALID_ARGUMENT"
    And the response property "$.message" contains a user friendly text


  # Option 2: Detailed steps with explicit content to test.
  # Implementations will know in advance the level of testing
  Scenario Outline: Validate that device phoneNumber complies with the schema
    Given the request body property "$.device.phoneNumber" is set to <value>
    When the HTTP "POST" request is sent
    Then the response status code is 400
    And the response property "$.status" is 400
    And the response property "$.code" is "INVALID_ARGUMENT"
    And the response property "$.message" contains a user friendly text

    Examples:
      | value                                                   |
      | foo                                                     |
      | *092828#                                                |
      | 1234567890                                              |
      | 12ft23333                                               |
      | ""                                                      |
      | +178931297489º17249017409º70937498º73297932790723097091 |

  Scenario Outline: Validate that device ipv4Address complies with the schema
    Given the request body property "$.device.ipv4Address" is set to
      """
        { 
          "publicAddress": <publicAddress>,
          "publicPort": <publicPort>
        }
      """
    When the HTTP "POST" request is sent
    Then the response status code is 400
    And the response property "$.status" is 400
    And the response property "$.code" is "INVALID_ARGUMENT"
    And the response property "$.message" contains a user friendly text

    Examples:
      | publicAddress | publicPort |
      | 1.2.3.4.5     | 1234       |
      | foo           | 1234       |
      | 1.2.3.4       | foo        |

  # etc, there will be lots of scenarios

@jlurien
Contributor Author

jlurien commented Apr 24, 2024

Please take a look at the example in camaraproject/DeviceLocation#189, which illustrates the proposal here.

@bigludo7
Collaborator

bigludo7 commented Apr 30, 2024

> Please take a look at the example in camaraproject/DeviceLocation#189, which illustrates the proposal here.

Hello @jlurien - From the Orange side (checked with @patrice-conil) we're OK with your proposal provided in the Device Location project. Thanks.

@mdomale

mdomale commented May 19, 2024

> To move the discussion further with some examples:
>
> [...]
>
> Is it enough to design something like option 1, or should we try to achieve something more similar to option 2?

Option 1 is preferable for us, to ensure we give flexibility to the test implementer and do not restrict the inputs to a static set of values. @akoshunyadi @shilpa-padgaonkar
