# Test Runner for YAML rules

Via the `test` subcommand, kantra exposes a test runner for testing YAML rules written for analyzer-lsp.

The input to the test runner is a set of tests written in YAML; the output is a report.

## Usage

This section covers:

1. Writing tests
2. Running tests
3. Understanding output

### Writing tests

Tests for a rules file are written in a YAML file whose name ends with the `.test.yaml` suffix.

A tests file contains three top-level fields, `rulesPath`, `providers`, and `tests`:

```yaml
rulesPath: "/optional/path/to/rules/file"
providers:
  - name: "go"
    dataPath: "/path/to/test/data/for/this/provider"
tests:
  - ruleID: "rule-id-for-this-test"
    testCases:
      - name: "test-case-name"
      [...]
```

- `rulesPath`: Relative path to the file containing the rules these tests apply to
- `providers`: List of configs, each containing the configuration for a specific provider to be used when running tests
- `tests`: List of tests to run, each containing the test definition for a specific rule in the associated rules file

Note that `rulesPath` is optional. If it is not specified, the runner looks for a rules file in the same directory with the same name as the tests file, minus the `.test` part of the suffix (for example, for a tests file named `my-rules.test.yaml`, the runner looks for `my-rules.yaml`).
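
Putting this together, a minimal sketch of a complete tests file is shown below. The provider name, rule ID, paths, and file name (`sample-rules.test.yaml`) are all hypothetical:

```yaml
# sample-rules.test.yaml -- rulesPath is omitted, so the runner
# resolves the rules file to sample-rules.yaml in the same directory
providers:
  - name: "java"
    dataPath: "./data/sample-app"
tests:
  - ruleID: "sample-rule-00001"
    testCases:
      - name: "tc-finds-incidents"
        analysisParams:
          mode: "full"
        hasIncidents:
          atLeast: 1
```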

#### Defining providers

The field `providers` defines a list of configs, each specific to a provider:

```yaml
providers:
  - name: <name_of_the_provider>
    dataPath: <path_to_test_data>
tests:
  [...]
```

`name` is the name of the provider to which the config applies, and `dataPath` is the relative path to the test data to be used when testing rules for that provider.

Note that `dataPath` must be relative to the directory in which the tests file exists.

If all tests under a ruleset share the values of the `providers` field (e.g., they use a common data directory in all tests for a given provider), this config can also be defined at the ruleset level in a special file, `testing-config.yaml`. In that case, the config present in this file applies to all tests in that directory. A more specific config for a certain file can still be defined in the tests file; in that case, values in the tests file take precedence over values at the ruleset level.

See an example of a ruleset-level config in `../pkg/testing/examples/ruleset/testing-config.yaml`.
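
As a sketch, a ruleset-level `testing-config.yaml` carries the same `providers` structure as a tests file; the provider name and data path below are illustrative:

```yaml
# testing-config.yaml -- shared provider config applied to all
# tests files in this ruleset directory (illustrative values)
providers:
  - name: "java"
    dataPath: "./data/shared-app"
```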

Note that a config must be defined for every provider present in the rules file.

#### Defining tests

The field `tests` defines a list of tests, each specific to a rule in the rules file:

```yaml
providers:
  [...]
tests:
  - ruleID: test-00
    testCases:
      - name: test-tc-00
        analysisParams:
          depLabelSelector: "!konveyor.io/source=open-source"
          mode: "full"
        hasIncidents:
          exactly: 10
          messageMatches: "test"
          codeSnipMatches: "test"
      - name: test-tc-01
        analysisParams:
          mode: "source-only"
        hasTags:
          - "test"
        hasIncidents:
          locations:
          - lineNumber: 10
            fileURI: file://test
            messageMatches: "message"
```

##### Test

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `ruleID` | string | Yes | ID of the rule this test applies to |
| `testCases` | []TestCase | Yes | List of test cases (see TestCase) |

##### TestCase

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `name` | string | Yes | Unique name for the test case; can be used to filter test cases |
| `analysisParams` | AnalysisParams | Yes | Analysis parameters to use when running this test case (see AnalysisParams) |
| `hasIncidents` | HasIncidents | No | Passing criteria that compares produced incidents (see HasIncidents) |
| `hasTags` | []string | No | Passing criteria that compares produced tags; passes the test case when all tags are present in the output |
| `isUnmatched` | bool | No | Passes the test case when the rule is NOT matched |
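
For instance, `isUnmatched` can assert that a rule does not fire at all; the rule ID and test case name in this sketch are hypothetical:

```yaml
tests:
  - ruleID: "sample-rule-00002"
    testCases:
      - name: "tc-not-matched"
        analysisParams:
          mode: "full"
        # passes only when sample-rule-00002 does NOT match
        isUnmatched: true
```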

##### AnalysisParams

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `depLabelSelector` | string | No | Dependency label selector expression to pass as `--dep-label-selector` to the analyzer |
| `mode` | string | No | Analysis mode, one of `source-only` or `full` |

##### HasIncidents

HasIncidents defines the criteria for passing a test case. There are two ways to define the criteria; only one of the two can be used in a given test case:

1. Count based: This criteria is based on the count of incidents. It can be defined using the following fields under `hasIncidents`:

   | Field | Type | Required | Description |
   | --- | --- | --- | --- |
   | `exactly` | int | Yes | Produced incidents must be exactly equal to this number for the test case to pass |
   | `atLeast` | int | Yes | Produced incidents must be greater than or equal to this number for the test case to pass |
   | `atMost` | int | Yes | Produced incidents must be less than or equal to this number for the test case to pass |
   | `messageMatches` | string | No | In all incidents, the message must match this pattern for the test case to pass |
   | `codeSnipMatches` | string | No | In all incidents, the code snippet must match this pattern for the test case to pass |

   Only one of `exactly`, `atLeast`, or `atMost` can be defined at a time (see the sketch after this list).

2. Location based: This criteria is based on the location of each incident. It can be defined using the following field under `hasIncidents`:

   | Field | Type | Required | Description |
   | --- | --- | --- | --- |
   | `locations` | []Location | No | Passing criteria based on the location of each incident rather than just the count |

   Each Location has the following fields:

   | Field | Type | Required | Description |
   | --- | --- | --- | --- |
   | `fileURI` | string | Yes | An incident must be found in this file for the test case to pass |
   | `lineNumber` | int | Yes | An incident must be found on this line number for the test case to pass |
   | `messageMatches` | string | No | The message must match this pattern for the test case to pass |
   | `codeSnipMatches` | string | No | The code snippet must match this pattern for the test case to pass |
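
Here is the count-based sketch referenced above, using `atLeast`; the rule ID and test case name are hypothetical:

```yaml
tests:
  - ruleID: "sample-rule-00003"
    testCases:
      - name: "tc-at-least"
        analysisParams:
          mode: "full"
        hasIncidents:
          # passes when 5 or more incidents are produced; exactly
          # and atMost must not be set alongside atLeast
          atLeast: 5
```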

### Running tests

To run tests in a single file:

```sh
kantra test /path/to/a/single/tests/file.test.yaml
```

To run tests in a ruleset:

```sh
kantra test /path/to/a/ruleset/directory/
```

To run tests in multiple different paths:

```sh
kantra test /path/to/a/ruleset/directory/ /path/to/a/test/file.test.yaml
```

To run specific tests by rule IDs:

```sh
kantra test /path/to/a/ruleset/directory/ -t "RULE_ID_1, RULE_ID_2"
```

The `-t` option allows specifying a comma-separated list of rule IDs to select specific tests.

A specific test case within a test can also be selected using the `-t` option. To run specific test cases, each value in the comma-separated list of `-t` takes the form `<RULE_ID>#<TEST_CASE_NAME>`:

```sh
kantra test /path/to/a/ruleset/directory/ -t RULE_ID_1#TEST_CASE_1
```

Note that `#` is a reserved character used to separate the rule ID from the test case name in the filter. The name of a test case must not itself contain `#`.

### Test Output

When a test passes, the runner creates output that looks like:

```
- 156-java-rmi.windup.test.yaml 2/2 PASSED
 - java-rmi-00000               1/1 PASSED
 - java-rmi-00001               1/1 PASSED
------------------------------------------------------------
  Rules Summary:      2/2 (100.00%) PASSED
  Test Cases Summary: 2/2 (100.00%) PASSED
------------------------------------------------------------
```

The runner will clean up all temporary directories when all tests in a file pass.

If a test fails, the runner creates output that looks like:

```
- 160-local-storage.windup.test.yaml 0/1 PASSED
 - local-storage-00001               0/1 PASSED
   - tc-1                            FAILED
     - expected at least 48 incidents, got 18
     - find debug data in /tmp/rules-test-242432604
------------------------------------------------------------
  Rules Summary:      0/1 (0.00%) PASSED
  Test Cases Summary: 0/1 (0.00%) PASSED
------------------------------------------------------------
```

In this case, the runner leaves the temporary directories behind for debugging. In the above example, the temporary directory is `/tmp/rules-test-242432604`.

Among other files, the important files for debugging in this directory are:

- `analysis.log`: the full log of the analysis
- `output.yaml`: the output generated by the analysis
- `provider_settings.json`: the provider settings used for the analysis
- `rules.yaml`: the rules used for the analysis
- `reproducer.sh`: a command you can run directly on your system to reproduce the analysis as-is

The temporary directory may also contain files generated by the providers, including their own logs; these can be useful for debugging as well.
