
Orquesta workflow unit testing UX #27

Open
guzzijones opened this issue May 22, 2020 · 10 comments


guzzijones commented May 22, 2020

Purpose

Allow unit testing of orquesta workflows.
Workflows with many forks need to be unit tested with fixtures so that their behavior can be regression tested after changes are made to the workflow YAML file.

UX: new binary

  1. orquesta-test, built from the existing test harness at
    https://github.com/StackStorm/orquesta/blob/8035754f474e7872b0efd9e82e307a3121da027d/orquesta/tests/unit/base.py#L242
    which will be refactored into the main orquesta module.

usage:

  1. orquesta-test -f [path to fixture] -l [path to workflow files named in fixture]
    runs a single test against the workflow named in the fixture
  2. orquesta-test -d [path to directory containing fixtures] -l [path to workflow files named in fixture]
    runs an entire directory of fixtures automatically, one after the other

fixture spec, as a JSON-schema-style object written in YAML:

workflow:
  description: filename of workflow
  type: string
  required: true

expected_task_seq:
  description: list of task names in the expected order
  type: array
  items:
    type: string
  required: true

expected_routes:
  description: routes expected in the workflow
  type: array
  items:
    type: array
    items:
      type: number
  required: false

inputs:
  description: input parameters to workflow
  type: object

mock_statuses:
  description: return status for each task
  type: array
  items:
    type: integer
  required: false

mock_results:
  description: array of results for each task 
  type: array
  required: false

expected_workflow_status:
  description: expected workflow status output
  type: number
  
expected_output:
  type: object
  required: false

expected_term_tasks:
  type: array
  required: false
  items:
    type: string
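
A hypothetical fixture conforming to the spec above might look like the following. The workflow name, task names, and values here are purely illustrative, not taken from an actual orquesta test fixture, and the numeric statuses simply follow the number/integer types declared in the spec:

```yaml
# Hypothetical example fixture. The workflow name, task names, and
# values are illustrative only. Statuses are numeric here because the
# spec above types them as number/integer.
workflow: sequential.yaml

inputs:
  name: Stanley

expected_task_seq:
  - task1
  - task2
  - task3

expected_routes:
  - []

mock_statuses:
  - 0
  - 0
  - 0

mock_results:
  - result of task1
  - result of task2
  - result of task3

expected_workflow_status: 0

expected_output:
  greeting: result of task3

expected_term_tasks:
  - task3
```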

m4dcoder commented May 26, 2020

Can we make this part of st2-run-pack-tests instead of a separate command? That would be more consistent with the rest of the pack testing at https://docs.stackstorm.com/development/pack_testing.html.


guzzijones commented May 26, 2020 via email


guzzijones commented May 26, 2020 via email

@guzzijones

@m4dcoder could you explain what a route is? I am guessing it is the number of HEAD nodes on the graph of the workflow?

@guzzijones

Ahh, I see, nosetests just uses Python's built-in unit testing. Yeah, that makes sense.


m4dcoder commented May 26, 2020

could you explain what a route is?

A route is a sequence of tasks that records the origin of a fork/split. A fork/split in orquesta occurs at a non-join task that is referenced multiple times in a workflow. If such a task is called multiple times during workflow execution, this results in multiple independent branches.

Let's take this workflow as an example. https://github.com/StackStorm/orquesta/blob/master/orquesta/tests/fixtures/workflows/native/splits.yaml.

In the example, task4 and task8 are referenced multiple times and neither task is a join. So when task4 is called by task2 and task3, there will be two separate branches for task4, one originating from task2 and the other from task3. Similarly, when task8 is called by task1 and task7, there will be multiple branches for task8. The routes keep track of these points of branching.

In this case, the workflow has a total of 5 routes, one for each path along which the forks occur:

  1. task1->task8->...
  2. task1->task2->task4->...
  3. task1->task3->task4->...
  4. task1->task2->task4->...->task8->...
  5. task1->task3->task4->...->task8->...

You can take a look at the unit test at https://github.com/StackStorm/orquesta/blob/master/orquesta/tests/unit/conducting/native/test_workflow_split.py#L54 to better understand.

In the unit tests, you'll see the notation task1__t0, where t0 stands for the task transition under next at index 0. There can be multiple task transitions between two tasks; the suffix keeps track of which task transition in the workflow definition is being referred to.
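
As a rough illustration (this is not orquesta's actual parsing code), the task1__t0 notation can be read as a (task name, transition index) pair:

```python
# Illustrative sketch only, not orquesta's implementation: split the
# "task1__t0" notation into the task name and the transition index.
def parse_task_transition(label):
    # rpartition splits on the last "__t", so task names containing
    # underscores are still handled.
    task_name, _, transition = label.rpartition("__t")
    return task_name, int(transition)

print(parse_task_transition("task1__t0"))  # ('task1', 0)
```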

@m4dcoder

@guzzijones If it requires a separate command to run, then it's fine. But as for running under st2-run-pack-tests, I want users to be able to define test cases in YAML/JSON and include the datasets for mocking action execution results. As it stands with orquesta-test, I prefer the datasets for each action execution to be in separate files (or at least have an option to put them in separate files and then pass the file paths to the command). The results can be huge, and it's hard to pass them all through the command as arguments.

@guzzijones

10-4 on defining tests using yaml/json files.
10-4 on orquesta-test command in orquesta using separate fixture files for each dataset to test.

@guzzijones

10-4 on routes. Thank you for the clear explanation and the location of examples.

@guzzijones

My TLDR on routes:
If a task is referenced in multiple locations, the route tells you which transition it was reached from.
