
Adds example tests for launch descriptions #165

Closed
wants to merge 2 commits

Conversation

@hidmic hidmic (Contributor) commented Jan 17, 2019

Closes #166. Still a work in progress, but it should help get the gist of it across. I think something like this should cover most if not all use cases moving forward. Please pile more use cases onto this PR if you have them.

@tfoote tfoote added the in progress Actively being worked on (Kanban column) label Jan 17, 2019
@hidmic hidmic (Contributor Author) commented Jan 17, 2019

BTW don't even try to run this, it won't work :)

@hidmic hidmic force-pushed the hidmic/launch-testing-examples branch from 34fbebb to 195a3e4 Compare January 18, 2019 20:12
@hidmic hidmic changed the title [WIP] Adds example tests for pub/sub launch descriptions. [WIP] Adds example tests for launch descriptions Jan 18, 2019
@hidmic hidmic (Contributor Author) commented Jan 18, 2019

Alright, now it looks much better. To summarize:

  • For testing at the executable level, we delegate to testing frameworks (e.g. gtest or pytest). Since these just implement policies on how to deal with the underlying executable (how to execute it, which special command line arguments should be forwarded to it, what its output is and how to deal with it, etc.), they are effectively decoupled from that executable. To comply with the current API, though, we can collapse execution and executable (e.g. GTestNode(...) instead of GTest(Node(...))).
  • For testing at the launch level, Assert actions act upon predicates made up of variables that are linked to different launch-level entities, such as the output of an executable, an event being triggered, etc. Different flavors of Assert are available: to assert an invariant throughout the test, to assert an invariant for a fraction of the test, to assert a condition holds once, to assert a condition eventually holds, to assert multiple conditions hold in a given sequence or order, etc. (a toy sketch of the "eventually holds" flavor follows this list). On a first iteration, we can skip their implementation though.
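
A toy, framework-free sketch of the "assert a condition eventually holds" flavor, assuming nothing beyond the Python standard library (assert_eventually is an illustrative name, not the proposed API):

import time


def assert_eventually(condition, timeout=10.0, period=0.1):
    # Poll `condition` until it returns True or `timeout` seconds elapse.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(period)
    raise AssertionError('condition did not hold within %gs' % timeout)


# Usage: passes once the predicate becomes true, well before the timeout.
start = time.monotonic()
assert_eventually(lambda: time.monotonic() - start > 0.5, timeout=2.0)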

@hidmic hidmic changed the title [WIP] Adds example tests for launch descriptions Adds example tests for launch descriptions Jan 18, 2019
launch_testing/examples/test_pub_sub_launch.py (outdated)
pub_sub_launch_description.add_action(
    # TODO(hidmic): allow launching non-installed
    # executables within the current package.
    PyTest(Node(
Member

I don't think Node is the right argument for PyTest (or GTest below). The actual Python script containing tests or the executable containing a gtest main function are not necessarily nodes.

@hidmic hidmic (Contributor Author) Jan 18, 2019

That's exactly the point I want to make. Both the pytest script and the gtest executable could be nodes, containers of composable nodes, plain ROS-agnostic programs, etc. So we'd want to reuse what we have in terms of doing file lookups, argument passing, output handling, etc. If you think about it, all the PyTest and GTest actions have to do is execute whatever is being passed in a single local process (for starters), pass specific command line arguments, and collect whatever output the executable is expected to generate. All assuming that what we passed is what we say it is.

So above, where it says Node, it could say Process (i.e. any non-ROS-aware executable).
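
To make the execute-and-collect idea concrete, a minimal sketch assuming launch's existing ExecuteProcess action and gtest's standard --gtest_output flag (the gtest_process helper is an illustrative name, not this PR's code):

from launch.actions import ExecuteProcess


def gtest_process(path, result_file):
    # Run a gtest binary as a single local process and collect
    # machine-readable results via gtest's --gtest_output flag.
    return ExecuteProcess(
        cmd=[path, '--gtest_output=xml:%s' % result_file],
        output='screen'
    )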

Member

So above, where it says Node, it could say Process (i.e. any non-ROS-aware executable).

Maybe I am missing how this is supposed to work.

A Python script being a node will have a __main__ which gets the node running. On the other hand, a Python file containing unit tests invoked by pytest doesn't. Also, such a test file wouldn't be represented by Process (since it doesn't have a __main__). In fact, a Python test file can't even be invoked; it commonly doesn't have its executable flag set. So e.g. node_executable doesn't apply here, nor can you pass command line arguments of any kind.

Member

Specifically for pytest, I think Dirk might be right. For gtest, I do think it makes sense to have an argument that inherits from the proposed Executable or ExecuteProcess (as Node does).

@hidmic hidmic (Contributor Author) Jan 21, 2019

A Python script being a node will have a __main__ which gets the node running. On the other hand, a Python file containing unit tests invoked by pytest doesn't. Also, such a test file wouldn't be represented by Process (since it doesn't have a __main__).

That's mostly true, though not strictly. The following script can be made executable to run all the tests in it:

#!/usr/bin/env python

import os
import sys

import pytest


def test_stuff():
    assert False


if __name__ == '__main__':
    # Guard the self-invocation so pytest's own collection of this
    # module does not re-run it.
    sys.exit(pytest.main([os.path.realpath(__file__)]))

But, setting aside the unusual setup above, you're right about pytest test files not being executable in general. And we do have a problem with command line arguments, though not an insurmountable one (see the sketch below).
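
For instance, a minimal sketch assuming the self-invoking setup above: the __main__ guard could forward the script's own command line arguments through to pytest:

if __name__ == '__main__':
    # Forward any extra CLI arguments (e.g. -k EXPR) through to pytest.
    sys.exit(pytest.main([os.path.realpath(__file__)] + sys.argv[1:]))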


The initial idea was to describe and parameterize different aspects of the same executable through decoration. But pytest breaks the pattern in a way that I believe would require quite a bit of API refactoring in other parts of the launch system to cope with.

So, as I see it, this leaves us with a few options (that I'm going to sketch for both gtest and pytest, as I'd rather have a consistent API):

  1. We derive a test version of each executable kind we have and think of testing frameworks as testing protocols (e.g. how to execute, how to collect results, etc.):

NodeTest(
    path='path/to/test.py', protocol='pytest',
    timeout=20, remapping=[('chatter', 'my_chatter')]
),
ComposableNodeTest(
    container_executable='intra_proc_comm_test_container',
    composable_node_descriptions=[
        ComposableNode(
            package_name='composition',
            node_plugin_name='talker_component',
            name='my_talker_component'
        ),
        ComposableNode(
            package_name='composition',
            node_plugin_name='listener_component',
            name='my_listener_component'
        ),
    ],
    protocol='gtest', timeout=10
)
  2. We outright ban test parameterization at the launch level:

PyTest(path='path/to/test.py', timeout=10),
GTest(path='path/to/gtest', timeout=10)

(1) is somewhat complex, but it enables full launch API reuse. It's an alternative to specializing (i.e. inheriting from) each executable kind, with less code repetition. (2) is easy to use and to implement, though a bit limiting in my opinion.

I guess the question is whether we want to provide all the tools to parameterize tests, or if we just want to cover the basic use cases and leave users to come up with custom ways of doing so.
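
As a point of reference for (2), a user needing more control could always drop down to launch's existing ExecuteProcess directly; a minimal sketch, with 'path/to/gtest' as a placeholder path:

from launch import LaunchDescription
from launch.actions import ExecuteProcess

ld = LaunchDescription([
    # Parameterize the run by hand, e.g. with gtest's standard
    # --gtest_filter flag.
    ExecuteProcess(
        cmd=['path/to/gtest', '--gtest_filter=PubSub.*'],
        output='screen'
    )
])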

Member

At the moment I am not aware of any use case where we need ComposableNodeTest. In the future there could certainly be test using that approach. For now I would rather focus on the cases which we do have a use case for.

Contributor Author

Alright, went with the simple one.

    variables.EventEmitted(
        event_type=ProcessStarted
    )
], timeout=[30, 10])
Member

This is interesting, and I like it, but the generic name really needs a comment (as you've put here); otherwise I wouldn't have an idea of what it's supposed to be doing. Not really a criticism, just an observation.

Also, would it be valuable to ensure that the events came from a specific process (or node, in the case of the lifecycle change), like in the case of a multi-process test?

In which case, would it make sense to couple this assertion to the thing being tested? Either with the Node/ExecuteProcess being an argument to this action, or with this being somehow a "peer" argument to another Test class, e.g. Test(Node(), [<condition 1>, ...]), where this AssertSequenceOnce would be one of the conditions to test, related to this node. But then again, there might be "global" cases to test, or test cases which apply across multiple nodes, for example. Not sure there's anything to do, just thinking out loud.

Contributor Author

This is interesting, and I like it, but the generic name really needs a comment

Sure. I'm open to any naming changes. How would you briefly state that you're asserting that N conditions occur in the right sequence/order?

Also, would it be valuable to ensure that the events came from a specific process?

For sure. Don't take these examples as a completely exhaustive demonstration of all the (future) API features. In this case, we could as easily add some matching to only check for the source (i.e. the executable) of interest.

In which case, would it make sense to couple this assertion to the thing being tested?

I did play with this idea. It wasn't clear to me how to deal with multiple assertions of different kinds, potentially targeting multiple actions. That's why I dropped it in favor of using the target action instances directly in the predicates being asserted. We could have something similar to what you describe, though, by deferring variable binding. E.g. instead of:

Assert(len(predicates.regex_match(
    variables.Output(target), pattern='^Oops!.*'
)) > 10, timeout=10)

we could write:

Test(target=Node(), description=[
    Assert(len(predicates.regex_match(
        variables.Output(), pattern='Oops!.*'
    )) > 10, timeout=10)
])

It may get somewhat confusing, though, if you start mixing both in the same launch file. The former is the most general one, so I don't think we can replace it with the latter.

Member

Sure. I'm open to any naming changes. How would you briefly state that you're asserting that N conditions occur in the right sequence/order?

I don't have an alternative in mind, but I'll keep thinking on it.

The rest makes sense to me, you can realize those cases when/if we need them.
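
For what it's worth, a toy, framework-free illustration of the "N conditions occur in the given sequence" semantics under discussion (assert_in_order is an illustrative name, not the proposed API):

def assert_in_order(observed, conditions):
    # Each condition must hold for some item, in order; gaps are allowed.
    it = iter(observed)
    for i, condition in enumerate(conditions):
        if not any(condition(item) for item in it):
            raise AssertionError('condition #%d never held in order' % i)


# Usage: 'started' must precede 'exited' in the observed event stream.
events = ['started', 'output', 'exited']
assert_in_order(events, [lambda e: e == 'started', lambda e: e == 'exited'])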

@wjwwood wjwwood (Member) commented Jan 18, 2019

lgtm, other than a few comments.

@hidmic hidmic self-assigned this Jan 23, 2019
Signed-off-by: Michel Hidalgo <michel@ekumenlabs.com>
@hidmic hidmic added in review Waiting for review (Kanban column) and removed in progress Actively being worked on (Kanban column) labels Jan 25, 2019
@hidmic hidmic (Contributor Author) commented Jan 25, 2019

The future of this sample API depends on how we go about the other alternatives that are being proposed concurrently.

@hidmic hidmic (Contributor Author) commented Mar 1, 2019

Alright, it seems we won't be pursuing this route, so I'll close this for now. We can resurrect it if necessary.

@hidmic hidmic closed this Mar 1, 2019
@hidmic hidmic removed the in review Waiting for review (Kanban column) label Mar 1, 2019
@wjwwood wjwwood deleted the hidmic/launch-testing-examples branch April 22, 2021 18:05