apex_rostest's Introduction

apex_launchtest

This tool is a framework for ROS 2 integration testing using a ros2-style launch description. It works similarly to rostest, but makes it easier to inspect the processes under test. For example:

  • The exit codes of all processes are available to the tests. Tests can check that all processes shut down normally, or with specific exit codes. Tests can fail when a process dies unexpectedly
  • The stdout and stderr of all processes are available to the tests.
  • The command lines used to launch the processes are available to the tests.
  • Some tests run concurrently with the launch and can interact with the running processes.

Compatibility

Designed to work with ROS 2 Crystal.

Quick start example

Start with the apex_launchtest example good_proc.test.py. Run the example by doing

apex_launchtest apex_launchtest/examples/good_proc.test.py

apex_launchtest will launch the nodes found in the generate_test_description function, run the tests from the TestGoodProcess class, shut down the launched nodes, and then run the tests from the TestNodeOutput class.

The Launch Description

def generate_test_description(ready_fn):

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],
        ),

        # Start tests right away - no need to wait for anything in this example
        launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
    ])

The generate_test_description function should return a launch.LaunchDescription object that launches the system to be tested. It should also call the ready_fn that is passed in to signal when the tests should start. In the good_proc.test.py example, there is no need to delay the start of the tests, so ready_fn is called concurrently with the launching of the process under test.
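
If the system needs time to come up before testing begins, the call to ready_fn can be deferred. Below is a minimal sketch using launch's TimerAction; the fixed two-second delay is arbitrary, and a real test might instead wait for specific output (see the StdoutReadyListener event handler mentioned later on this page).

import launch
import launch.actions

def generate_test_description(ready_fn):

    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=[path_to_process],  # path_to_process as in the example above
        ),

        # Start the tests after an arbitrary two-second delay instead of
        # immediately. TimerAction fires its actions after 'period' seconds.
        launch.actions.TimerAction(
            period=2.0,
            actions=[
                launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
            ],
        ),
    ])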

Active Tests

Any classes that inherit from unittest.TestCase and are not decorated with the post_shutdown_test descriptor will be run concurrently with the processes under test. These tests are expected to interact with the running processes in some way.
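
For example, a minimal sketch of an active test, assuming the process under test prints 'Loop' to stdout while it runs (proc_output is one of the member fields described under "Exit Codes and Standard Out" below):

import unittest

class TestGoodProcess(unittest.TestCase):

    def test_proc_produces_output(self):
        # proc_output is injected by the framework; wait up to 10 seconds
        # for the expected text to show up in the process's stdout
        self.proc_output.assertWaitFor('Loop', timeout=10)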

Post-Shutdown Tests

Any classes that inherit from unittest.TestCase that are decorated with the post_shutdown_test descriptor will be run after the launched processes have been shut down. These tests have access to the exit codes and the stdout of all of the launched processes, as well as any data created as a side effect of running the processes.
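
For example, a minimal sketch of a post-shutdown test, assuming the post_shutdown_test decorator is applied as a call (proc_info and assertExitCodes are described below):

import unittest

import apex_launchtest
import apex_launchtest.asserts

@apex_launchtest.post_shutdown_test()
class TestNodeOutput(unittest.TestCase):

    def test_exit_codes(self):
        # proc_info is injected by the framework; assert that every launched
        # process exited with EXIT_OK (0), the default allowable exit code
        apex_launchtest.asserts.assertExitCodes(self.proc_info)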

Exit Codes and Standard Out

The apex_launchtest framework automatically adds some member fields, proc_info and proc_output, to each test case so that the tests can access process output and exit codes.

These objects provide dictionary-like access to information about the running processes. They also contain methods that the active tests can use to wait for a process to exit or for specific output to appear.

Assertions

The apex_launchtest framework automatically records all stdout from the launched processes, as well as the exit codes of any processes that are launched. This information is made available to the tests via the proc_info and proc_output objects. These objects can be used with one of several assert methods to check the output or exit codes of the processes:

apex_launchtest.asserts.assertInStdout(proc_output, msg, proc, cmd_args=None, *, strict_proc_matching=True)

Asserts that a message 'msg' is found in the stdout of a particular process.

  • msg: The text to look for in the process standard out
  • proc: Either the process name as a string, or a launch.actions.ExecuteProcess object that was used to start the process. Pass None or an empty string to search all processes
  • cmd_args: When looking up processes by name, cmd_args can be used to disambiguate multiple processes with the same name
  • strict_proc_matching: When looking up a process by name, strict_proc_matching=True will make it an error to match multiple processes. This prevents an assert from accidentally passing if the output came from a different process than the one the user was expecting
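
For example, a sketch from inside a post-shutdown test; the process name 'good_proc' and the expected text are hypothetical:

apex_launchtest.asserts.assertInStdout(
    self.proc_output,
    'Loop 1',      # text expected somewhere in the process's stdout
    'good_proc',   # hypothetical name of the process to search
)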

apex_launchtest.asserts.assertExitCodes(proc_info, allowable_exit_codes=[EXIT_OK], proc=None, cmd_args=None, *, strict_proc_matching=True)

Asserts that the specified processes exited with a particular exit code.

  • allowable_exit_codes: A list of allowable exit codes. By default EXIT_OK (0). Other exit codes provided are EXIT_SIGINT (130), EXIT_SIGQUIT (131), EXIT_SIGKILL (137) and EXIT_SIGSEGV (139)
  • The proc, cmd_args, and strict_proc_matching arguments behave the same way as assertInStdout. By default, assert on the exit codes of all processes
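
For example, to accept either a clean exit or a SIGINT-triggered exit from any process (assuming the EXIT_* constants are importable from apex_launchtest.asserts):

from apex_launchtest.asserts import EXIT_OK, EXIT_SIGINT, assertExitCodes

assertExitCodes(
    self.proc_info,
    allowable_exit_codes=[EXIT_OK, EXIT_SIGINT],  # 0 or 130
)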

apex_launchtest.asserts.assertSequentialStdout(proc_output, proc, cmd_args=None)

Asserts that standard out was seen in a particular order.

  • Returns a context manager that will check that a series of assertions happen in order
  • The proc and cmd_args arguments are the same as in assertInStdout and assertExitCodes; however, it is not possible to match multiple processes because there is no way to determine the order of stdout coming from multiple processes. Example:
with assertSequentialStdout(self.proc_output, "proc_name") as cm:
    cm.assertInStdout("Loop 1")
    cm.assertInStdout("Loop 2")
    cm.assertInStdout("Loop 3")

Waiting for Output or Exit Codes

The active tests can also call methods that wait for particular output, or for a particular process to exit or time out. These asserts are methods on the proc_output and proc_info objects.

proc_output.assertWaitFor(msg, proc=None, cmd_args=None, *, strict_proc_matching=True, timeout=10)

  • The msg, proc, cmd_args, and strict_proc_matching arguments work the same as the other assert methods. By default, this method waits on output from any process
  • timeout: The amount of time to wait before raising an AssertionError
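
For example, an active test might block until the system announces readiness; the message text here is hypothetical:

# Wait up to 10 seconds for any process to print 'Ready' to stdout
self.proc_output.assertWaitFor('Ready', timeout=10)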

proc_info.assertWaitForShutdown(proc, cmd_args=None, *, timeout=10)

  • The proc and cmd_args arguments work the same as in the other assertions, but it is not possible to wait on multiple processes to shut down
  • timeout: The amount of time to wait before raising an AssertionError
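
For example, to wait for a single, hypothetically named process to exit on its own:

# Raises an AssertionError if 'terminating_proc' is still running
# after 10 seconds
self.proc_info.assertWaitForShutdown('terminating_proc', timeout=10)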

Arguments

apex_launchtest uses the same syntax as ros2 launch to pass arguments to tests.

Arguments are declared in the launch description and can be accessed by the tests via a test_args dictionary that's injected into the tests, similar to proc_info and proc_output.

See the apex_launchtest example with arguments:

>apex_launchtest --show-args examples/args.test.py
>apex_launchtest examples/args.test.py dut_arg:=value
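
Below is a minimal sketch of how the pieces fit together. The argument name dut_arg matches the command line above; the sketch assumes test_args is injected as a member field, like proc_info and proc_output.

import unittest

import launch
import launch.actions

def generate_test_description(ready_fn):

    return launch.LaunchDescription([
        launch.actions.DeclareLaunchArgument(
            'dut_arg',
            default_value='default',
            description='An argument forwarded to the tests',
        ),

        launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
    ])

class TestArgs(unittest.TestCase):

    def test_arg_was_passed(self):
        # test_args behaves like a dictionary keyed on the argument name
        self.assertIn('dut_arg', self.test_args)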

Using CMake

To run apex_launchtest from a CMakeLists file, you'll need to declare a dependency on apex_launchtest_cmake in your package.xml. Then, in the CMakeLists file, add

find_package(apex_launchtest_cmake)
add_apex_launchtest(test/name_of_test.test.py)

Arguments can be passed to the tests via the CMake function, too:

add_apex_launchtest(
  test/test_with_args.test.py
  ARGS "arg1:=foo"
)

apex_rostest's Issues

Give special treatment to test processes

Taken from here: #7

@hidmic said
It'd be nice for exit code assertions to be implicit in certain cases. Having to assert that e.g. a GTest action exits with a non-zero code is a bit redundant.

Description

If launch adds test actions, or if we add test actions like "run pytest" or "run gtest" as described here: ros2/launch#178, we could have apex_launchtest automatically check that these processes exit with code 0. These pytest or gtest launch actions would run in another process, could generate junit XML, and would affect the final exit code.

Desired special treatment

  • Implicitly check the exit code of test processes
  • Gather up the JUNIT XML from test processes if we ourselves are run with the --junit-xml argument.
  • Notice if a test process fails to generate JUNIT XML

Some open questions about how this works:

If one of these test actions fails and apex_launchtest exits with an exit code, how does it indicate what exactly failed? Right now the tests from the test.py file get their results printed to the console and junit XML is generated. Would we combine the launch action results with the test.py results?

How do these test actions get their XML combined together? Do they need to be combined together? Who checks that they successfully generated XML? Right now ament_run_test checks that apex_launchtest generated XML because it knows the name of the XML to expect. Would apex_launchtest need to check that the test actions successfully generated XML?

Language: we should probably come up with a specific name for the tests found in the name_of_test.test.py file to distinguish them from the tests contained in launch actions, so we don't get confused in discussions.

Parametrize the test description

Do something like pytest's @pytest.mark.parametrize for the launch description so we can easily run tests like this one: https://github.com/ros2/system_tests/blob/a5d2c6daffe1ffd9baf39186f3b442a38fcffb35/test_communication/test/test_publisher_subscriber.py.in

Edge cases to check (WIP)

  • ctrl+c during a parametrized test run should exit all runs. Note: this was done by early-outing any time the processes under test die before the tests finish. Maybe not optimal
  • Mixing launch descriptions that require command-line arguments with ones that don't
  • give a good error message if the argument in generate_test_description doesn't match the argname in the decorator
  • give a good error message if parametrize has too many different args
  • give a good error message if parametrize has too few args

Set missing attributes in junit XML

I have a test reporting tool that looks at the output of colcon test-result --all, and it's choking on the junit XML of a skipped apex_launchtest test.

pytest and GTest set both the testcase 'name' attribute and the testcase 'classname' attribute, but apex_launchtest only sets 'name'.

We're also adding a 'skipped' element to the test case, but not setting the 'message' field

proc_lookup can have an exception while trying to raise an exception

"No data recorded for proc {}".format(_proc_to_name_and_args(process))

If the process hasn't been launched yet, we're not going to be able to look up the process name. You'll get an exception that looks like this:

  File "...path/install/lib/python3.5/site-packages/apex_launchtest/proc_info_handler.py", line 126, in assertWaitForShutdown 
    timeout=timeout                                                                                                                                             
  File "/usr/lib/python3.5/threading.py", line 319, in wait_for                                                                                                 
    result = predicate()                                                                                                                                        
  File "...path/install/lib/python3.5/site-packages/apex_launchtest/proc_info_handler.py", line 117, in proc_is_shutdown      
    strict_proc_matching=True                                                                                                                                   
  File "...path/install/lib/python3.5/site-packages/apex_launchtest/util/proc_lookup.py", line 85, in resolveProcesses        
    "No data recorded for proc {}".format(_proc_to_name_and_args(process))                                                                                      
  File "...path/install/lib/python3.5/site-packages/apex_launchtest/util/proc_lookup.py", line 27, in _proc_to_name_and_args  
    proc.process_details['name'],                                                                                                                               
TypeError: 'NoneType' object is not subscriptable                                                                                                               

Tests as launch actions

Feature request

Background

Currently, tests are run on a different thread than the one the fixture's LaunchService runs on. As a consequence, inspection and/or mutation of any launch object (e.g. LaunchDescriptionEntity instances, the current LaunchContext, etc.) must be guaranteed to be thread-safe. Additionally, since tests have no entity of their own in the launch fixture, some devices have to be put in place to keep the fixture alive.

Description

Run the so-called pre-shutdown tests, or a group thereof, as an action within launch itself (i.e. its event loop). Tests can then interact safely with their launch fixture (e.g. actions, events, event handlers, context). Both synchronous and asynchronous tests should be supported, in case a certain condition or operation has to be waited on.

So-called post-shutdown tests require special treatment. Some of them may be run within a Shutdown event handler, but some may not. For the latter, either the testing framework has to take care of running them after the launch service has run, or launch has to support an AfterShutdown event. In any case, this has significant implications and is directly connected to #9.

BUG junit XML doesn't play nice with _fail_result

If the launch fails to start up for some reason, apex_launchtest uses a special fail result to unconditionally fail the result of the test. Unfortunately, this doesn't play nice with the --junit-xml option.

In junitxml.py line 55:
     test_suite.set('time', str(round(sum(test_result.testTimes.values()), 3)))
 AttributeError: '_fail_result' object has no attribute 'testTimes'            
 usage: apex_rostest [-h] [-v] [-s] [--junit-xml XMLPATH]                      
                     test_file [launch_arguments [launch_arguments ...]]       
 apex_rostest: error: '_fail_result' object has no attribute 'testTimes'       

The result is a somewhat cryptic error message instead of "This test failed because of X"

Todo:

  • Write a test that exposes this issue
  • Fix the issue. Either the XML serializer can't depend on there being a testTimes attribute, or our _fail_result object needs to have this attribute

Expand test scope to the launch fixture

Feature request

Background

Currently, tests perform assertions against launched processes only, and only indirectly, through process info and output handlers that use process names as keys. This is useful if the launch fixture is being loaded from another file, though error-prone. Moreover, launch does not support unique name-based referencing.

Description

Put actions and the launch context in scope for tests so they can perform assertions with them. Process info, output, and exit codes can be retrieved from them directly (through helper functions or custom assertions to minimize the coupling with launch).

Implementation considerations

It's much more straightforward to implement if #19 is available.

Need some sort of test isolation

A potential issue I'm starting to encounter: using the add_apex_launchtest CMake macro may cause multiple tests to run in parallel when you run colcon test. If these tests publish data on the same topics (spoiler alert: ours do!) they can conflict with one another.

We could write our generate_test_description function to put all of the nodes in a unique namespace or carefully remap topics, but I'm wondering if there's something apex_launchtest can do for isolation.

@wjwwood @hidmic Can we do something with the launch description after it's generated, or maybe something with the launch service when we run it to get some sort of isolation? Can we jam everything into a unique namespace and have that be transparent to the nodes being run? Is there some other mechanism of isolation I'm not considering? Doing it at the DDS level is pretty DDS implementation specific, I think.

[QUESTION] Naming and location

As this testing framework matures and attains widespread use throughout ROS 2 packages, a couple questions come up:

  • Should it have a more generic name?
  • Where should it live?

Nightly build started to fail

I have a scheduled CI job that runs against the 'nightly' docker image. In this image, I can no longer run the talker_listener.test.py example.

Steps to reproduce (assuming the apex_rostest repo is cloned to ~/apex_rostest):

> docker pull osrf/ros2:nightly

> docker run --rm -it -v ~/apex_rostest:/apex_rostest osrf/ros2:nightly

root@79aa092a2364:/# cd /apex_rostest/

root@79aa092a2364:/apex_rostest# colcon build
Starting >>> apex_launchtest
Finished <<< apex_launchtest [0.64s]          
Starting >>> apex_launchtest_cmake
Starting >>> apex_launchtest_ros
Finished <<< apex_launchtest_cmake [0.57s]                                                           
Finished <<< apex_launchtest_ros [0.59s]

Summary: 3 packages finished [1.35s]

root@79aa092a2364:/apex_rostest# source install/setup.bash 
[connext_cmake_module] Warning: The location at which Connext was found when the workspace was built [[/opt/rti.com/rti_connext_dds-5.3.1]] does not point to a valid directory, and the NDDSHOME environment variable has not been set. Support for Connext will not be available.

root@79aa092a2364:/apex_rostest# apex_launchtest apex_launchtest_ros/examples/talker_listener.test.py

You should see the example test test_talker_transmits fail with the following stack trace:

======================================================================
FAIL: test_talker_transmits (talker_listener.test.py.TestTalkerListenerLink)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/apex_rostest/apex_launchtest_ros/examples/talker_listener.test.py", line 126, in test_talker_transmits
    self.spin_rclpy(1.0)
  File "/apex_rostest/apex_launchtest_ros/examples/talker_listener.test.py", line 109, in spin_rclpy
    executor.spin_once(timeout_sec=timeout_sec)
  File "/opt/ros/crystal/lib/python3.6/site-packages/rclpy/executors.py", line 631, in spin_once
    handler, entity, node = self.wait_for_ready_callbacks(timeout_sec=timeout_sec)
  File "/opt/ros/crystal/lib/python3.6/site-packages/rclpy/executors.py", line 617, in wait_for_ready_callbacks
    return next(self._cb_iter)
  File "/opt/ros/crystal/lib/python3.6/site-packages/rclpy/executors.py", line 558, in _wait_for_ready_callbacks
    if sub.callback_group.can_execute(sub):
  File "/opt/ros/crystal/lib/python3.6/site-packages/rclpy/callback_groups.py", line 103, in can_execute
    assert weakref.ref(entity) in self.entities
AssertionError

----------------------------------------------------------------------
Ran 3 tests in 5.284s

This test calls rclpy.spin until two messages are received on the 'talker_chatter' topic (remapped from just 'chatter' in the LaunchDescription).

This just started failing in this CI pipeline using this docker image:

Using docker image sha256:14cb0769c64b280b5c609a9bef7f52393d4f306197bb48201399865486f4c6f1 for osrf/ros2:nightly ...

The previous nightly build passed without a problem using this docker image:

Using docker image sha256:4b4e5eeda437d7c64ca908c825e6464bb607a61e44cef80b5b0a1b9ec3a8136d for osrf/ros2:nightly ...

Too many ways to assert on output/exit code

There are a few places in apex_rostest that are concerned with identifying processes in order to check exit codes or match stdout. Each one works a little differently. We should refactor this to have a common mechanism/signature for these asserts.

APIs that should match one-another:

Stdout asserts

  • assertInStdout(proc_output, msg, node, cmd_args, *, strict_node_matching=True)
  • assertSequentialStdout(proc_output, node)
  • ActiveIoHandler.assertWaitFor(msg, timeout)
  • apex_launchtest.event_handlers.StdoutReadyListener(proc_name, ready_txt)

Exit Code assertions

  • assertExitCodes(proc_info, allowable_exit_codes)
  • ActiveProcInfoHandler.assertWaitForShutdown(*, process, timeout)

Discussion Items

  • It might be worth deprecating the use of process names + arguments to identify a process. Maybe all of the assert methods should just use an ExecuteProcess object as the key to identify a process
  • Some asserts are free functions and some are methods. Is there a way to simplify that?

Ability to run tests with a freshly launched system

Right now apex_launchtest works kind of like rostest from ROS 1. The system gets launched, all the tests run in alphabetical order, and then the system gets torn down.

We may want a way to specify that certain tests, or maybe certain test cases (classes), should run against a freshly launched system. This would make the tests more unit-test-like and prevent tests from interfering with one another. The trade-off is performance: it will take extra time to start up and tear down the system.

Some questions to answer about how this should work:

  • How do post-shutdown tests fit into this? Do they need to run every time too?
  • Can we use some sort of decorator on individual tests or test classes that would tell the framework "hey, do another launch when you run these"?

apex_launchtest still has rclpy dependency

We wanted to get the rclpy dependencies out of apex_launchtest, but it's still a transitive dependency due to ros2launch.

root@70699c3fff50:/apex_launchtestt# colcon list --topological-graph --packages-up-to apex_launchtest --packages-above rclpy
rclpy            +**...
launch_ros        +  *.
ros2cli            +**.
ros2pkg             +*.
ros2launch           +*
apex_launchtest       +

We depend on ros2launch here: https://github.com/ApexAI/apex_rostest/blob/master/apex_launchtest/apex_launchtest/apex_launchtest_main.py#L21 to get the print_arguments_of_launch_description function. This function is also the reason we're not compatible with the ros2 release 'crystal'.

Maybe we should go back to a copy/pasted version of this function.

We should add a CI job, maybe scheduled daily or weekly, that builds from source following the instructions here: https://index.ros.org/doc/ros2/Installation/Linux-Development-Setup/ to make sure our dependencies are right, and to check that we're not accidentally including rclpy in apex_launchtest:

colcon list --topological-graph --packages-up-to apex_launchtest
colcon build --packages-up-to apex_launchtest

apex_launchtest_ros examples need some TLC

When I shoved all the ROS stuff into apex_launchtest_ros, I left a bit of a mess. We should fix up this package a little and add at least one decent example of messing around with ROS nodes.

Suggestion

  1. Get rid of the old examples - they're the result of one-off experiments
  2. Add a new example where we launch a talker and a listener
    • To make things more interesting, maybe remap the topics and fuzz the strings that the talker sends
  3. Fix up the MessagePump so it can work with an rclpy context
