
osrf / capabilities

Implements the concept of capabilities as part of the robots-in-concert system.

License: Other

Makefile 0.46% Python 98.76% CMake 0.78%

capabilities's People

Contributors

bit-pirate, cottsay, esteve, jonbinney, nikolausdemmel, patrickcjh, sloretz, tfoote, wjwwood


capabilities's Issues

make run_tests fails

$ make run_tests
Scanning dependencies of target clean_test_results
Built target clean_test_results
Scanning dependencies of target tests
Built target tests
Scanning dependencies of target _run_tests_capabilities_nosetests_test
-- run_tests.py: execute commands
  /usr/bin/cmake -E make_directory /opt/sandbox_workspace/sandbox_capabilities/build/test_results/capabilities
  /usr/bin/nosetests -P --process-timeout=60 --where=/opt/sandbox_workspace/sandbox_capabilities/src/capabilities/test --with-xunit --xunit-file=/opt/sandbox_workspace/sandbox_capabilities/build/test_results/capabilities/nosetests-test.xml
..............E........
======================================================================
ERROR: Test source code for PEP8 conformance
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/opt/sandbox_workspace/sandbox_capabilities/src/capabilities/test/test_code_quality.py", line 7, in test_pep8_conformance
    pep8style = pep8.StyleGuide(max_line_length=120)
AttributeError: 'module' object has no attribute 'StyleGuide'

----------------------------------------------------------------------
Ran 23 tests in 0.206s

FAILED (errors=1)
-- run_tests.py: verify result "/opt/sandbox_workspace/sandbox_capabilities/build/test_results/capabilities/nosetests-test.xml"
Built target _run_tests_capabilities_nosetests_test
Scanning dependencies of target _run_tests_capabilities_nosetests
Built target _run_tests_capabilities_nosetests
Scanning dependencies of target _run_tests_capabilities
Built target _run_tests_capabilities
Scanning dependencies of target run_tests
Built target run_tests

Race condition between capability_server start and calling "start_capability"

If the capability server is launched after another node that calls the "start_capability" service as soon as it becomes available, the capability_server will sometimes store "None" as the PID of the launched capability. You then get an error like:

ERROR: service [/capability_server/stop_capability] responded with an error: error processing request: No running launch file with PID of 'None'

Echoing the "events" topic when this happens shows that the "launched" event happens before the "server_ready" event:

header: 
  seq: 1
  stamp: 
    secs: 1430405899
    nsecs: 543864965
  frame_id: ''
capability: slam/Localization
provider: slam/localization
type: "launched"
pid: 22241

---
header: 
  seq: 2
  stamp: 
    secs: 0
    nsecs: 0
  frame_id: ''
capability: ''
provider: ''
type: "server_ready"
pid: 0

I am able to reproduce this about 25% of the time with the following small node:

#include <ros/ros.h>
#include <capabilities/GetCapabilitySpecs.h>
#include <capabilities/StartCapability.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "capability_test");
  ros::NodeHandle n;

  ros::ServiceClient start_capability =
    n.serviceClient<capabilities::StartCapability>("capability_server/start_capability");

  ros::service::waitForService("capability_server/start_capability");

  capabilities::StartCapability start_srv;
  start_srv.request.capability = "slam/Localization";
  start_capability.call(start_srv);

  return 0;
}

I considered listening for the "server_ready" event before trying to call the service, but that causes issues if the client node starts long after the "server_ready" event happens, because it will have missed the event.
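
For what it's worth, here is a minimal Python sketch of that mitigation, assuming the events topic is /capability_server/events (capabilities/CapabilityEvent) and falling back to a timeout so a late-starting client that missed the event does not block forever; the StartCapability request field names are assumptions.

#!/usr/bin/env python
import rospy
from capabilities.msg import CapabilityEvent
from capabilities.srv import StartCapability

def wait_for_server_ready(timeout):
    ready = {'seen': False}

    def callback(msg):
        if msg.type == 'server_ready':  # event type string as seen on the events topic
            ready['seen'] = True

    sub = rospy.Subscriber('/capability_server/events', CapabilityEvent, callback)
    deadline = rospy.Time.now() + rospy.Duration(timeout)
    while not rospy.is_shutdown() and not ready['seen'] and rospy.Time.now() < deadline:
        rospy.sleep(0.1)
    sub.unregister()
    return ready['seen']

if __name__ == '__main__':
    rospy.init_node('capability_test')
    rospy.wait_for_service('/capability_server/start_capability')
    # The service being up does not imply the server is ready (the race above),
    # so also wait for a 'server_ready' event, giving up after a few seconds in
    # case the event was published before this node subscribed.
    wait_for_server_ready(timeout=5.0)
    start = rospy.ServiceProxy('/capability_server/start_capability', StartCapability)
    # Request field names ('capability', 'preferred_provider') are assumed.
    start(capability='slam/Localization', preferred_provider='')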

App Manager Integration

Proposal for how workflow between capability server and rapp manager might happen.

Rapp Manager Initialisation

  • Query the capability server

Query the capability server to get information about the capabilities it provides. If we're assuming that capabilities are static as part of a bootstrap constant, then the rapp manager probably only needs to do this at startup and cache/save the information.

  • Prune list of both local and installable apps

Check against capabilities and prune apps which can't be run because they lack the required capabilities. Again, if we're assuming static capabilities, this can be done at startup to avoid redoing it every time list_apps is called (a minimal sketch of this startup flow follows below).
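
A minimal sketch of that startup flow, assuming the server exposes a get_interfaces service (capabilities/GetInterfaces returning a list of interface names) and representing rapps as plain dicts with a hypothetical required_capabilities field:

import rospy
from capabilities.srv import GetInterfaces

def available_interfaces():
    # Query once at startup and cache, assuming the set of capabilities is static.
    rospy.wait_for_service('/capability_server/get_interfaces')
    get_interfaces = rospy.ServiceProxy('/capability_server/get_interfaces', GetInterfaces)
    return set(get_interfaces().interfaces)

def prune_rapps(rapps, interfaces):
    # Keep only rapps whose required capabilities are all available.
    # 'required_capabilities' is a hypothetical field used for illustration.
    return [r for r in rapps if set(r['required_capabilities']) <= interfaces]

if __name__ == '__main__':
    rospy.init_node('rapp_manager_example')
    cached = available_interfaces()
    runnable = prune_rapps(
        [{'name': 'rocon_apps/teleop',
          'required_capabilities': ['std_capabilities/DifferentialMobileBase']}],
        cached)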

Start Rapp

  • Check the rapp is in the pruned list when /start_app is called [1] [2]
  • Rapp manager reads the rapp definition for preferred, required, optional capabilities [1] [2]

I am envisioning three kinds of capability specifications, as listed above. The preferred kind should always come with a fallback listed. I think this would get used predominantly with semantic capabilities. One or two simple example rapps might be useful in the rocon_apps package (it's probably better thought of as rocon_tutorial_apps).

  • Rapp manager makes requests to start capabilities on the capability server
  • Rapp manager starts the app with capability server remapping [1]

Some information should get to the rapp somehow. Remapping the capability server for the rapp, so that it can make its own queries and decisions, would be a good minimum. I'm not sure we can intelligently do any remapping for the rapp, since the rapp doesn't provide its own private interface specification, so let it work things out for itself.

  • The stop-app process should work in reverse in all the expected ways

Variations

The rapp manager only does single-rapp management right now. With capabilities, capability counts, locks or sharing privileges, we could feasibly run multiple apps, which would be great.

Demonstrables

We're integrating things next week, so we should focus on what we might want to show. Options:

  • Portability: same app on two robots utilising the same capability via different capability providers.
  • Tuning your app: using semantics to choose capability more intelligently.
  • Simplified Bringup/Teardown: a graphical tool which helps bring up capabilities in a modular way and set application-meaningful parameters (the rest are of course hidden in the capability provider).

The first one I could integrate here on two robots - navigation would be the best example. The second one would probably take more work to make it meaningful. The last would be quite useful for robotics researchers and developers in general I think. It would also mean we could delay integration into the app manager until after roscon.

Any other ideas? I'll have a couple of days next week to work on integration for the demo, which is right after roscon.

GetNodeletManagerName does not return real name

When renaming the nodelet manager as below (using the parameter doesn't work, see #56):

    <node pkg="nodelet" type="nodelet"
          name="new_name"
          output="screen" args="manager" />

The service still returns capability_server_nodelet_manager. This might have the same root cause as #56.

Add node name to log output

Then one can easily identify where a log message is coming from (without the need of rqt_console).

Here for example the app manager:

[INFO] [WallTime: 1381746358.003231] App Manager : loading app 'turtlebot_core_apps/auto_docking'
[INFO] [WallTime: 1381746358.360199] Servicing request for capability specs...

Second message is from the capability server.

Instead of prepending a (fixed) name, we could also append the real node name, e.g. capability_server.
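
A minimal sketch of that suggestion, using rospy.get_name() to include the resolved node name in every message:

import rospy

def loginfo(msg):
    # rospy.get_name() returns the resolved node name, e.g. /capability_server,
    # so the origin of the message is obvious even without rqt_console.
    rospy.loginfo("%s: %s", rospy.get_name(), msg)

# loginfo("Servicing request for capability specs...") would then print
# "/capability_server: Servicing request for capability specs..."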

Nodelet manager name parameter has no effect

The arg value set with <arg name="capability_server_nodelet_manager_name" default="test_name" /> is always overwritten. Changing default to value leads to this issue:

Invalid <arg> tag: cannot override arg 'capability_server_nodelet_manager_name', which has already been set. 

Arg xml is <arg name="capability_server_nodelet_manager_name" value="test_name"/>

This is related to #52. Also, this error still exists after applying PR #55.

Capabilities to automatically start when prerequisites met?

An example use case is a platform which has separate indoor and outdoor pose estimators: the indoor one starts automatically by default, but once the GPS fix topic is alive, the user has the option to shut down the indoor one and replace it with the outdoor one.

A further possibility would be letting the user permanently change the default, perhaps by blacklisting the indoor estimator (a file in etc?) or some other scheme less drastic than having to uninstall the providing package.
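
A minimal sketch of roughly what such a prerequisite-triggered switch could look like as a standalone node today, with hypothetical capability names and GPS topic; the service and field names follow the capabilities package but should be treated as assumptions:

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import NavSatFix
from capabilities.srv import StartCapability, StopCapability

if __name__ == '__main__':
    rospy.init_node('pose_estimator_switcher')
    # Prerequisite: block until the GPS fix topic is alive.
    rospy.wait_for_message('gps/fix', NavSatFix)
    rospy.wait_for_service('/capability_server/start_capability')
    stop = rospy.ServiceProxy('/capability_server/stop_capability', StopCapability)
    start = rospy.ServiceProxy('/capability_server/start_capability', StartCapability)
    # Swap the indoor estimator for the outdoor one (capability names are
    # hypothetical placeholders).
    stop(capability='my_robot/IndoorPoseEstimator')
    start(capability='my_robot/OutdoorPoseEstimator', preferred_provider='')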

CapabilityServer.__catch_and_log makes it hard to catch exceptions

While improving the error handling in the rapp manager for issue #68, it took me a while to get the exception handling right. Only when I realised that CapabilityServer.__catch_and_log() redefines the RuntimeError exception raised by CapabilityServer.__free_capability() did I understand why my RuntimeError catching didn't work.

I'm under the assumption that doing a catch-all via except Exception is bad practice. So, I'm wondering if __catch_and_log should be changed.
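
A toy illustration of the problem (not the actual server code): a catch-all wrapper like this swallows the RuntimeError, so callers can no longer catch it.

import functools

def catch_and_log(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:  # catch-all: the original exception type is lost to callers
            print("error processing request: %s" % exc)
    return wrapper

@catch_and_log
def free_capability(name):
    raise RuntimeError("Cannot free Capability '%s', because it is not running" % name)

# free_capability('std_capabilities/RGBDSensor') only logs; a caller's
# "except RuntimeError" never fires. Catching only the expected exception
# types (or re-raising after logging) would keep that possible.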

Use the absolute nodelet manager name for starting caps

The ~/get_nodelet_manager_name service already returns the absolute name. Internally, however, only the nodelet manager name itself is stored and used. This approach fails when nodelets are loaded from within a namespace.

Suggestion:

Move the logic for retrieving the absolute name out of the service callback (https://github.com/osrf/capabilities/blob/master/src/capabilities/server.py#L915-L917) and into the LaunchManager class, which stores the name (https://github.com/osrf/capabilities/blob/master/src/capabilities/launch_manager.py#L100).
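
A rough sketch of that suggestion, resolving and storing the absolute name in the class that owns it; rosgraph.names.ns_join and rospy.get_namespace() are assumed as the resolution mechanism, and the class below is only an illustrative stand-in for the real one.

import rospy
from rosgraph import names

class LaunchManager(object):  # illustrative stand-in, not the real class
    def __init__(self, nodelet_manager_name='capability_server_nodelet_manager'):
        # Store the fully resolved name once, so nodelets load correctly even
        # when the capability server runs inside a namespace.
        self.nodelet_manager_name = names.ns_join(rospy.get_namespace(),
                                                  nodelet_manager_name)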

Cannot run deb installed capability server

When attempting this, I get the following errors:

With roslaunch:

$ roslaunch robosem_meta app_platform.launch --screen

[...]

NODES
  /
    app_manager (rocon_app_manager/rapp_manager.py)
    capability_server (capabilities/capability_server)
    master (rocon_master_info/master.py)

[...]

ERROR: cannot launch node of type [capabilities/capability_server]: can't locate node [capability_server] in package [capabilities]

With rosrun:

$ rosrun capabilities capability_server 
[rosrun] Couldn't find executable named capability_server below /opt/ros/hydro/share/capabilities

There is no problem when running this from the source install.

blacklist semantics interfaces by interface

Hi,

Currently, if an interface is in the blacklist, the derived semantic_interfaces are not blacklisted by default. Is this intentional?

If yes, then rqt_capabilities crashes if the interface is blacklisted and its semantic interfaces aren't.

Namespacing issue with nodelet manager

When starting the capability server in a namespace, I get this error:

Invalid <node> tag: node name cannot contain a namespace. 

Node xml is <node args="manager" name="$(arg capability_server_nodelet_manager_name)" output="screen" pkg="nodelet" type="nodelet"/>
[INFO] [WallTime: 1395035918.374652] Capability Server Ready
[ERROR] [WallTime: 1395035918.472600] Capability server's nodelet manager terminated unexpectedly.

It looks to me like this has something to do with the arg parsing within the capability server. If in https://github.com/osrf/capabilities/blob/master/src/capabilities/capability_server_nodelet_manager.launch

I change

name="$(arg capability_server_nodelet_manager_name)"

to

name="capability_server_nodelet_manager"

the error disappears.

Discussion: How to prevent erroneous free'ing of capabilities

This is copied from #70 (comment):


Ideally, the client should not get into the situation where it asks to stop/free an already stopped/freed cap.

In the current case the rapp rocon_apps/robot_teleop depends on both the RGBDSensor and the DifferentialMobileBase cap. However, down the dependency tree of the RGBDSensor cap there is also the dependency on the DifferentialMobileBase cap. Stopping the DifferentialMobileBase cap would now also stop all caps depending on it. And here there might be an error.

The app manager does not stop the capabilities, but frees them. So when the rapp is stopped, it first frees the DifferentialMobileBase cap.

[INFO] [WallTime: 1403701919.301008] Rapp Manager : stopped rapp [rocon_apps/teleop]'.
[INFO] [WallTime: 1403701919.301422] Rapp Manager : Stopping required capabilities.
[INFO] [WallTime: 1403701919.306375] Request to free usage of capability 'std_capabilities/DifferentialMobileBase' (bond id 'eb8e9780-fc63-11e3-a9ca-f46d04929542')

And the cap server responds:

[INFO] [WallTime: 1403701919.306902] Capability 'std_capabilities/DifferentialMobileBase' being stopped because it has zero references

However, at this point the RGBDSensor cap should still have an indirect dependency on it.

[INFO] [WallTime: 1403701919.307257] Capability 'turtlebot_capabilities/robot_state_publisher' being stopped because its dependency 'std_capabilities/DifferentialMobileBase' is being stopped.
[INFO] [WallTime: 1403701919.307622] Capability 'turtlebot_capabilities/turtlebot2_bringup' being stopped because its dependency 'std_capabilities/RobotStatePublisher' is being stopped.
[INFO] [WallTime: 1403701919.307938] Capability 'turtlebot_capabilities/rgbd_sensor' being stopped because its dependency 'turtlebot_capabilities/TurtleBotBringup' is being stopped.
[INFO] [WallTime: 1403701921.447684] Capability Provider 'turtlebot_capabilities/rgbd_sensor' for Capability 'std_capabilities/RGBDSensor' has terminated.
[INFO] [WallTime: 1403701922.014306] Capability Provider 'turtlebot_capabilities/turtlebot2_bringup' for Capability 'turtlebot_capabilities/TurtleBotBringup' has terminated.
[INFO] [WallTime: 1403701922.014992] Stopping the 'turtlebot_capabilities/diagnostics' provider of the 'std_capabilities/Diagnostics' interface, because it has no dependents left.
[INFO] [WallTime: 1403701922.015352] Rapp Manager : Stopped required capability 'std_capabilities/DifferentialMobileBase'.
[INFO] [WallTime: 1403701922.019269] Request to free usage of capability 'std_capabilities/RGBDSensor' (bond id 'eb8e9780-fc63-11e3-a9ca-f46d04929542')
[WARN] [WallTime: 1403701922.019799] RuntimeError: Cannot free Capability 'std_capabilities/RGBDSensor', because it is not running
[INFO] [WallTime: 1403701922.020218] Rapp Manager : Stopped required capability 'std_capabilities/RGBDSensor'.
[INFO] [WallTime: 1403701922.020605] Rapp Manager : All required capabilities have been stopped.

Do you have a test case covering this situation?

With this in mind it might be better to revert this exception to an error. I currently can't think of a situation where freeing a capability would throw this exception and it would be expected ... then again, it's already late over here. :-)


I decided to go ahead and merge #70 and open this issue for further discussion on the follow up topic described above.

Timeline for release? Developer Mailing List?

Just found this.

What is the current timeline for releasing and/or locking down things?

I recall the early discussions about this system -- is there a mailing list somewhere discussing this system?

Recommendation on interfacing with capabilities

While integrating the capabilities into the app manager, I started to wonder what the recommended way to interface with the capability server is. So far I did it the standard ROS way, i.e. using services and topics. When starting to work with the capability specifications, I realised this is not very convenient, and when looking for a solution I found the various available Python convenience functions, such as these: https://github.com/osrf/capabilities/blob/master/src/capabilities/specs/interface.py

Hence, I wonder if I should request improvements to the ROS API or just switch to using those Python tools. What would be the recommended way, and why?
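
For reference, a minimal sketch of the convenience-function route, assuming the helper capability_interface_from_file_path from capabilities/specs/interface.py and a hypothetical spec file path; the attribute accessed at the end is likewise an assumption.

from capabilities.specs.interface import capability_interface_from_file_path

# Hypothetical path to an interface spec YAML exported by some capability package.
iface = capability_interface_from_file_path('/path/to/RGBDSensor.yaml')
print(iface.name)  # attribute name assumed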

Starting a capability fails silently if it is currently shutting down

We have the need to restart capabilities sometimes, and the way we do it is:

  1. Call stop_capability service
  2. Wait until the capability is stopped
  3. Call start_capability service to start capability again

I haven't found any reliable way to do step (2) though. Currently we poll the get_running_capabilities service, but this stops including the capability instance once it is in the "stopped" state. If we then call the start_capability service, it silently fails to start the capability because, even though it is "stopped", it is still in __capability_instances here:

if x.interface not in self.__capability_instances:

Alternatively, we could try listening to the events topic for a terminate event, and then call start_capability after we see the terminate. This still has a race condition though, because even though LaunchManager.__monitor_process has published a terminate event, the capability server may not yet have received it, processed it, and updated the graph to remove that capability instance.

Is there another way to restart a capability that would be more reliable than this? The easiest fix I can see is that if you call the start_capability service on a capability which is already running/stopped/terminated but still in capability_instances, the service should return an error code that at least lets you know it failed. Then you have the option to wait a bit and try again (sketched below).
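
A minimal sketch of that restart-with-retry workflow, under the assumption that the service types are capabilities/StopCapability, StartCapability and GetRunningCapabilities, and that each running capability's interface name is reachable via running_capabilities[i].capability.capability; all field names should be treated as assumptions.

import rospy
from capabilities.srv import GetRunningCapabilities, StartCapability, StopCapability

def restart_capability(name, timeout=30.0):
    stop = rospy.ServiceProxy('/capability_server/stop_capability', StopCapability)
    start = rospy.ServiceProxy('/capability_server/start_capability', StartCapability)
    running = rospy.ServiceProxy('/capability_server/get_running_capabilities',
                                 GetRunningCapabilities)
    stop(capability=name)
    deadline = rospy.Time.now() + rospy.Duration(timeout)
    while not rospy.is_shutdown() and rospy.Time.now() < deadline:
        # Field layout assumed: running_capabilities[i].capability.capability
        names = [rc.capability.capability for rc in running().running_capabilities]
        if name not in names:
            # No longer listed as running, but it may still be in
            # __capability_instances, so keep retrying start_capability
            # until it reports success (response field name assumed).
            if start(capability=name, preferred_provider='').successful:
                return True
        rospy.sleep(0.5)
    return False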

@tlau

Release to Noetic?

It looks like all packages depended on by this repo have been released to Noetic. Mind making a Noetic release after resolving #89?

Stopping multiple capabilities can cause errors

Currently testing the rocon_apps/teleop rapp with capability support and noticed a strange error when shutting down the rapp and its dependent capabilities.

[INFO] [WallTime: 1399886941.441992] Rapp Manager : Stopping rapp 'rocon_apps/teleop'.
[turtlebot/relay_video-3] killing on exit
[turtlebot/relay_cmd_vel-2] killing on exit
[turtlebot/teleop_velocity_smoother-1] killing on exit
[ INFO] [1399886941.469684467]: Unloading nodelet /turtlebot/teleop_velocity_smoother from manager /capability_server_nodelet_manager
[INFO] [WallTime: 1399886942.874621] Rapp Manager : stopped rapp [rocon_apps/teleop]'.
[INFO] [WallTime: 1399886942.875491] Rapp Manager : Stopping required capabilities.
[INFO] [WallTime: 1399886942.884842] Request to free usage of capability 'std_capabilities/DifferentialMobileBase' (bond id 'b054c6ae-d9b7-11e3-b584-2cd05ab796d3')
[INFO] [WallTime: 1399886942.885948] Capability 'std_capabilities/DifferentialMobileBase' being stopped because it has zero references
[depthimage_to_laserscan-18] killing on exit
[camera/disparity_registered_hw-17] killing on exit
[camera/disparity_registered_sw-16] killing on exit
[camera/disparity_depth-15] killing on exit
[camera/points_xyzrgb_hw_registered-14] killing on exit
 [camera/points_xyzrgb_sw_registered-12] killing on exit
[camera/depth_registered_rectify_depth-13] killing on exit
[camera/register_depth_rgb-11] killing on exit
[camera/depth_points-10] killing on exit
[camera/depth_metric-9] killing on exit
[camera/depth_metric_rect-8] killing on exit
[camera/depth_rectify_depth-7] killing on exit
[camera/rectify_ir-6] killing on exit
[camera/rectify_color-5] killing on exit
[camera/rectify_mono-4] killing on exit
[camera/debayer-3] killing on exit
[camera/driver-2] killing on exit
[camera/camera_nodelet_manager-1] killing on exit
[camera/camera_nodelet_manager-1] escalating to SIGTERM
shutting down processing monitor...
... shutting down processing monitor complete
done
[turtlebot_bringup-2] killing on exit
Traceback (most recent call last):
  File "/home/yujin/ros_ws/turtlebot_ws/src/turtlebot/turtlebot_capabilities/providers/launch/placeholder.py", line 6, in <module>
    sleep(0.1)
KeyboardInterrupt
shutting down processing monitor...
... shutting down processing monitor complete
done
[robot_state_publisher-1] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
[ERROR] [WallTime: 1399886960.565455] Capability Instance 'turtlebot_capabilities/rgbd_sensor' cannot transition to 'stopping' from anything but 'launching' or 'running', current state is 'stopping'                                                                    
[INFO] [WallTime: 1399886960.566842] Capability Provider 'turtlebot_capabilities/rgbd_sensor' for Capability 'std_capabilities/RGBDSensor' has terminated.
[INFO] [WallTime: 1399886960.567999] Capability Provider 'turtlebot_capabilities/turtlebot2_bringup' for Capability 'turtlebot_capabilities/TurtleBotBringup' has terminated.
[INFO] [WallTime: 1399886960.568654] Stopping the 'turtlebot_capabilities/diagnostics' provider of the 'std_capabilities/Diagnostics' interface, because it has no dependents left.
[ERROR] [WallTime: 1399886960.570503] RuntimeError: No running launch file with PID of '815'
[diagnostic_aggregator-1] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
[INFO] [WallTime: 1399886961.621882] Capability Provider 'turtlebot_capabilities/robot_state_publisher' for Capability 'std_capabilities/RobotStatePublisher' has terminated.
[INFO] [WallTime: 1399886961.622872] Capability Provider 'turtlebot_capabilities/diagnostics' for Capability 'std_capabilities/Diagnostics' has terminated.
^C[zeroconf/zeroconf-6] killing on exit
[interactions-5] killing on exit
[app_manager-3] killing on exit
 [master-4] killing on exit
[capability_server-2] killing on exit
Traceback (most recent call last):
  File "/home/yujin/ros_ws/rocon_ws/src/capabilities/src/capabilities/placeholder_script", line 6, in <module>
    sleep(0.1)
KeyboardInterrupt
[ERROR] [WallTime: 1399887012.879617] Capability Instance 'kobuki_capabilities/kobuki_differential_mobile_base' terminated unexpectedly, it was previously in the 'running' state.
[INFO] [WallTime: 1399887012.880260] Capability Provider 'kobuki_capabilities/kobuki_differential_mobile_base' for Capability 'std_capabilities/DifferentialMobileBase' has terminated.
[INFO] [WallTime: 1399887012.880685] Stopping the 'kobuki_capabilities/kobuki_bringup' provider of the 'kobuki_capabilities/KobukiBringup' interface, because it has no dependents left.

So here an error is thrown when the RGBDSensor capability is requested to stop while it is already in the stopping state. This error is reported in the service response, which makes the rapp manager abort the stopping of the other caps.

I see two improvements required.

  • Requesting to stop a cap that is already stopping probably shouldn't be treated as an error (see the sketch below).
  • The rapp manager should continue to stop the remaining capabilities.

I'll look into the stopping capability issue in the rapp manager (robotics-in-concert/rocon_app_platform#233). @wjwwood could you take a look at the capability server part of this issue?
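
A toy illustration of the first point (not the actual server code): a stop request for an instance that is already stopping or stopped could simply be treated as a successful no-op.

def handle_stop_request(instance):
    # State names taken from the error message above; this is only an illustration.
    if instance.state in ('stopping', 'stopped', 'terminated'):
        return True  # already going down: nothing to do, report success
    if instance.state not in ('launching', 'running'):
        raise RuntimeError("Cannot stop capability in state '%s'" % instance.state)
    instance.stop()  # hypothetical transition hook
    return True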

Not able to define roscpp services in capability package.

After running the capability server I am getting the warnings below: no default provider given for capability interface minimal_pkg/slowtodie, and minimal_pkg/specific minimal has only one provider.

I have services written in roscpp and want to define those services in the capability server package so that they can be controlled from this package. However, I am new to Python/YAML, so I am not sure where these services need to be defined. In the package.xml file all of the capability-server-related services are defined. Kindly let me know the approach so that I can explore it from my end. Currently I am able to bring up the server, but I am not sure how to define my services through a capability.

Thoughts on the Specifications

Remappings for non-public interfaces

Looking at the nav stack provider specifically here. It defines a remapping for the scan interface. Is that a remapping for itself or for the LaserObservation dependency? In the former case, since that is a non-public interface, couldn't/shouldn't that be covered by the provider's launch file? In the latter case, wouldn't the LaserObservation provider (hokuyo_base) handle it?

Interfaces

  • Topic-Provides-Requires, Services-Provides

Is there a reason you classify like this instead of explicitly pub, sub, service? I'm not opposed to it, just curious. I should make sure to adopt a similar approach to defining interfaces through other components like the gateways and rapps.

  • Action-Provides-Requires?

Will this be accommodated, or is it simpler to break it down into topics?

  • Dynamic Parameters

By dynamic_reconfigure or as a todo idea?

  • Semantic Interface Provider

Just trying to imagine what this would look like. For example, would it re-use the launch file provided by a regular camera provider?

Simplify usage of semantic and normal capabilities

I think, from a user's point of view (e.g. starting/stopping capabilities, defining dependencies on them), it's not really important to distinguish between semantic and normal capabilities/interfaces.

Is there a need to keep them separate, e.g. get_interfaces, get_semantic_interfaces? The information about their difference is stored in their specification anyway.

Travis CI failing

Since #93 got merged, it seems that Travis CI is failing with

======================================================================
ERROR: Failure: ImportError (No module named pycodestyle)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/virtualenv/python2.7.17/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
    addr.filename, addr.module)
  File "/home/travis/virtualenv/python2.7.17/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/home/travis/virtualenv/python2.7.17/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/tmp/capabilities_build/src/capabilities/test/unit/test_code_quality.py", line 1, in <module>
    import pycodestyle
ImportError: No module named pycodestyle

e.g. https://travis-ci.org/github/osrf/capabilities/builds/681663229

The package.xml defines a rosdep on python-pycodestyle:

capabilities/package.xml

Lines 44 to 45 in df54fa8

<test_depend condition="$ROS_PYTHON_VERSION == 2">python-pycodestyle</test_depend>
<test_depend condition="$ROS_PYTHON_VERSION == 3">python3-pycodestyle</test_depend>

.travis.yml installs other python rosdeps via pip but not pycodestyle:

install:
- pip install nose coverage pep8 PyYAML catkin_pkg rospkg empy python-coveralls
- sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu bionic main" > /etc/apt/sources.list.d/ros-latest.list'
- wget http://packages.ros.org/ros.key -O - | sudo apt-key add -
- sudo apt-get update
- sudo apt-get install python-rosdep
- sudo `which rosdep` init
- rosdep update
- rosdep install --from-paths ./ --rosdistro melodic -y

I think python-pycodestyle might not be installed automatically during the rosdep install because ROS_PYTHON_VERSION is not set yet at this point.

Solution A: Add pycodestyle to line 7 in travis.yml

Solution B: Set ROS_PYTHON_VERSION=2 before line 14 in travis.yml (and clean up line 7: remove the packages that are installed via rosdep)

If B works I think it would be the better solution.

Fix broken tests

Running tests shows a couple of failing ones:

Running tests...
Test project /opt/rocon_workspace/build
      Start  1: _ctest_capabilities_nosetests_test
 1/41 Test  #1: _ctest_capabilities_nosetests_test ..................................................................***Failed    0.74 sec
      Start  2: _ctest_capabilities_rostest_test_rostest_test_client_test_client_module.test
 2/41 Test  #2: _ctest_capabilities_rostest_test_rostest_test_client_test_client_module.test ........................***Failed    2.05 sec
      Start  3: _ctest_capabilities_rostest_test_rostest_test_launch_manager_test_launch_manager.test
 3/41 Test  #3: _ctest_capabilities_rostest_test_rostest_test_launch_manager_test_launch_manager.test ...............***Failed    1.38 sec
      Start  4: _ctest_capabilities_rostest_test_rostest_test_server_test_client.test
 4/41 Test  #4: _ctest_capabilities_rostest_test_rostest_test_server_test_client.test ...............................***Failed    1.60 sec
      Start  5: _ctest_capabilities_rostest_test_rostest_test_server_test_default_provider.test
 5/41 Test  #5: _ctest_capabilities_rostest_test_rostest_test_server_test_default_provider.test .....................***Failed    1.65 sec
      Start  6: _ctest_capabilities_rostest_test_rostest_test_server_test_dependent_capabilities.test
 6/41 Test  #6: _ctest_capabilities_rostest_test_rostest_test_server_test_dependent_capabilities.test ...............***Failed    1.75 sec
      Start  7: _ctest_capabilities_rostest_test_rostest_test_server_test_invalid_specs.test
 7/41 Test  #7: _ctest_capabilities_rostest_test_rostest_test_server_test_invalid_specs.test ........................***Failed    1.58 sec
      Start  8: _ctest_capabilities_rostest_test_rostest_test_server_test_package_white_black_lists.test
 8/41 Test  #8: _ctest_capabilities_rostest_test_rostest_test_server_test_package_white_black_lists.test ............***Failed    1.65 sec
      Start  9: _ctest_capabilities_rostest_test_rostest_test_server_test_remapping.test
 9/41 Test  #9: _ctest_capabilities_rostest_test_rostest_test_server_test_remapping.test ............................***Failed    1.59 sec
      Start 10: _ctest_capabilities_rostest_test_rostest_test_server_test_ros_services.test
10/41 Test #10: _ctest_capabilities_rostest_test_rostest_test_server_test_ros_services.test .........................***Failed    1.65 sec
      Start 11: _ctest_capabilities_rostest_test_rostest_test_server_test_white_black_lists.test
11/41 Test #11: _ctest_capabilities_rostest_test_rostest_test_server_test_white_black_lists.test ....................***Failed    1.65 sec
      Start 12: _ctest_capabilities_rostest_test_rostest_test_service_discovery_test_spec_index_from_service.test
12/41 Test #12: _ctest_capabilities_rostest_test_rostest_test_service_discovery_test_spec_index_from_service.test ...***Failed    2.12 sec

Am I doing something wrong?

Error using with python3

Is this package compatible with python3? I saw that there are a few commits related to this topic.

I am trying to build ROS from source using python3, everything builds just fine.
But when I run capability_server I get the following error:

Traceback (most recent call last):
  File "/ros_catkin_ws/install_isolated/lib/capabilities/capability_server", line 8, in <module>
    from rospy.impl import tcpros_service
  File "/ros_catkin_ws/install_isolated/lib/python3/dist-packages/rospy/__init__.py", line 49, in <module>
    from .client import spin, myargv, init_node, \
  File "/ros_catkin_ws/install_isolated/lib/python3/dist-packages/rospy/client.py", line 52, in <module>
    import roslib
  File "/ros_catkin_ws/install_isolated/lib/python3/dist-packages/roslib/__init__.py", line 50, in <module>
    from roslib.launcher import load_manifest  # noqa: F401
  File "/ros_catkin_ws/install_isolated/lib/python3/dist-packages/roslib/launcher.py", line 42, in <module>
    import rospkg
ImportError: No module named rospkg
================================================================================REQUIRED process [capability_server-7] has died!
process has died [pid 27571, exit code 1, cmd /ros_catkin_ws/install_isolated/lib/capabilities/capability_server __name:=capability_server __log:=/root/.ros/log/211af6f2-a85e-11eb-a4ff-0242ac110002/capability_server-7.log].
log file: /root/.ros/log/211af6f2-a85e-11eb-a4ff-0242ac110002/capability_server-7*.log
Initiating shutdown!
================================================================================

It seems that this package is still trying to import rospkg from Python 2. I don't know if I built it incorrectly or if it is related to the package itself. Sorry if this is my mistake.

I am building the workspace like this:
$ export ROS_PYTHON_VERSION=3 && python3 ./src/catkin/bin/catkin_make_isolated --install -CMAKE_BUILD_TYPE=Release -j1 --catkin-make-args all

BTW, I am using ubuntu 18.04 and ROS melodic.

Make roslauncher for providers with dependencies optional

For implementing multiple caps made available by just one node/launcher (see #22), it might be necessary to implement many stubs (providers with empty launchers). This doesn't play nicely with the capability server's rule of considering a capability shut down when its launcher shuts down.

For now, I implemented a simple node which does nothing, just to keep the launcher alive.

Could we make the launcher optional, if the provider depends on another interface? The state of that provider would then be determined by the state of its dependency.
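
A minimal sketch of such a keep-alive node (consistent with the placeholder script visible in the shutdown tracebacks elsewhere on this page): it does nothing but sleep until roslaunch interrupts it.

#!/usr/bin/env python
from time import sleep

if __name__ == '__main__':
    try:
        while True:
            sleep(0.1)  # idle; the provider's launcher stays "alive"
    except KeyboardInterrupt:
        pass  # roslaunch sends SIGINT on shutdown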
