autowarefoundation / autoware

Autoware - the world's leading open-source software project for autonomous driving

Home Page: https://www.autoware.org/

License: Apache License 2.0

Languages: Shell 73.87%, Dockerfile 23.10%, HCL 3.03%
Topics: autoware, autonomous-vehicles, ros, autonomous-driving, ros2

autoware's People

Contributors

aaryanmurgunde, awf-autoware-bot[bot], badai-nguyen, brkay54, dependabot[bot], esteve, h-ohta, hansrobo, isamu-takagi, kazuki0824, keisukeshima, kenji-miyake, kminoda, lexavtanke, maxime-clem, mitsudome-r, naophis, oguzkaganozt, owen-liuyuxuan, pre-commit-ci[bot], rsasaki0109, sharrrrk, shmpwk, taikitanaka3, takahoribe, vrichardjp, wep21, xmfcx, youtalk, yukke42


autoware's Issues

Add NOTICE files to each repository

I'll adapt the name and year from https://github.com/autowarefoundation/autoware/blob/main/NOTICE for each repository, depending on that repository's files.

  • autoware_common
  • autoware.core
  • autoware.universe
  • autoware_launch
  • sample_sensor_kit_launch
  • sample_vehicle_launch
  • autoware_individual_params
  • autoware-github-actions
  • autoware-documentation

FYI, to clone these repositories:

repositories:
  autoware:
    type: git
    url: https://github.com/autowarefoundation/autoware.git
    version: main
  autoware_common:
    type: git
    url: https://github.com/autowarefoundation/autoware_common.git
    version: main
  autoware.core:
    type: git
    url: https://github.com/autowarefoundation/autoware.core.git
    version: main
  autoware.universe:
    type: git
    url: https://github.com/autowarefoundation/autoware.universe.git
    version: main
  autoware_launch:
    type: git
    url: https://github.com/autowarefoundation/autoware_launch.git
    version: main
  sample_sensor_kit_launch:
    type: git
    url: https://github.com/autowarefoundation/sample_sensor_kit_launch.git
    version: main
  sample_vehicle_launch:
    type: git
    url: https://github.com/autowarefoundation/sample_vehicle_launch.git
    version: main
  autoware_individual_params:
    type: git
    url: https://github.com/autowarefoundation/autoware_individual_params.git
    version: main
  autoware-github-actions:
    type: git
    url: https://github.com/autowarefoundation/autoware-github-actions.git
    version: main
  autoware-documentation:
    type: git
    url: https://github.com/autowarefoundation/autoware-documentation.git
    version: main

Move external repositories to src/external in the .repos file

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

As described in #370 (comment), it makes sense to move external repositories to src/external for easier maintenance.

Purpose

Move repositories to src/external

Possible approaches

Update .repos file so that external repositories are cloned to src/external
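
For illustration, a minimal sketch of the idea: in a .repos file the key is the checkout path relative to the import root, so prefixing a key with external/ makes vcs import src < autoware.repos clone it into src/external. The repository below is a hypothetical example, not the agreed list:

repositories:
  external/grid_map:   # hypothetical entry; the key doubles as the checkout path
    type: git
    url: https://github.com/ANYbotics/grid_map.git
    version: ros2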

Definition of done

All external repositories are cloned to src/external

CUDA installation for ARM64 is broken for some reason

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

https://github.com/autowarefoundation/autoware/runs/6212294556?check_suite_focus=true#step:6:4715

(screenshot)

Expected behavior

No failure.

Actual behavior

It failed.

Steps to reproduce

Run the workflow.

Versions

No response

Possible causes

No response

Additional context

I'll re-run the CI after a while.

sending output via udp

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

I have managed to run Autoware on a Jetson AGX and everything seems to work well!
The only thing I am looking for is how to send the lidar output via UDP.
What I mean is that I used Autoware to detect and track objects using lidar only.
Is there any way to send the output, such as XYZ positions, over UDP?

Purpose

To run with a new control unit.

Possible approaches

Using Autoware with a different robot, not only a car.

Definition of done

Autoware is installed.

Build fails

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

--- stderr: traffic_light_classifier
CMake Error at ament_cmake_symlink_install/ament_cmake_symlink_install.cmake:100 (message):
ament_cmake_symlink_install_directory() can't find
'/home/autoware/src/universe/autoware.universe/perception/traffic_light_classifier/data'
Call Stack (most recent call first):
ament_cmake_symlink_install/ament_cmake_symlink_install.cmake:332 (ament_cmake_symlink_install_directory)
cmake_install.cmake:41 (include)

Expected behavior

The traffic_light_classifier package builds without errors.

Actual behavior

The build fails with the CMake error shown in the description.

Steps to reproduce

Build the traffic_light_classifier package; the same error occurs.

Versions

No response

Possible causes

No response

Additional context

No response

Permission denied errors occur when building perception packages that require trained model ONNX files to be downloaded from Google Drive

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

When doing a clean build of any perception package that requires trained model ONNX files (e.g., lidar_apollo_instance_segmentation), a permission denied error occurs when gdown tries to download the files from Google Drive.

Expected behavior

ONNX files should be downloaded as expected and the perception packages should build without error

Actual behavior

gdown reports a permission denied error

(screenshot from 2022-04-20 16:49:59)

Steps to reproduce

I am not sure how to reproduce the issue short of doing a complete clean build of Autoware

Versions

  • OS: Ubuntu 20.04
  • ROS 2: Galactic
  • Autoware: Universe
  • gdown: 4.4.0

Possible causes

The theory is that there is a limit to the number of requests that can be made via the Google Drive API, and downloads via gdown fail when the limit is hit.

Additional context

No response

Fail to update a workspace

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

I'm following How to update a workspace.
In order to update the repos, it says to run:

vcs import src < autoware.repos
vcs pull

However, it seems vcs pull ignores the repositories under the src directory because of .gitignore.
So should we instead run the following?

vcs import src < autoware.repos
vcs pull src

Expected behavior

Updating repos by vcs pull

Actual behavior

No repos updated by vcs pull

Steps to reproduce

Follow How to update a workspace

Versions

No response

Possible causes

No response

Additional context

No response

Make the Docker image size smaller

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

After #340, the image size is too large because unnecessary packages are installed.

https://github.com/autowarefoundation/autoware/pkgs/container/autoware-universe/22935531?tag=humble-latest-arm64
(screenshot)

Purpose

To reduce the size.

Possible approaches

Select which packages to install.
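
As a sketch of the usual technique (the package shown is illustrative, not the agreed selection), the common apt pattern for keeping image layers small is:

apt-get update
# --no-install-recommends keeps optional dependencies out of the image
apt-get install -y --no-install-recommends ros-humble-desktop
# drop apt caches so they don't bloat the layer (already done elsewhere in the Dockerfile)
apt-get clean && rm -rf /var/lib/apt/lists/*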

Definition of done

The image size is reduced.

Installation of ros-humble-desktop fails

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

Installation of ros-humble-desktop fails.
https://github.com/autowarefoundation/autoware/runs/7851163697?check_suite_focus=true#step:5:1924

#18 155.6 TASK [autoware.dev_env.ros2 : Install ros-humble-desktop] **********************
#18 161.1 fatal: [localhost]: FAILED! => {"cache_update_time": 1660626759, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\"       install 'ros-humble-desktop'' failed: E: Unable to correct problems, you have held broken packages.\n", "rc": 100, "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stderr_lines": ["E: Unable to correct problems, you have held broken packages."], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed"]}

Expected behavior

Installation succeeds.

Actual behavior

Installation fails.

Steps to reproduce

git clone git@github.com:autowarefoundation/autoware.git -b humble
cd autoware
./docker/build.sh

The error log is as follows:

#17 119.1 TASK [autoware.dev_env.ros2 : Install ros-humble-desktop] **********************
#17 122.7 fatal: [localhost]: FAILED! => {"cache_update_time": 1660627442, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\"       install 'ros-humble-desktop'' failed: E: Unable to correct problems, you have held broken packages.\n", "rc": 100, "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stderr_lines": ["E: Unable to correct problems, you have held broken packages."], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed"]}
#17 122.7
#17 122.7 PLAY RECAP *********************************************************************
#17 122.7 localhost                  : ok=9    changed=5    unreachable=0    failed=1    skipped=1    rescued=0    ignored=0
#17 122.7
#17 122.7 Failed.
#17 ERROR: executor failed running [/bin/bash -o pipefail -c ./setup-dev-env.sh -y $SETUP_ARGS universe   && pip uninstall -y ansible ansible-core   && mkdir src   && vcs import src < autoware.repos   && rosdep update   && DEBIAN_FRONTEND=noninteractive rosdep install -y --ignore-src --from-paths src --rosdistro "$ROS_DISTRO"   && apt-get clean   && rm -rf /var/lib/apt/lists/*]: exit code: 1
------
 > [devel devel  8/14] RUN --mount=type=ssh   ./setup-dev-env.sh -y --no-cuda-drivers universe   && pip uninstall -y ansible ansible-core   && mkdir src   && vcs import src < autoware.repos   && rosdep update   && DEBIAN_FRONTEND=noninteractive rosdep install -y --ignore-src --from-paths src --rosdistro "humble"   && apt-get clean   && rm -rf /var/lib/apt/lists/*:
#17 115.8 TASK [autoware.dev_env.ros2 : Add ROS 2 apt repository to source list] *********
#17 119.1 changed: [localhost]
#17 119.1
#17 119.1 TASK [autoware.dev_env.ros2 : Install ros-humble-desktop] **********************
#17 122.7 fatal: [localhost]: FAILED! => {"cache_update_time": 1660627442, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\"       install 'ros-humble-desktop'' failed: E: Unable to correct problems, you have held broken packages.\n", "rc": 100, "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stderr_lines": ["E: Unable to correct problems, you have held broken packages."], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed"]}
#17 122.7
#17 122.7 PLAY RECAP *********************************************************************
#17 122.7 localhost                  : ok=9    changed=5    unreachable=0    failed=1    skipped=1    rescued=0    ignored=0
#17 122.7
#17 122.7 Failed.
------
error: failed to solve: executor failed running [/bin/bash -o pipefail -c ./setup-dev-env.sh -y $SETUP_ARGS universe   && pip uninstall -y ansible ansible-core   && mkdir src   && vcs import src < autoware.repos   && rosdep update   && DEBIAN_FRONTEND=noninteractive rosdep install -y --ignore-src --from-paths src --rosdistro "$ROS_DISTRO"   && apt-get clean   && rm -rf /var/lib/apt/lists/*]: exit code: 1

Versions

  • ROS 2: Humble

Possible causes

Some released packages might be broken.

Additional context

A similar issue happened recently:
https://answers.ros.org/question/402781/ros-humble-ubuntu-2204-apt-install-issue/

The CI succeeded yesterday and failed today.
https://github.com/autowarefoundation/autoware/actions/runs/2857649523
https://github.com/autowarefoundation/autoware/actions/runs/2865405553

Enable multi-stage builds for Autoware

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

Enable multi-stage builds for Autoware (e.g., using Docker Compose or another similar tool).

Purpose

Lay the groundwork, with the right tool, to enable building Autoware.Universe as a microservices architecture.

Possible approaches

  • Use Docker Compose (this is the tool assumed in the definition of done steps below)

From https://docs.docker.com/compose/:
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Definition of done

The obstacle avoidance marker is busy by default

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

The obstacle avoidance debug marker is visually busy by default.
(screenshot)
At TIER IV, the fix for the obstacle avoidance marker has been incorporated, but not in the awf repository.

Purpose

The debug visualization is turned off by default, making the display easier to read.

Possible approaches

Modify the RViz config.

Definition of done

The debugging visualization in avoidance is turned off.
(screenshot)

Add allow-change-held-packages option after ansible 2.13 is released

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

Add the allow-change-held-packages option here.
Unfortunately, ansible 2.13, where the allow-change-held-packages option is implemented, has not been released yet, so I will add this option after ansible 2.13 is released.

Purpose

To prevent the problem described in #154 (comment).

Possible approaches

Add the allow-change-held-packages option here.
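
For reference, a minimal sketch of what the task could look like once ansible-core 2.13 is available (allow_change_held_packages is the documented ansible.builtin.apt option name; the package variable is illustrative):

- name: Install packages even if they are held (illustrative task)
  become: true
  ansible.builtin.apt:
    name: "{{ packages_to_install }}"   # illustrative variable name
    allow_change_held_packages: true    # implemented in ansible-core 2.13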

Definition of done

  • The allow-change-held-packages option is added here.

[Docker] Edit files on host and execute on a container

I'd like to propose an improvement to the Dockerfile so that code is modified on the host environment rather than in a container.

The current Dockerfile on the add-docker branch expects the code to be modified in a container rather than on the host, but it is inefficient to set up the development environment and change the code in a container.
So I propose modifying the files on the host, mounting src into a container, and running Autoware in the container.

[CI] github-release creates multiple releases for the same branch

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

With these branches/tags,
(screenshot)

It will create multiple releases.
(screenshot)

Expected behavior

The second trial should update the existing release that has the same name.

Actual behavior

Another release is created.

Steps to reproduce

  1. Create a tag.
  2. Create and push a beta branch.
  3. Add a commit to the branch and push again.

Versions

No response

Possible causes

I should have used gh release edit for the second trial.
https://cli.github.com/manual/gh_release_edit
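
A minimal sketch of that idea (tag and notes variables are illustrative):

# If a release for this tag already exists, edit it instead of creating a new one
if gh release view "${TAG}" >/dev/null 2>&1; then
  gh release edit "${TAG}" --notes "${NOTES}"
else
  gh release create "${TAG}" --notes "${NOTES}"
fi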

Additional context

No response

Append the morai_msgs package related to test code for checking simulator compatibility

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

The pull request (PR link) has been approved, but it fails to proceed due to a dependency on the morai_msgs package.
At first, the morai_msgs package was in [autoware.universe/tools/simulator_test/], but the simulation working group decided to remove it from there and append it to autoware.repos to maintain consistency with other packages.

Purpose

Since the pull request (PR link) has been approved but fails to build, the morai_msgs package should be added to autoware.repos before further steps.

Possible approaches

None.

Definition of done

The morai_msgs package is listed in autoware.repos.

The location of autoware_auto_msgs

Currently, the autoware_auto_msgs repository for Universe and Core is located in the tier4 group.
I think the repository should be in the autowarefoundation group.

Autoware.universe install fail on Jetson Xavier platform

Discussed in #120

Originally posted by jason914 March 25, 2022
Dear all,

I use the Jetson Xavier platform. It is an ARM64 platform. The OS is JetPack 4.4 (Ubuntu 18.04).
I followed the commands at https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/docker-installation/.
But there are two questions that came up when I installed Autoware.universe.

Q1:
There is an error when using the following command.
./setup-dev-env.sh docker
The following is the error log.

nvidia@xavier:~$ ./setup-dev-env.sh docker
Setting up the build environment take up to 1 hour.

Are you sure to run the setup? [y/N] y
[sudo] password for nvidia:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'ansible' is not installed, so not removed
The following packages were automatically installed and are no longer required:
apt-clone archdetect-deb bogl-bterm busybox-static cryptsetup-bin
dpkg-repack gir1.2-timezonemap-1.0 gir1.2-xkl-1.0 grub-common
kde-window-manager kpackagetool5 kwayland-data kwin-common kwin-data
kwin-x11 libdebian-installer4 libkdecorations2-5v5
libkdecorations2private5v5 libkf5declarative-data libkf5declarative5
libkf5globalaccelprivate5 libkf5idletime5 libkf5kcmutils-data
libkf5kcmutils5 libkf5newstuff-data libkf5newstuff5 libkf5newstuffcore5
libkf5package-data libkf5package5 libkf5plasma5 libkf5quickaddons5
libkf5waylandclient5 libkf5waylandserver5 libkscreenlocker5
libkwin4-effect-builtins1 libkwineffects11 libkwinglutils11
libkwinxrenderutils11 libllvm9 libopts25 libqgsttools-p1 libqt5multimedia5
libqt5multimedia5-plugins libqt5multimediaquick-p5 libqt5multimediawidgets5
libxcb-composite0 libxcb-cursor0 libxcb-damage0 os-prober python-websocket
python3-dbus.mainloop.pyqt5 python3-icu python3-pam python3-pyqt5
python3-pyqt5.qtsvg python3-pyqt5.qtwebkit
qml-module-org-kde-kquickcontrolsaddons qml-module-qtmultimedia
qml-module-qtquick2 rdate sntp tasksel tasksel-data
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 503 not upgraded.
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see pypa/pip#5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
WARNING: The directory '/home/nvidia/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
ERROR: Could not find a version that satisfies the requirement ansible==5.* (from versions: 1.0, 1.1, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.4, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.6.9, 1.6.10, 1.7, 1.7.1, 1.7.2, 1.8, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0.1, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 1.9.6, 2.0.0.0, 2.0.0.1, 2.0.0.2, 2.0.1.0, 2.0.2.0, 2.1.0.0, 2.1.1.0, 2.1.2.0, 2.1.3.0, 2.1.4.0, 2.1.5.0, 2.1.6.0, 2.2.0.0, 2.2.1.0, 2.2.2.0, 2.2.3.0, 2.3.0.0, 2.3.1.0, 2.3.2.0, 2.3.3.0, 2.4.0.0, 2.4.1.0, 2.4.2.0, 2.4.3.0, 2.4.4.0, 2.4.5.0, 2.4.6.0, 2.5.0a1, 2.5.0b1, 2.5.0b2, 2.5.0rc1, 2.5.0rc2, 2.5.0rc3, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.5.4, 2.5.5, 2.5.6, 2.5.7, 2.5.8, 2.5.9, 2.5.10, 2.5.11, 2.5.12, 2.5.13, 2.5.14, 2.5.15, 2.6.0a1, 2.6.0a2, 2.6.0rc1, 2.6.0rc2, 2.6.0rc3, 2.6.0rc4, 2.6.0rc5, 2.6.0, 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.6.6, 2.6.7, 2.6.8, 2.6.9, 2.6.10, 2.6.11, 2.6.12, 2.6.13, 2.6.14, 2.6.15, 2.6.16, 2.6.17, 2.6.18, 2.6.19, 2.6.20, 2.7.0.dev0, 2.7.0a1, 2.7.0b1, 2.7.0rc1, 2.7.0rc2, 2.7.0rc3, 2.7.0rc4, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.7.5, 2.7.6, 2.7.7, 2.7.8, 2.7.9, 2.7.10, 2.7.11, 2.7.12, 2.7.13, 2.7.14, 2.7.15, 2.7.16, 2.7.17, 2.7.18, 2.8.0a1, 2.8.0b1, 2.8.0rc1, 2.8.0rc2, 2.8.0rc3, 2.8.0, 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.8.5, 2.8.6, 2.8.7, 2.8.8, 2.8.9, 2.8.10, 2.8.11, 2.8.12, 2.8.13, 2.8.14, 2.8.15, 2.8.16rc1, 2.8.16, 2.8.17rc1, 2.8.17, 2.8.18rc1, 2.8.18, 2.8.19rc1, 2.8.19, 2.8.20rc1, 2.8.20, 2.9.0b1, 2.9.0rc1, 2.9.0rc2, 2.9.0rc3, 2.9.0rc4, 2.9.0rc5, 2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.9.4, 2.9.5, 2.9.6, 2.9.7, 2.9.8, 2.9.9, 2.9.10, 2.9.11, 2.9.12, 2.9.13, 2.9.14rc1, 2.9.14, 2.9.15rc1, 2.9.15, 2.9.16rc1, 2.9.16, 2.9.17rc1, 2.9.17, 2.9.18rc1, 2.9.18, 2.9.19rc1, 2.9.19, 2.9.20rc1, 2.9.20, 2.9.21rc1, 2.9.21, 2.9.22rc1, 2.9.22, 2.9.23rc1, 2.9.23, 2.9.24rc1, 2.9.24, 2.9.25rc1, 2.9.25, 2.9.26rc1, 2.9.26, 2.9.27rc1, 2.9.27, 2.10.0a1, 2.10.0a2, 2.10.0a3, 2.10.0a4, 2.10.0a5, 2.10.0a6, 2.10.0a7, 2.10.0a8, 2.10.0a9, 2.10.0b1, 2.10.0b2, 2.10.0rc1, 2.10.0, 2.10.1, 2.10.2, 2.10.3, 2.10.4, 2.10.5, 2.10.6, 2.10.7, 3.0.0b1, 3.0.0rc1, 3.0.0, 3.1.0, 3.2.0, 3.3.0, 3.4.0, 4.0.0a1, 4.0.0a2, 4.0.0a3, 4.0.0a4, 4.0.0b1, 4.0.0b2, 4.0.0rc1, 4.0.0, 4.1.0, 4.2.0, 4.3.0, 4.4.0, 4.5.0, 4.6.0, 4.7.0, 4.8.0, 4.9.0, 4.10.0, 5.0.0a1, 5.0.0a2, 5.0.0a3, 5.0.0b1, 5.0.0b2, 5.0.0rc1)
ERROR: No matching distribution found for ansible==5.*

Q2:
There is an error when using the following command.
rocker --nvidia --x11 --user --volume $HOME/autoware -- ghcr.io/autowarefoundation/autoware-universe:latest-arm64
The following is the error log.

nvidia@xavier:~$ rocker --nvidia --x11 --user --volume $HOME/autoware -- ghcr.io/autowarefoundation/autoware-universe:latest-arm64
Extension volume doesn't support default arguments. Please extend it.
Active extensions ['nvidia', 'volume', 'x11', 'user']
Step 1/12 : FROM python:3-slim-stretch as detector
---> 839514fd3bcb
Step 2/12 : RUN mkdir -p /tmp/distrovenv
---> Using cache
---> 1568774de25e
Step 3/12 : RUN python3 -m venv /tmp/distrovenv
---> Using cache
---> d457dc3a1045
Step 4/12 : RUN apt-get update && apt-get install -qy patchelf binutils
---> Using cache
---> 8633694baf65
Step 5/12 : RUN . /tmp/distrovenv/bin/activate && pip install distro pyinstaller==4.0 staticx==0.12.3
---> Running in ea9bf28a2354
Collecting distro
Downloading https://files.pythonhosted.org/packages/e1/54/d08d1ad53788515392bec14d2d6e8c410bffdc127780a9a4aa8e6854d502/distro-1.7.0-py3-none-any.whl
Collecting pyinstaller==4.0
Downloading https://files.pythonhosted.org/packages/82/96/21ba3619647bac2b34b4996b2dbbea8e74a703767ce24192899d9153c058/pyinstaller-4.0.tar.gz (3.5MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing wheel metadata: started
Preparing wheel metadata: finished with status 'done'
Collecting staticx==0.12.3
Downloading https://files.pythonhosted.org/packages/92/ff/d9960ea1f9db48d6044a24ee0f3d78d07bcaddf96eb0c0e8806f941fb7d3/staticx-0.12.3.tar.gz (68kB)
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-install-m_nm8mya/staticx/setup.py", line 4, in
from wheel.bdist_wheel import bdist_wheel
ModuleNotFoundError: No module named 'wheel'

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-m_nm8mya/staticx/

You are using pip version 19.0.3, however version 22.0.4 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

Removing intermediate container ea9bf28a2354
no more output and success not detected
Failed to build detector image
WARNING unable to detect os for base image 'ghcr.io/autowarefoundation/autoware-universe:latest-arm64', maybe the base image does not exist
nvidia@xavier:~$

Could you give me a suggestion?

Best Regards

-Shu-Kang

Reduce Docker image size for runtime usage

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

Originated from a discussion posted by @kasperornmeck in #2780.

Purpose

Reduce the Docker image size for the runtime environment: software that is only needed in the development environment can be removed to reduce the disk space used.

Current status

Details of the container used in the analysis:

REPOSITORY: ghcr.io/autowarefoundation/autoware-universe
TAG: galactic-20220728-prebuilt-cuda-arm64
IMAGE ID: 8b7fbc56f6bf
CREATED: 8 days ago
SIZE: 19.3GB

Definition of done

  • Runtime dependency analysis
  • A reduced-size Docker image for runtime

Add Ccache to the Docker image to speed up recompilation

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

On the Autoware documentation page, Ccache is recommended to speed up recompilation. See here.

Purpose

  • Add Ccache to the Docker image to speed up recompilation.

Possible approaches

  • Adding Ccache in Ansible.
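
For reference, a minimal sketch of how ccache is typically wired into a colcon build once installed (standard CMake compiler-launcher flags; not necessarily the exact commands the Ansible role would run):

sudo apt-get install -y ccache
# Route compiler invocations through ccache via CMake launchers
colcon build --symlink-install \
  --cmake-args -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache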

Definition of done

Add configuration file for TIER IV CI/CD Pipeline

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

I will add a configuration file for TIER IV's CI/CD pipeline.
The configuration file can contain the following:

  • Build type, which determines what type of build will be performed.
  • The type of simulation to run and its runtime arguments.

Purpose

Additional settings for testing simulation in TIER IV's CI/CD pipeline.

Possible approaches

Add the configuration.

Definition of done

The configuration file is added.

Support ROS 2 Rolling on Ubuntu 22.04

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

ROS 2 Humble will be released this May.
https://docs.ros.org/en/rolling/Releases/Release-Humble-Hawksbill.html

To smoothly transition to Humble, it's good to support Rolling before that.

Related: autowarefoundation/autoware.universe#268

Purpose

To smoothly transition to Humble.

Possible approaches

Replace all Galactic-dependent parts with the new styles.

Definition of done

Autoware's tutorials work with ROS 2 Rolling on Ubuntu 22.04.

[Ansible] duplicated entries are written to nvidia-docker.list

If I run ./setup-dev-env.sh docker multiple times, duplicated entries are written.

$ cat /etc/apt/sources.list.d/nvidia-docker.list
deb https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/$(ARCH) /
# deb https://nvidia.github.io/libnvidia-container/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/$(ARCH) /
# deb https://nvidia.github.io/nvidia-container-runtime/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-docker/ubuntu18.04/$(ARCH) /

deb https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/libnvidia-container/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/nvidia-container-runtime/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-docker/ubuntu18.04/$(ARCH) /

deb https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/libnvidia-container/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/nvidia-container-runtime/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-docker/ubuntu18.04/$(ARCH) /
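
One hedged way to make the setup idempotent is to overwrite the list file instead of appending to it, e.g. with the stock NVIDIA install snippet (tee without -a truncates the file, so repeated runs leave a single copy):

distribution=$(. /etc/os-release; echo "${ID}${VERSION_ID}")
curl -s -L "https://nvidia.github.io/nvidia-docker/${distribution}/nvidia-docker.list" \
  | sudo tee /etc/apt/sources.list.d/nvidia-docker.list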

Docker images aren't pushed from scheduled jobs

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

The push condition is false in scheduled jobs.
https://github.com/autowarefoundation/autoware/runs/7830173853?check_suite_focus=true#step:5:491

Therefore, even though the workflow has passed, the new images aren't pushed.
(screenshots)

Expected behavior

The push condition should be true.

Actual behavior

The push condition is false.

Steps to reproduce

Trigger Docker CI workflows with the schedule event.

Versions

No response

Possible causes

This condition is wrong.

push: ${{ github.ref_name == github.event.repository.default_branch }}
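
One hedged sketch of a fix, assuming the cause is that the schedule event payload does not populate github.event.repository (so the right-hand side is empty for scheduled runs):

push: ${{ github.event_name == 'schedule' || github.ref_name == github.event.repository.default_branch }}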

Additional context

No response

CUDA GPG key causes an error

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

See https://github.com/autowarefoundation/autoware_common/runs/6220775368?check_suite_focus=true#step:6:16

Expected behavior

No error.

Actual behavior

Run sudo apt-get -yqq update
W: GPG error: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64  InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC
E: The repository 'https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64  InRelease' is not signed.
Error: Process completed with exit code 100.

Steps to reproduce

Run the workflow.

Versions

No response

Possible causes

  • Update the key (see the sketch below).
  • Remove CUDA from the source list after installation.
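
For the first item, NVIDIA's published remedy for the 2022 repository key rotation is to remove the old key and install the cuda-keyring package (commands as documented by NVIDIA for Ubuntu 20.04 x86_64):

sudo apt-key del 7fa2af80
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update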

Additional context

Create separate containers for testing Open AD Kit

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

I will create two Docker containers that pull Universe and build selected packages, separated by their basic functionality.

Container contents:

  • Container 1:
    • drivers, sensing, perception, localization
  • Container 2:
    • planning, control

The reason we split them this way is to ensure we minimize the data transfer overhead between containers.

Purpose

To prototype the first multi-container version of the Autoware Universe for Open AD Kit.

Possible approaches

I will first test them out on my local machine. Then maybe we can set up the CI, but I am not sure yet.

Definition of done

Vulkan driver fails to detect GPU

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

In the current universe image, vulkaninfo cannot detect the host GPU. This might cause the simulator to crash in the future. This happened as I was trying to run SVL using the universe Docker image. I understand SVL is going down very soon, but this error could affect future development with simulators that require the Vulkan driver. I have not figured out a fix; please feel free to comment if you have any ideas.
By comparing the vulkaninfo outputs, you can see that the problematic output, which comes from the universe image, does not detect the GPU correctly.

Expected behavior

Being able to start the simulator.
Below is the correct output.

==========
VULKANINFO
==========

Vulkan Instance Version: 1.2.131


Instance Extensions: count = 18
====================
	VK_EXT_acquire_xlib_display            : extension revision 1
	VK_EXT_debug_report                    : extension revision 10
	VK_EXT_debug_utils                     : extension revision 2
	VK_EXT_direct_mode_display             : extension revision 1
	VK_EXT_display_surface_counter         : extension revision 1
	VK_KHR_device_group_creation           : extension revision 1
	VK_KHR_display                         : extension revision 23
	VK_KHR_external_fence_capabilities     : extension revision 1
	VK_KHR_external_memory_capabilities    : extension revision 1
	VK_KHR_external_semaphore_capabilities : extension revision 1
	VK_KHR_get_display_properties2         : extension revision 1
	VK_KHR_get_physical_device_properties2 : extension revision 2
	VK_KHR_get_surface_capabilities2       : extension revision 1
	VK_KHR_surface                         : extension revision 25
	VK_KHR_surface_protected_capabilities  : extension revision 1
	VK_KHR_wayland_surface                 : extension revision 6
	VK_KHR_xcb_surface                     : extension revision 6
	VK_KHR_xlib_surface                    : extension revision 6

Layers: count = 4
=======
VK_LAYER_LUNARG_standard_validation (LunarG Standard Validation Layer) Vulkan version 1.0.131, layer version 1:
	Layer Extensions: count = 0
	Devices: count = 2
		GPU id 	: 0 (NVIDIA GeForce RTX 3070 Ti)
		Layer-Device Extensions: count = 0

		GPU id 	: 1 (llvmpipe (LLVM 12.0.0, 256 bits))
		Layer-Device Extensions: count = 0

VK_LAYER_MESA_device_select (Linux device selection layer) Vulkan version 1.2.73, layer version 1:
	Layer Extensions: count = 0
	Devices: count = 2
		GPU id 	: 0 (NVIDIA GeForce RTX 3070 Ti)
		Layer-Device Extensions: count = 0

		GPU id 	: 1 (llvmpipe (LLVM 12.0.0, 256 bits))
		Layer-Device Extensions: count = 0

Actual behavior

Problematic output.

==========
VULKANINFO
==========

Vulkan Instance Version: 1.2.131


Instance Extensions: count = 18
====================
	VK_EXT_acquire_xlib_display            : extension revision 1
	VK_EXT_debug_report                    : extension revision 10
	VK_EXT_debug_utils                     : extension revision 1
	VK_EXT_direct_mode_display             : extension revision 1
	VK_EXT_display_surface_counter         : extension revision 1
	VK_KHR_device_group_creation           : extension revision 1
	VK_KHR_display                         : extension revision 23
	VK_KHR_external_fence_capabilities     : extension revision 1
	VK_KHR_external_memory_capabilities    : extension revision 1
	VK_KHR_external_semaphore_capabilities : extension revision 1
	VK_KHR_get_display_properties2         : extension revision 1
	VK_KHR_get_physical_device_properties2 : extension revision 2
	VK_KHR_get_surface_capabilities2       : extension revision 1
	VK_KHR_surface                         : extension revision 25
	VK_KHR_surface_protected_capabilities  : extension revision 1
	VK_KHR_wayland_surface                 : extension revision 6
	VK_KHR_xcb_surface                     : extension revision 6
	VK_KHR_xlib_surface                    : extension revision 6

Layers: count = 3
=======
VK_LAYER_LUNARG_standard_validation (LunarG Standard Validation Layer) Vulkan version 1.0.131, layer version 1:
	Layer Extensions: count = 0
	Devices: count = 1
		GPU id 	: 0 (llvmpipe (LLVM 12.0.0, 256 bits))
		Layer-Device Extensions: count = 0

VK_LAYER_MESA_device_select (Linux device selection layer) Vulkan version 1.2.73, layer version 1:
	Layer Extensions: count = 0
	Devices: count = 1
		GPU id 	: 0 (llvmpipe (LLVM 12.0.0, 256 bits))
		Layer-Device Extensions: count = 0

VK_LAYER_MESA_overlay (Mesa Overlay layer) Vulkan version 1.1.73, layer version 1:
	Layer Extensions: count = 0
	Devices: count = 1
		GPU id 	: 0 (llvmpipe (LLVM 12.0.0, 256 bits))
		Layer-Device Extensions: count = 0

Steps to reproduce

sudo apt update
sudo apt install vulkan-tools -y
vulkaninfo 2>&1 | tee a.txt
vi a.txt

Versions

Docker images

Possible causes

No response

Additional context

No response

Running setup-dev-env.sh docker fails

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

When I run setup-dev-env.sh docker for the first time, the following error occurs:

nv@xavier-5:~/ws/autoware$ ./setup-dev-env.sh docker
Setting up the build environment take up to 1 hour.
>  Are you sure to run the setup? [y/N] y
./setup-dev-env.sh: line 105: ansible-galaxy: command not found

Expected behavior

The script should run without error

Actual behavior

An error occurred and the script stopped.

Steps to reproduce

  1. Clone the project
  2. Run ./setup-dev-env.sh docker

Versions

  • OS: Ubuntu 20.04
  • ROS2: Galactic
  • Autoware: main

Possible causes

Ansible was previously installed under the user account, but not under the root account. This may cause the version check to pass while ansible-galaxy is not found by the root account.
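
A quick way to check this theory (a sketch, assuming the script ends up calling ansible-galaxy with root privileges):

which ansible-galaxy          # resolves for the regular user
sudo which ansible-galaxy     # may fail if ansible was installed per-user only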

Additional context

No response

[Humble] arm64 build fails

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

https://github.com/autowarefoundation/autoware/runs/6556147878?check_suite_focus=true#step:7:5706

--- stderr: grid_map_core
In file included from /usr/include/eigen3/Eigen/Core:214,
                 from /autoware/src/universe/vendor/grid_map/grid_map_core/include/grid_map_core/GridMap.hpp:13,
                 from /autoware/src/universe/vendor/grid_map/grid_map_core/include/grid_map_core/iterators/PolygonIterator.hpp:14,
                 from /autoware/src/universe/vendor/grid_map/grid_map_core/src/iterators/PolygonIterator.cpp:11:
/usr/include/eigen3/Eigen/src/Core/arch/NEON/PacketMath.h: In function ‘Packet Eigen::internal::pload(const typename Eigen::internal::unpacket_traits<T>::type*) [with Packet = Eigen::internal::eigen_packet_wrapper<int, 2>; typename Eigen::internal::unpacket_traits<T>::type = signed char]’:
/usr/include/eigen3/Eigen/src/Core/arch/NEON/PacketMath.h:1671:9: error: ‘void* memcpy(void*, const void*, size_t)’ copying an object of non-trivial type ‘Eigen::internal::Packet4c’ {aka ‘struct Eigen::internal::eigen_packet_wrapper<int, 2>’} from an array of ‘const int8_t’ {aka ‘const signed char’} [-Werror=class-memaccess]
 1671 |   memcpy(&res, from, sizeof(Packet4c));
      |   ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/Eigen/Core:172,
                 from /autoware/src/universe/vendor/grid_map/grid_map_core/include/grid_map_core/GridMap.hpp:13,
                 from /autoware/src/universe/vendor/grid_map/grid_map_core/include/grid_map_core/iterators/PolygonIterator.hpp:14,
                 from /autoware/src/universe/vendor/grid_map/grid_map_core/src/iterators/PolygonIterator.cpp:11:
/usr/include/eigen3/Eigen/src/Core/GenericPacketMath.h:159:8: note: ‘Eigen::internal::Packet4c’ {aka ‘struct Eigen::internal::eigen_packet_wrapper<int, 2>’} declared here
  159 | struct eigen_packet_wrapper
      |        ^~~~~~~~~~~~~~~~~~~~

Expected behavior

No error.

Actual behavior

There are errors when building grid_map_core.

Steps to reproduce

docker run --rm -it ghcr.io/autowarefoundation/autoware-universe:humble-latest-arm64
colcon build --symlink-install --cmake-args -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Release --packages-up-to grid_map_core

Versions

  • ROS 2 Humble
  • arm64

Possible causes

No response

Additional context

No response

CUDA environment is broken when I run a docker container with rocker

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

When I run a Docker container built from this repository with rocker, nvidia-smi and the CUDA packages of autoware.universe don't work.

Expected behavior

$ rocker --nvidia --x11 --user ghcr.io/autowarefoundation/autoware-universe:humble-latest nvidia-smi

returns the same result as

docker run --rm -it --gpus all -e DISPLAY -e TERM -e QT_X11_NO_MITSHM=1 -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/localtime:/etc/localtime:ro ghcr.io/autowarefoundation/autoware-universe:humble-latest

Actual behavior

$ rocker --nvidia --x11 --user --volume $PWD:$HOME/autoware -- ghcr.io/autowarefoundation/autoware-universe:humble-latest nvidia-smi
...
bash: nvidia-smi: command not found

or

# in the docker container
$ ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml
[INFO] [launch]: All log files can be found below /home/yusuke/.ros/log/2022-05-27-18-24-13-889706-yusuke-desktop-14888
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [lidar_centerpoint_node-1]: process started with pid [14889]
[lidar_centerpoint_node-1] terminate called after throwing an instance of 'thrust::system::detail::bad_alloc'
[lidar_centerpoint_node-1]   what():  std::bad_alloc: cudaErrorInsufficientDriver: CUDA driver version is insufficient for CUDA runtime version
[ERROR] [lidar_centerpoint_node-1]: process has died [pid 14889, exit code -6, cmd '/home/yusuke/autoware/install/lidar_centerpoint/lib/lidar_centerpoint/lidar_centerpoint_node --ros-args -r __node:=lidar_centerpoint --params-file /tmp/launch_params__7bviznb --params-file /tmp/launch_params_xhbpo0pj --params-file /tmp/launch_params_3cva1bu7 --params-file /tmp/launch_params_591wukuf --params-file /tmp/launch_params_binyjp_2 --params-file /tmp/launch_params_d_ubfmz8 --params-file /tmp/launch_params_3ciwtkcg --params-file /tmp/launch_params_cy1qmkld --params-file /home/yusuke/autoware/install/lidar_centerpoint/share/lidar_centerpoint/config/default.param.yaml -r ~/input/pointcloud:=/sensing/lidar/pointcloud -r ~/output/objects:=objects'].

Steps to reproduce

  1. Build a Docker image in the docker directory
  2. Run a Docker container with rocker
  3. Run nvidia-smi

Versions

No response

Possible causes

No response

Additional context

No response

[Documentation] Commit message guidelines for cleaner commit history

The current commit history is automatically generated when merging, but it creates many lines and makes it harder to catch up on what is going on. So I'd like to propose formulating commit message guidelines for a better commit history.

I think we should explicitly define what we should include in the commit history and what we shouldn't. We can refer to other OSS contribution guidelines.

Analyze the execution time of the function nodes

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

We have done some work to analyze and calculate the execution time of the function nodes using ros2_tracing and Trace Compass.

At present, we have analyzed the planning module with the planning_simulator on an ADLINK platform.

The statistical results of some planning nodes (node: time):

mission_planner: 1 s (loading the vector map and mission planning)
behavior_path: 7 ms
behavior_velocity: 2 ms
obstacle_avoidance: 1 ms (normal), 18 ms (replan)
obstacle_stop: 1.5 ms
motion_velocity: 16 ms
surround_obstacle: 900 μs
scenario_select: 300 μs

Purpose

1. When the self-driving vehicle behaves abnormally, the suspicious node can be found from this data.
2. Find the nodes with long execution times in each module and optimize the algorithm if it affects the system.
3. Offer suggestions about the design and deployment for the Bus ODD.

Possible approaches

tools:
https://gitlab.com/ros-tracing/ros2_tracing
https://www.eclipse.org/tracecompass/

Definition of done

The execution time of the function nodes is measured and analyzed.

Apply Autoware to the diff-drive robot

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

We are going to integrate Autoware into the diff-drive robot.

Prerequisites:

We have a diff-drive robot with sensors attached.
We have a motor drive interface.

Purpose

The diff-drive robot will be equipped with automatic driving functions.

Possible approaches

With simulation

  • spawn the URDF in RViz
  • change the controller to a diff-drive type
  • run the planning simulation using the URDF

With robot

  • command the diff-drive robot with Autoware
  • control the diff-drive robot with Autoware
  • sensing with Autoware
  • localization with Autoware
  • apply planning simulation to the real robot

Definition of done

We don't know what we can do with Autoware right now, so we're integrating Autoware with diff-drive robots to investigate what it can do. For example,

  • The diff-drive robot drives autonomously in a room.
    • reach goal
    • avoid obstacle
    • change lane (option) 
    • parking

Related discussion

#170
https://github.com/autowarefoundation/autoware/discussions/173
#183,
#186,
#189,
#174,
#188,
#194,
#196,
#197,
#198,
#199

Improve PlotJuggler installation

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

PlotJuggler installation was added in #243.

But it has a problem: it can't be used if the package is not released.
So even if we add the source in autoware.repos, it will still cause an error.

Since it's low-priority for Galactic, it's okay if we work on it after we move to Humble.

Purpose

To prevent an error when it's not released.

Possible approaches

It can be resolved by adding exec_depend to some packages as explained in https://github.com/orgs/autowarefoundation/discussions/235#discussioncomment-2638505.

There are some choices for where to place the depends.

  • autoware_launch
  • autoware_common/dev_tools (A new package)
  • etc.

Considering many packages depend on autoware_common, is autoware_launch better?

Definition of done

  • Where to place the depends is agreed.
  • The depends are added.
  • The behavior is confirmed.

The lidar_centerpoint package cannot download the model automatically due to LIB_TORCH NOT FOUND

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

The lidar_centerpoint package cannot download the model automatically due to LIB_TORCH NOT FOUND.

Expected behavior

The lidar_centerpoint package can download the model automatically.

Actual behavior

The lidar_centerpoint package cannot download the model automatically due to LIB_TORCH NOT FOUND.

Steps to reproduce

ros2 launch tier4_perception_launch perception.launch.xml mode:=lidar

The output is:
[WARNING] [launch_ros.actions.node]: Parameter file path is not a file: /autoware/install/lidar_centerpoint/share/lidar_centerpoint/config/default.param.yaml

Versions

ROS2

Possible causes

No response

Additional context

No response

Docker instructions for multiple terminal usage

I'm wondering if there is any command in Universe that lets us enter a specific Docker container from another terminal. For example, when we were using ade in Autoware.Auto, if an environment had been created, we could simply type ade enter from another terminal. I understand that we can use docker exec -it <container-id> to enter the same container from another terminal. It seems that when I was using rocker, the container was created on the fly without a tag. I think it would be good to have a simple command, as ade has, to enter the created container rather than copying and pasting a temporary ID.
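
Until such a command exists, a hedged workaround (the image tag shown is illustrative) is to name the container at start-up and attach to it by name:

# Terminal 1: start the container with an explicit name
docker run -it --name autoware ghcr.io/autowarefoundation/autoware-universe:latest

# Terminal 2 and later: open another shell in the same running container
docker exec -it autoware bash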

Autoware.Universe Docker Installation Procedures are wrong

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I'm convinced that this is not my fault but a bug.

Description

Hi,

The current steps for installing Autoware.Universe via Docker are wrong. The manual is hosted on Autoware's website and not here.
Can I post the correct steps here so someone can update the documentation over on Autoware?

Expected behavior

Autoware should have installed and built correctly.

Actual behavior

Installing with the current documentation will result in the following errors:

  • CUDA not found
  • CUDNN not found
  • TensorRT not found
  • nvidia-smi and CUDA version mismatch
  • unable to upgrade CUDA tools due to an invalid cross-link device
  • rocker no longer has the CUDA commands
  • colcon build fails on C++ packages due to memory constraints

Steps to reproduce

Follow documentation

Versions

OS: Ubuntu 20.04
CUDA: 11.6
Rocker: 2.10

Possible causes

Stated Above.

Additional context

Stated Above.

Resolve the libzmqpp-dev package dependency in Ubuntu 22.04

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

scenario_simulator_v2 depends on the libzmqpp-dev package in Ubuntu; this package is supported in Focal but not in Jammy.
In order to support Humble and Ubuntu Jammy (22.04), we have to resolve this dependency.

Purpose

Support Humble and Ubuntu Jammy.

Possible approaches

Add a zmqpp_vendor package to the simulator.repos file.
I have already made a draft pull request for this approach: #350.

Definition of done

Resolve the zmqpp dependency in ROS 2 Humble and Ubuntu Jammy (22.04).

Upgrade CUDA to 11.6 Update 2?

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

"All versions prior to CUDA Toolkit 11.6 Update 2" have a security risk.

https://nvidia.custhelp.com/app/answers/detail/a_id/5334

Purpose

To resolve the risk.

Possible approaches

Update the versions in ansible/playbooks/universe.yaml.

- cuda_version: 11-4
- cudnn_version: 8.2.2.26-1+cuda11.4
- tensorrt_version: 8.2.2-1+cuda11.4
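
A hedged sketch of the updated entries; the cuDNN and TensorRT version strings below are placeholders for illustration and must be matched against what NVIDIA's apt repository actually ships for CUDA 11.6 Update 2:

- cuda_version: 11-6
- cudnn_version: 8.4.0.27-1+cuda11.6   # placeholder, confirm against NVIDIA's apt repo
- tensorrt_version: 8.4.0-1+cuda11.6   # placeholder, confirm against NVIDIA's apt repo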

Definition of done

The versions are updated and the build CI passes.

Use a common base image for CUDA libraries

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

Currently, we install CUDA using setup-dev-env.sh, but it creates different layers and forces people to pull a large part of the image every time.

To avoid that, it's better to have a common base image.

Purpose

To minimize the differences between Autoware's Docker Images.

Possible approaches

  • Wait for NVIDIA to release a new image and deb packages, and use them. (Recommended)
    • We might have to add the --no-cuda option to setup-dev-env.sh.
  • Create a common Autoware base image.

Definition of done

Autoware's Docker Images are based on a common base image.
