autowarefoundation / autoware
Autoware - the world's leading open-source software project for autonomous driving
Home Page: https://www.autoware.org/
License: Apache License 2.0
I'll update the name and year in https://github.com/autowarefoundation/autoware/blob/main/NOTICE based on the repository's files.
FYI, to clone these repositories:
repositories:
  autoware:
    type: git
    url: https://github.com/autowarefoundation/autoware.git
    version: main
  autoware_common:
    type: git
    url: https://github.com/autowarefoundation/autoware_common.git
    version: main
  autoware.core:
    type: git
    url: https://github.com/autowarefoundation/autoware.core.git
    version: main
  autoware.universe:
    type: git
    url: https://github.com/autowarefoundation/autoware.universe.git
    version: main
  autoware_launch:
    type: git
    url: https://github.com/autowarefoundation/autoware_launch.git
    version: main
  sample_sensor_kit_launch:
    type: git
    url: https://github.com/autowarefoundation/sample_sensor_kit_launch.git
    version: main
  sample_vehicle_launch:
    type: git
    url: https://github.com/autowarefoundation/sample_vehicle_launch.git
    version: main
  autoware_individual_params:
    type: git
    url: https://github.com/autowarefoundation/autoware_individual_params.git
    version: main
  autoware-github-actions:
    type: git
    url: https://github.com/autowarefoundation/autoware-github-actions.git
    version: main
  autoware-documentation:
    type: git
    url: https://github.com/autowarefoundation/autoware-documentation.git
    version: main
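For reference, a minimal sketch of cloning these with vcstool, assuming the listing above is saved as autoware.repos at the workspace root:

# import all repositories into src/ (sketch)
mkdir -p src
vcs import src < autoware.repos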
https://github.com/autowarefoundation/autoware/runs/4749425172?check_suite_focus=true
2022-01-08T18:56:43.4386749Z TASK [autoware.dev_env.ros2 : Install ros-galactic-desktop] ********************
2022-01-08T18:59:11.7105352Z ##[warning]You are running out of disk space. The runner will stop working when the machine runs out of disk space. Free space left: 15 MB
Or, the capacity allocated for Docker images is too small.
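If the failure is runner disk space rather than Docker capacity, one common mitigation sketch for GitHub-hosted runners is to delete large preinstalled toolchains before the heavy steps (the paths below are the usual suspects, not guaranteed on every runner image):

# free several GB on a GitHub-hosted runner (sketch)
sudo rm -rf /usr/share/dotnet /usr/local/lib/android
docker system prune -af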
As described in #370 (comment), it makes sense to move external repositories to src/external for easier maintenance.
Move repositories to src/external
Update the .repos file so that external repositories are cloned to src/external
All external repositories are cloned to src/external
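Since vcstool uses the repository key as the destination path, a sketch of what a moved entry could look like (the repository name and URL here are hypothetical placeholders):

repositories:
  external/some_external_repo:   # cloned to src/external/some_external_repo
    type: git
    url: https://github.com/example/some_external_repo.git   # hypothetical URL
    version: main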
https://github.com/autowarefoundation/autoware/runs/6212294556?check_suite_focus=true#step:6:4715
No failure.
It failed.
Run the workflow.
No response
No response
I'll re-run the CI after a while.
I have managed to run Autoware on a Jetson AGX and everything seems to work well!
The only thing I am looking for is how to send the LiDAR output via UDP.
What I mean is that I used Autoware to detect and track objects using LiDAR only.
Is there any way to send the output (e.g. xyz positions) through UDP,
to run with a new control unit? See the sketch below.
I'm using Autoware with a different robot, not only a car.
Autoware is installed.
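Autoware has no built-in UDP bridge, but as a quick-and-dirty sketch you could pipe a topic's YAML output into netcat; the topic name and the receiver address below are assumptions for illustration:

# forward tracked objects over UDP as YAML text (sketch)
ros2 topic echo /perception/object_recognition/tracking/objects \
  | nc -u 192.168.0.10 5000

For a real control unit you would more likely write a small ROS 2 node that subscribes to the tracked objects, extracts the xyz positions, and packs them into a binary datagram.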
Is there any sample bag file to try the Autoware software suite?
--- stderr: traffic_light_classifier
CMake Error at ament_cmake_symlink_install/ament_cmake_symlink_install.cmake:100 (message):
ament_cmake_symlink_install_directory() can't find
'/home/autoware/src/universe/autoware.universe/perception/traffic_light_classifier/data'
Call Stack (most recent call first):
ament_cmake_symlink_install/ament_cmake_symlink_install.cmake:332 (ament_cmake_symlink_install_directory)
cmake_install.cmake:41 (include)
No response
No response
No response
When doing a clean build of any perception packages that require trained model ONNX files (e.g. lidar_apollo_instance_segmentation), a permission denied error occurs when gdown tries to download the files from Google Drive.
ONNX files should be downloaded as expected and the perception packages should build without error
gdown reports a permission denied error
I am not sure how to reproduce the issue short of doing a complete clean build of Autoware
The theory is that there is a limit to the number of requests that can be made via Google Drive API, and downloads via gdown will fail when the limit is hit.
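If that theory is right, a workaround sketch is to retry the download by hand once the quota resets and place the file where the build expects it; the file ID below is a placeholder, not a real one:

# manual retry with gdown; <FILE_ID> is hypothetical
pip3 install gdown
gdown "https://drive.google.com/uc?id=<FILE_ID>" -O data/model.onnx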
No response
I'm following How to update a workspace.
In order to update the repos, it says we should run:
vcs import src < autoware.repos
vcs pull
However, it seems vcs pull ignores everything under the src directory because of .gitignore.
So should we run the following instead?
vcs import src < autoware.repos
vcs pull src
Updating repos by vcs pull
No repos updated by vcs pull
Follow How to update a workspace
No response
No response
No response
After #340, the image size is too large because unnecessary packages are installed.
https://github.com/autowarefoundation/autoware/pkgs/container/autoware-universe/22935531?tag=humble-latest-arm64
To reduce the size.
Select which packages to install.
The image size is reduced.
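One standard lever for this, sketched here as it might appear in a Dockerfile RUN step (the package name is illustrative):

# skip recommended packages and clean the apt cache in the same layer (sketch)
apt-get update \
  && apt-get install -y --no-install-recommends ros-humble-desktop \
  && apt-get clean && rm -rf /var/lib/apt/lists/*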
Opened by this comment: #29 (comment)
Reference links:
Installation of ros-humble-desktop fails.
https://github.com/autowarefoundation/autoware/runs/7851163697?check_suite_focus=true#step:5:1924
#18 155.6 TASK [autoware.dev_env.ros2 : Install ros-humble-desktop] **********************
#18 161.1 fatal: [localhost]: FAILED! => {"cache_update_time": 1660626759, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'ros-humble-desktop'' failed: E: Unable to correct problems, you have held broken packages.\n", "rc": 100, "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stderr_lines": ["E: Unable to correct problems, you have held broken packages."], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed"]}
Installation succeeds.
Installation fails.
git clone git@github.com:autowarefoundation/autoware.git -b humble
cd autoware
./docker/build.sh
The error log is as follows:
#17 119.1 TASK [autoware.dev_env.ros2 : Install ros-humble-desktop] **********************
#17 122.7 fatal: [localhost]: FAILED! => {"cache_update_time": 1660627442, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'ros-humble-desktop'' failed: E: Unable to correct problems, you have held broken packages.\n", "rc": 100, "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stderr_lines": ["E: Unable to correct problems, you have held broken packages."], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed"]}
#17 122.7
#17 122.7 PLAY RECAP *********************************************************************
#17 122.7 localhost : ok=9 changed=5 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
#17 122.7
#17 122.7 Failed.
#17 ERROR: executor failed running [/bin/bash -o pipefail -c ./setup-dev-env.sh -y $SETUP_ARGS universe && pip uninstall -y ansible ansible-core && mkdir src && vcs import src < autoware.repos && rosdep update && DEBIAN_FRONTEND=noninteractive rosdep install -y --ignore-src --from-paths src --rosdistro "$ROS_DISTRO" && apt-get clean && rm -rf /var/lib/apt/lists/*]: exit code: 1
------
> [devel devel 8/14] RUN --mount=type=ssh ./setup-dev-env.sh -y --no-cuda-drivers universe && pip uninstall -y ansible ansible-core && mkdir src && vcs import src < autoware.repos && rosdep update && DEBIAN_FRONTEND=noninteractive rosdep install -y --ignore-src --from-paths src --rosdistro "humble" && apt-get clean && rm -rf /var/lib/apt/lists/*:
#17 115.8 TASK [autoware.dev_env.ros2 : Add ROS 2 apt repository to source list] *********
#17 119.1 changed: [localhost]
#17 119.1
#17 119.1 TASK [autoware.dev_env.ros2 : Install ros-humble-desktop] **********************
#17 122.7 fatal: [localhost]: FAILED! => {"cache_update_time": 1660627442, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\" install 'ros-humble-desktop'' failed: E: Unable to correct problems, you have held broken packages.\n", "rc": 100, "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stderr_lines": ["E: Unable to correct problems, you have held broken packages."], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " libignition-math6-dev : Depends: libignition-cmake2-dev (>= 2.13.0) but 2.12.1-2~jammy is to be installed"]}
#17 122.7
#17 122.7 PLAY RECAP *********************************************************************
#17 122.7 localhost : ok=9 changed=5 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
#17 122.7
#17 122.7 Failed.
------
error: failed to solve: executor failed running [/bin/bash -o pipefail -c ./setup-dev-env.sh -y $SETUP_ARGS universe && pip uninstall -y ansible ansible-core && mkdir src && vcs import src < autoware.repos && rosdep update && DEBIAN_FRONTEND=noninteractive rosdep install -y --ignore-src --from-paths src --rosdistro "$ROS_DISTRO" && apt-get clean && rm -rf /var/lib/apt/lists/*]: exit code: 1
Some released packages might be broken.
A similar issue happened recently:
https://answers.ros.org/question/402781/ros-humble-ubuntu-2204-apt-install-issue/
The CI succeeded yesterday and failed today.
https://github.com/autowarefoundation/autoware/actions/runs/2857649523
https://github.com/autowarefoundation/autoware/actions/runs/2865405553
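Given the unmet libignition dependency above, a workaround sketch until the upstream packages are fixed is to let apt change held packages explicitly (use with care; this is not an official fix):

sudo apt-get update
sudo apt-get install -y --allow-change-held-packages ros-humble-desktop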
As pointed out here, it would be useful if we could run scenario testing on CI.
We can use TIER IV's Autoware Evaluator for this use case.
In that case, we need to add .webauto-ci.yml to this repository.
https://github.com/tier4/AutowareArchitectureProposal.proj/blob/474da436880f93bd59d87107bd9d83254b0cd9b3/.webauto-ci.yml
https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/blob/635b36e0b461f66f25fd4a9c30231ca5f129e5db/.webauto-ci.yml
Enable multi-stage builds for Autoware (eg: using Docker Compose or another similar tool).
Lay the groundwork, with the right tool, to enable building Autoware.Universe as a microservices architecture.
From https://docs.docker.com/compose/:
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
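As a minimal sketch, a compose file could map each stage of a multi-stage Dockerfile to a service; the Dockerfile path and stage names below are assumptions, not the repository's actual layout:

# docker-compose.yml (sketch)
services:
  devel:
    build:
      context: .
      dockerfile: docker/autoware-universe/Dockerfile
      target: devel
  runtime:
    build:
      context: .
      dockerfile: docker/autoware-universe/Dockerfile
      target: runtime

Both stages could then be built with a single docker compose build.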
Create a build-compose.sh script that uses Docker Compose to build an Autoware container as a multi-stage build (the current local build script is /autowarefoundation/autoware/docker/build.sh).
Add the build-compose.sh file and request a review from @kenji-miyake.
Keep the build.sh script for local builds.
The obstacle avoidance marker is busy by default.
In TIER IV, the fix for the obstacle avoidance marker has been incorporated, but not in the AWF repository.
The debug visualization is turned off by default so the display is easier to see.
modify rviz config
Add the allow-change-held-packages option here.
Unfortunately ansible 2.13, where the allow-change-held-packages option is implemented, has not been released yet, so I will add this option after ansible 2.13 is released.
To prevent the problem described in #154 (comment).
Add the allow-change-held-packages option here.
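For reference, once ansible-core 2.13 is out, the change would presumably be a one-line addition to the apt task; a sketch, not the playbook's actual contents:

- name: Install ros-humble-desktop
  become: true
  ansible.builtin.apt:
    name: ros-humble-desktop
    allow_change_held_packages: true   # available from ansible-core 2.13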
I'd like to propose an improvement to the Dockerfile so that code is modified on the host environment rather than in a container.
The current Dockerfile on the add-docker branch expects the code to be modified inside a container rather than on the host, but it is inefficient to set up the development environment and change the code in a container.
So I propose modifying the files on the host, mounting src into the container, and running Autoware in the container; see the sketch below.
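A sketch of the proposed workflow, assuming the workspace lives at ~/autoware on the host and the container expects it under /autoware:

# edit code on the host, build and run it in the container (sketch)
docker run -it --rm \
  -v ~/autoware/src:/autoware/src \
  ghcr.io/autowarefoundation/autoware-universe:latest bash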
It will create multiple releases.
The second trial should update the existing release that has the same name.
Another release is created.
No response
I should have used gh release edit for the second trial.
https://cli.github.com/manual/gh_release_edit
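A sketch of what the second trial should have done; the tag name is illustrative:

# update the existing release in place instead of creating a new one
gh release edit v1.0.0 --notes "Updated release notes"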
No response
The pull request (PR link) has been approved but it fails to proceed due to the dependency on the morai_msgs package.
At first, the morai_msgs package was in autoware.universe/tools/simulator_test/, but the simulation working group decided to remove it from there and append it to autoware.repos to maintain consistency with other packages.
Since the pull request (PR link) has been approved but failed to build, the morai_msgs package should be added to autoware.repos as the next step.
None.
The morai_msgs package is listed in autoware.repos.
Currently, the autoware_auto_msgs repository for universe and core is located in the tier4 group.
I think the repository should be in the autowarefoundation group.
Originally posted by jason914 March 25, 2022
Dear all,
I use the Jetson Xavier platform. It is an ARM64 platform. The OS is JetPack 4.4 (Ubuntu 18.04).
I followed the commands at https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/docker-installation/.
But I have two questions about installing Autoware.Universe.
Q1:
There is an error when using the following command.
./setup-dev-env.sh docker
The following is the error log.
nvidia@xavier:~$ ./setup-dev-env.sh docker
Setting up the build environment take up to 1 hour.
Are you sure to run the setup? [y/N] y
[sudo] password for nvidia:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'ansible' is not installed, so not removed
The following packages were automatically installed and are no longer required:
apt-clone archdetect-deb bogl-bterm busybox-static cryptsetup-bin
dpkg-repack gir1.2-timezonemap-1.0 gir1.2-xkl-1.0 grub-common
kde-window-manager kpackagetool5 kwayland-data kwin-common kwin-data
kwin-x11 libdebian-installer4 libkdecorations2-5v5
libkdecorations2private5v5 libkf5declarative-data libkf5declarative5
libkf5globalaccelprivate5 libkf5idletime5 libkf5kcmutils-data
libkf5kcmutils5 libkf5newstuff-data libkf5newstuff5 libkf5newstuffcore5
libkf5package-data libkf5package5 libkf5plasma5 libkf5quickaddons5
libkf5waylandclient5 libkf5waylandserver5 libkscreenlocker5
libkwin4-effect-builtins1 libkwineffects11 libkwinglutils11
libkwinxrenderutils11 libllvm9 libopts25 libqgsttools-p1 libqt5multimedia5
libqt5multimedia5-plugins libqt5multimediaquick-p5 libqt5multimediawidgets5
libxcb-composite0 libxcb-cursor0 libxcb-damage0 os-prober python-websocket
python3-dbus.mainloop.pyqt5 python3-icu python3-pam python3-pyqt5
python3-pyqt5.qtsvg python3-pyqt5.qtwebkit
qml-module-org-kde-kquickcontrolsaddons qml-module-qtmultimedia
qml-module-qtquick2 rdate sntp tasksel tasksel-data
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 503 not upgraded.
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see pypa/pip#5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
WARNING: The directory '/home/nvidia/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
ERROR: Could not find a version that satisfies the requirement ansible==5.* (from versions: 1.0, 1.1, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.4, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.6.9, 1.6.10, 1.7, 1.7.1, 1.7.2, 1.8, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0.1, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 1.9.6, 2.0.0.0, 2.0.0.1, 2.0.0.2, 2.0.1.0, 2.0.2.0, 2.1.0.0, 2.1.1.0, 2.1.2.0, 2.1.3.0, 2.1.4.0, 2.1.5.0, 2.1.6.0, 2.2.0.0, 2.2.1.0, 2.2.2.0, 2.2.3.0, 2.3.0.0, 2.3.1.0, 2.3.2.0, 2.3.3.0, 2.4.0.0, 2.4.1.0, 2.4.2.0, 2.4.3.0, 2.4.4.0, 2.4.5.0, 2.4.6.0, 2.5.0a1, 2.5.0b1, 2.5.0b2, 2.5.0rc1, 2.5.0rc2, 2.5.0rc3, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.5.4, 2.5.5, 2.5.6, 2.5.7, 2.5.8, 2.5.9, 2.5.10, 2.5.11, 2.5.12, 2.5.13, 2.5.14, 2.5.15, 2.6.0a1, 2.6.0a2, 2.6.0rc1, 2.6.0rc2, 2.6.0rc3, 2.6.0rc4, 2.6.0rc5, 2.6.0, 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.6.6, 2.6.7, 2.6.8, 2.6.9, 2.6.10, 2.6.11, 2.6.12, 2.6.13, 2.6.14, 2.6.15, 2.6.16, 2.6.17, 2.6.18, 2.6.19, 2.6.20, 2.7.0.dev0, 2.7.0a1, 2.7.0b1, 2.7.0rc1, 2.7.0rc2, 2.7.0rc3, 2.7.0rc4, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.7.5, 2.7.6, 2.7.7, 2.7.8, 2.7.9, 2.7.10, 2.7.11, 2.7.12, 2.7.13, 2.7.14, 2.7.15, 2.7.16, 2.7.17, 2.7.18, 2.8.0a1, 2.8.0b1, 2.8.0rc1, 2.8.0rc2, 2.8.0rc3, 2.8.0, 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.8.5, 2.8.6, 2.8.7, 2.8.8, 2.8.9, 2.8.10, 2.8.11, 2.8.12, 2.8.13, 2.8.14, 2.8.15, 2.8.16rc1, 2.8.16, 2.8.17rc1, 2.8.17, 2.8.18rc1, 2.8.18, 2.8.19rc1, 2.8.19, 2.8.20rc1, 2.8.20, 2.9.0b1, 2.9.0rc1, 2.9.0rc2, 2.9.0rc3, 2.9.0rc4, 2.9.0rc5, 2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.9.4, 2.9.5, 2.9.6, 2.9.7, 2.9.8, 2.9.9, 2.9.10, 2.9.11, 2.9.12, 2.9.13, 2.9.14rc1, 2.9.14, 2.9.15rc1, 2.9.15, 2.9.16rc1, 2.9.16, 2.9.17rc1, 2.9.17, 2.9.18rc1, 2.9.18, 2.9.19rc1, 2.9.19, 2.9.20rc1, 2.9.20, 2.9.21rc1, 2.9.21, 2.9.22rc1, 2.9.22, 2.9.23rc1, 2.9.23, 2.9.24rc1, 2.9.24, 2.9.25rc1, 2.9.25, 2.9.26rc1, 2.9.26, 2.9.27rc1, 2.9.27, 2.10.0a1, 2.10.0a2, 2.10.0a3, 2.10.0a4, 2.10.0a5, 2.10.0a6, 2.10.0a7, 2.10.0a8, 2.10.0a9, 2.10.0b1, 2.10.0b2, 2.10.0rc1, 2.10.0, 2.10.1, 2.10.2, 2.10.3, 2.10.4, 2.10.5, 2.10.6, 2.10.7, 3.0.0b1, 3.0.0rc1, 3.0.0, 3.1.0, 3.2.0, 3.3.0, 3.4.0, 4.0.0a1, 4.0.0a2, 4.0.0a3, 4.0.0a4, 4.0.0b1, 4.0.0b2, 4.0.0rc1, 4.0.0, 4.1.0, 4.2.0, 4.3.0, 4.4.0, 4.5.0, 4.6.0, 4.7.0, 4.8.0, 4.9.0, 4.10.0, 5.0.0a1, 5.0.0a2, 5.0.0a3, 5.0.0b1, 5.0.0b2, 5.0.0rc1)
ERROR: No matching distribution found for ansible==5.*
Q2:
There is an error when using the following command.
rocker --nvidia --x11 --user --volume $HOME/autoware -- ghcr.io/autowarefoundation/autoware-universe:latest-arm64
The following is the error log.
nvidia@xavier:~$ rocker --nvidia --x11 --user --volume $HOME/autoware -- ghcr.io/autowarefoundation/autoware-universe:latest-arm64
Extension volume doesn't support default arguments. Please extend it.
Active extensions ['nvidia', 'volume', 'x11', 'user']
Step 1/12 : FROM python:3-slim-stretch as detector
---> 839514fd3bcb
Step 2/12 : RUN mkdir -p /tmp/distrovenv
---> Using cache
---> 1568774de25e
Step 3/12 : RUN python3 -m venv /tmp/distrovenv
---> Using cache
---> d457dc3a1045
Step 4/12 : RUN apt-get update && apt-get install -qy patchelf binutils
---> Using cache
---> 8633694baf65
Step 5/12 : RUN . /tmp/distrovenv/bin/activate && pip install distro pyinstaller==4.0 staticx==0.12.3
---> Running in ea9bf28a2354
Collecting distro
Downloading https://files.pythonhosted.org/packages/e1/54/d08d1ad53788515392bec14d2d6e8c410bffdc127780a9a4aa8e6854d502/distro-1.7.0-py3-none-any.whl
Collecting pyinstaller==4.0
Downloading https://files.pythonhosted.org/packages/82/96/21ba3619647bac2b34b4996b2dbbea8e74a703767ce24192899d9153c058/pyinstaller-4.0.tar.gz (3.5MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing wheel metadata: started
Preparing wheel metadata: finished with status 'done'
Collecting staticx==0.12.3
Downloading https://files.pythonhosted.org/packages/92/ff/d9960ea1f9db48d6044a24ee0f3d78d07bcaddf96eb0c0e8806f941fb7d3/staticx-0.12.3.tar.gz (68kB)
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-install-m_nm8mya/staticx/setup.py", line 4, in
from wheel.bdist_wheel import bdist_wheel
ModuleNotFoundError: No module named 'wheel'
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-m_nm8mya/staticx/
You are using pip version 19.0.3, however version 22.0.4 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Removing intermediate container ea9bf28a2354
no more output and success not detected
Failed to build detector image
WARNING unable to detect os for base image 'ghcr.io/autowarefoundation/autoware-universe:latest-arm64', maybe the base image does not exist
nvidia@xavier:~$
Could you give me a suggestion?
Best Regards
-Shu-Kang
Originated from a discussion posted by @kasperornmeck in #2780.
Reduce the Docker image size for the runtime environment: unnecessary software from the development environment could be removed to reduce the disk space used.
Details of the container used in the analysis:
REPOSITORY: ghcr.io/autowarefoundation/autoware-universe
TAG: galactic-20220728-prebuilt-cuda-arm64
IMAGE ID: 8b7fbc56f6bf
CREATED: 8 days ago
SIZE: 19.3GB
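A sketch of how the per-layer breakdown of such an image can be inspected, using the tag from the table above:

docker history --no-trunc \
  ghcr.io/autowarefoundation/autoware-universe:galactic-20220728-prebuilt-cuda-arm64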
On the Autoware documentation page, ccache is recommended to speed up recompilation. See here.
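A minimal sketch of enabling ccache for a colcon build, assuming Ubuntu's ccache package and its compiler symlink directory:

sudo apt-get install -y ccache
export CC=/usr/lib/ccache/gcc
export CXX=/usr/lib/ccache/g++
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release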
I will add a configuration file for TIER IV's CI/CD pipeline.
The configuration file can contain the following:
Additional settings for testing simulation in TIER IV's CI/CD pipeline.
Add the configuration.
The configuration file is added.
ROS 2 Humble will be released this May.
https://docs.ros.org/en/rolling/Releases/Release-Humble-Hawksbill.html
To smoothly transition to Humble, it's good to support Rolling before that.
Related: autowarefoundation/autoware.universe#268
To smoothly transition to Humble.
Replace all Galactic-dependent parts with the new styles.
Autoware's tutorials work with ROS 2 Rolling on Ubuntu 22.04.
If I run ./setup-dev-env.sh docker multiple times, duplicate entries are written.
$ cat /etc/apt/sources.list.d/nvidia-docker.list
deb https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/$(ARCH) /
# deb https://nvidia.github.io/libnvidia-container/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/$(ARCH) /
# deb https://nvidia.github.io/nvidia-container-runtime/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-docker/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/libnvidia-container/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/nvidia-container-runtime/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-docker/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/libnvidia-container/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/libnvidia-container/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/stable/ubuntu18.04/$(ARCH) /
#deb https://nvidia.github.io/nvidia-container-runtime/experimental/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-docker/ubuntu18.04/$(ARCH) /
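The likely cause is that the setup script appends to the list file on every run; a sketch of an idempotent variant that overwrites the file instead (this mirrors NVIDIA's documented install snippet):

distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L "https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list" \
  | sudo tee /etc/apt/sources.list.d/nvidia-docker.list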
The push condition is false in scheduled jobs.
https://github.com/autowarefoundation/autoware/runs/7830173853?check_suite_focus=true#step:5:491
Therefore, even though the workflow has passed, the new images aren't pushed.
The push condition should be true.
The push condition is false.
Trigger Docker CI workflows with the schedule event.
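A sketch of the kind of condition this fix implies for the push step; the expression is illustrative, not the workflow's actual YAML:

# push images for scheduled runs as well as pushes (sketch)
if: ${{ github.event_name == 'push' || github.event_name == 'schedule' }}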
No response
This condition is wrong.
No response
No error.
Run sudo apt-get -yqq update
W: GPG error: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC
E: The repository 'https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 InRelease' is not signed.
Error: Process completed with exit code 100.
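This matches NVIDIA's 2022 rotation of the CUDA repository signing key; the fix they documented at the time was to install the new cuda-keyring package, sketched here for Ubuntu 20.04 (verify the current URL before relying on it):

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update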
Run the workflow.
No response
Currently, the sensor_kit description and launch packages are separated, but the vehicle description and launch packages are combined, which is inconsistent.
sensor_kit
vehicle
We can reconsider the structure and naming.
I will create two Docker containers that pull the universe and build selected packages separated by their basic functionality.
Container contents:
The reason we split them this way is to minimize the data transfer overhead between containers.
To prototype the first multi-container version of Autoware Universe for the Open AD Kit.
I will first test them on my local machine. Then maybe we can set up the CI, but I am not sure yet.
Decide what packages are required for the container
First container:
Second container:
In the current universe image, vulkaninfo cannot detect the host GPU. This might cause the simulator to crash in the future. I noticed this while trying to run svl using the universe Docker image. I understand svl is going down very soon, but this error could affect future development with simulators that require the Vulkan driver. I have not figured out a fix. Please feel free to comment if you have any ideas.
By comparing the vulkaninfo outputs below, you can see that the problematic output (from the universe image) does not detect the GPU correctly.
Being able to start the simulator.
Below is the correct output.
==========
VULKANINFO
==========
Vulkan Instance Version: 1.2.131
Instance Extensions: count = 18
====================
VK_EXT_acquire_xlib_display : extension revision 1
VK_EXT_debug_report : extension revision 10
VK_EXT_debug_utils : extension revision 2
VK_EXT_direct_mode_display : extension revision 1
VK_EXT_display_surface_counter : extension revision 1
VK_KHR_device_group_creation : extension revision 1
VK_KHR_display : extension revision 23
VK_KHR_external_fence_capabilities : extension revision 1
VK_KHR_external_memory_capabilities : extension revision 1
VK_KHR_external_semaphore_capabilities : extension revision 1
VK_KHR_get_display_properties2 : extension revision 1
VK_KHR_get_physical_device_properties2 : extension revision 2
VK_KHR_get_surface_capabilities2 : extension revision 1
VK_KHR_surface : extension revision 25
VK_KHR_surface_protected_capabilities : extension revision 1
VK_KHR_wayland_surface : extension revision 6
VK_KHR_xcb_surface : extension revision 6
VK_KHR_xlib_surface : extension revision 6
Layers: count = 4
=======
VK_LAYER_LUNARG_standard_validation (LunarG Standard Validation Layer) Vulkan version 1.0.131, layer version 1:
Layer Extensions: count = 0
Devices: count = 2
GPU id : 0 (NVIDIA GeForce RTX 3070 Ti)
Layer-Device Extensions: count = 0
GPU id : 1 (llvmpipe (LLVM 12.0.0, 256 bits))
Layer-Device Extensions: count = 0
VK_LAYER_MESA_device_select (Linux device selection layer) Vulkan version 1.2.73, layer version 1:
Layer Extensions: count = 0
Devices: count = 2
GPU id : 0 (NVIDIA GeForce RTX 3070 Ti)
Layer-Device Extensions: count = 0
GPU id : 1 (llvmpipe (LLVM 12.0.0, 256 bits))
Layer-Device Extensions: count = 0
Below is the problematic output:
==========
VULKANINFO
==========
Vulkan Instance Version: 1.2.131
Instance Extensions: count = 18
====================
VK_EXT_acquire_xlib_display : extension revision 1
VK_EXT_debug_report : extension revision 10
VK_EXT_debug_utils : extension revision 1
VK_EXT_direct_mode_display : extension revision 1
VK_EXT_display_surface_counter : extension revision 1
VK_KHR_device_group_creation : extension revision 1
VK_KHR_display : extension revision 23
VK_KHR_external_fence_capabilities : extension revision 1
VK_KHR_external_memory_capabilities : extension revision 1
VK_KHR_external_semaphore_capabilities : extension revision 1
VK_KHR_get_display_properties2 : extension revision 1
VK_KHR_get_physical_device_properties2 : extension revision 2
VK_KHR_get_surface_capabilities2 : extension revision 1
VK_KHR_surface : extension revision 25
VK_KHR_surface_protected_capabilities : extension revision 1
VK_KHR_wayland_surface : extension revision 6
VK_KHR_xcb_surface : extension revision 6
VK_KHR_xlib_surface : extension revision 6
Layers: count = 3
=======
VK_LAYER_LUNARG_standard_validation (LunarG Standard Validation Layer) Vulkan version 1.0.131, layer version 1:
Layer Extensions: count = 0
Devices: count = 1
GPU id : 0 (llvmpipe (LLVM 12.0.0, 256 bits))
Layer-Device Extensions: count = 0
VK_LAYER_MESA_device_select (Linux device selection layer) Vulkan version 1.2.73, layer version 1:
Layer Extensions: count = 0
Devices: count = 1
GPU id : 0 (llvmpipe (LLVM 12.0.0, 256 bits))
Layer-Device Extensions: count = 0
VK_LAYER_MESA_overlay (Mesa Overlay layer) Vulkan version 1.1.73, layer version 1:
Layer Extensions: count = 0
Devices: count = 1
GPU id : 0 (llvmpipe (LLVM 12.0.0, 256 bits))
Layer-Device Extensions: count = 0
sudo apt update
sudo apt install vulkan-tools -y
vulkaninfo 2>&1 | tee a.txt
vi a.txt
Docker images
No response
No response
When I run setup-dev-env.sh docker for the first time, the following error occurs:
nv@xavier-5:~/ws/autoware$ ./setup-dev-env.sh docker
Setting up the build environment take up to 1 hour.
> Are you sure to run the setup? [y/N] y
./setup-dev-env.sh: line 105: ansible-galaxy: command not found
The script should run without error
An error occurred and the script stopped.
./setup-dev-env.sh docker
ansible was previously installed under the user account, but not under the root account. This may cause the version check to pass even though ansible-galaxy is not found by the root account.
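Given that root cause, a workaround sketch is to install Ansible where the privileged part of the script can also see it (the version pin follows the script's own ansible==5.* requirement shown in the earlier log):

# make ansible-galaxy visible to root as well (sketch)
sudo python3 -m pip install "ansible==5.*"
sudo ansible-galaxy --version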
No response
https://github.com/autowarefoundation/autoware/runs/6556147878?check_suite_focus=true#step:7:5706
--- stderr: grid_map_core
In file included from /usr/include/eigen3/Eigen/Core:214,
from /autoware/src/universe/vendor/grid_map/grid_map_core/include/grid_map_core/GridMap.hpp:13,
from /autoware/src/universe/vendor/grid_map/grid_map_core/include/grid_map_core/iterators/PolygonIterator.hpp:14,
from /autoware/src/universe/vendor/grid_map/grid_map_core/src/iterators/PolygonIterator.cpp:11:
/usr/include/eigen3/Eigen/src/Core/arch/NEON/PacketMath.h: In function ‘Packet Eigen::internal::pload(const typename Eigen::internal::unpacket_traits<T>::type*) [with Packet = Eigen::internal::eigen_packet_wrapper<int, 2>; typename Eigen::internal::unpacket_traits<T>::type = signed char]’:
/usr/include/eigen3/Eigen/src/Core/arch/NEON/PacketMath.h:1671:9: error: ‘void* memcpy(void*, const void*, size_t)’ copying an object of non-trivial type ‘Eigen::internal::Packet4c’ {aka ‘struct Eigen::internal::eigen_packet_wrapper<int, 2>’} from an array of ‘const int8_t’ {aka ‘const signed char’} [-Werror=class-memaccess]
1671 | memcpy(&res, from, sizeof(Packet4c));
| ~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/eigen3/Eigen/Core:172,
from /autoware/src/universe/vendor/grid_map/grid_map_core/include/grid_map_core/GridMap.hpp:13,
from /autoware/src/universe/vendor/grid_map/grid_map_core/include/grid_map_core/iterators/PolygonIterator.hpp:14,
from /autoware/src/universe/vendor/grid_map/grid_map_core/src/iterators/PolygonIterator.cpp:11:
/usr/include/eigen3/Eigen/src/Core/GenericPacketMath.h:159:8: note: ‘Eigen::internal::Packet4c’ {aka ‘struct Eigen::internal::eigen_packet_wrapper<int, 2>’} declared here
159 | struct eigen_packet_wrapper
| ^~~~~~~~~~~~~~~~~~~~
No error.
There are errors when building grid_map_core.
docker run --rm -it ghcr.io/autowarefoundation/autoware-universe:humble-latest-arm64
colcon build --symlink-install --cmake-args -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Release --packages-up-to grid_map_core
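Until the underlying Eigen/GCC NEON issue is fixed upstream, a workaround sketch is to demote that specific warning from an error for the affected package; this is an assumption-level workaround, not an official fix:

colcon build --symlink-install --packages-up-to grid_map_core \
  --cmake-args -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_CXX_FLAGS="-Wno-error=class-memaccess"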
No response
No response
When I run a Docker container built from this repository with rocker, nvidia-smi and the CUDA packages of autoware.universe don't work.
$ rocker --nvidia --x11 --user ghcr.io/autowarefoundation/autoware-universe:humble-latest nvidia-smi
returns the same result as
docker run --rm -it --gpus all -e DISPLAY -e TERM -e QT_X11_NO_MITSHM=1 -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/localtime:/etc/localtime:ro ghcr.io/autowarefoundation/autoware-universe:humble-latest
$ rocker --nvidia --x11 --user --volume $PWD:$HOME/autoware -- ghcr.io/autowarefoundation/autoware-universe:humble-latest nvidia-smi
...
bash: nvidia-smi: command not found
or
# in the docker container
$ ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml
[INFO] [launch]: All log files can be found below /home/yusuke/.ros/log/2022-05-27-18-24-13-889706-yusuke-desktop-14888
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [lidar_centerpoint_node-1]: process started with pid [14889]
[lidar_centerpoint_node-1] terminate called after throwing an instance of 'thrust::system::detail::bad_alloc'
[lidar_centerpoint_node-1] what(): std::bad_alloc: cudaErrorInsufficientDriver: CUDA driver version is insufficient for CUDA runtime version
[ERROR] [lidar_centerpoint_node-1]: process has died [pid 14889, exit code -6, cmd '/home/yusuke/autoware/install/lidar_centerpoint/lib/lidar_centerpoint/lidar_centerpoint_node --ros-args -r __node:=lidar_centerpoint --params-file /tmp/launch_params__7bviznb --params-file /tmp/launch_params_xhbpo0pj --params-file /tmp/launch_params_3cva1bu7 --params-file /tmp/launch_params_591wukuf --params-file /tmp/launch_params_binyjp_2 --params-file /tmp/launch_params_d_ubfmz8 --params-file /tmp/launch_params_3ciwtkcg --params-file /tmp/launch_params_cy1qmkld --params-file /home/yusuke/autoware/install/lidar_centerpoint/share/lidar_centerpoint/config/default.param.yaml -r ~/input/pointcloud:=/sensing/lidar/pointcloud -r ~/output/objects:=objects'].
docker directory
nvidia-smi
No response
No response
No response
The current commit history is automatically generated when merging, but it creates many lines and makes it harder to catch up on what is going on. So I'd like to propose formulating a commit message guideline for better commit history.
I think we should explicitly define what we should include in the commit history and what we don't include. We can refer to other OSS contribution guidelines.
We have done some work to analyze and calculate the execution time of the function nodes using ros2_tracing and Trace Compass.
At present, we have analyzed the planning module across the planning_simulator on ADLINK hardware.
The statistical results of some nodes
node: time
mission_planner: 1 s (loading the vector map and mission planning)
behavior_path: 7 ms
behavior_velocity: 2 ms
obstacle_avoidance: normal: 1 ms, replan: 18 ms
obstacle_stop: 1.5 ms
motion_velocity: 16 ms
surround_obstacle: 900 µs
scenario_select: 300 µs
1. When the self-driving vehicle behaves abnormally, the suspicious node can be found accordingly.
2. Find the nodes with long execution times in each module and optimize the algorithm if it affects the system.
3. Offer some suggestions about the design and deployment for the Bus ODD.
tools:
https://gitlab.com/ros-tracing/ros2_tracing
https://www.eclipse.org/tracecompass/
execution time of the function nodes
We are going to integrate Autoware into a diff-drive robot.
We have a diff-drive robot with sensors attached.
We have a motor drive interface.
The diff-drive robot will be implemented with automatic driving functions.
We don't know yet what we can do with Autoware, so we're integrating Autoware with diff-drive robots to investigate what it can do. For example,
#170
https://github.com/autowarefoundation/autoware/discussions/173
#183,
#186,
#189,
#174,
#188,
#194,
#196,
#197,
#198,
#199
Enable the OpenCDA workflow.
PlotJuggler installation was added in #243.
But it has a problem: it can't be used if it's not released.
So if we add the source in autoware.repos, it will still cause an error.
Since it's low-priority for Galactic, it's okay if we work on it after we move to Humble.
To prevent an error when it's not released.
It can be resolved by adding exec_depend to some packages, as explained in https://github.com/orgs/autowarefoundation/discussions/235#discussioncomment-2638505.
There are some choices for where to place the depends:
autoware_launch
autoware_common/dev_tools
(a new package)
Considering that many packages depend on autoware_common, is autoware_launch better? A sketch of the dependency line follows below.
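For reference, a sketch of the exec_depend entry this would add to the chosen package's package.xml (assuming the released package name is plotjuggler):

<!-- package.xml of the chosen package (sketch) -->
<exec_depend>plotjuggler</exec_depend>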
libembree-dev can't be installed on ARM.
#8 (comment)
Install embree manually
Run rosdep install with --skip-keys=embree
The lidar_centerpoint model cannot be downloaded automatically due to LIB_TORCH NOT FOUND.
The lidar_centerpoint model can be downloaded automatically.
The lidar_centerpoint model cannot be downloaded automatically due to LIB_TORCH NOT FOUND.
ros2 launch tier4_perception_launch perception.launch.xml mode:=lidar
The output is:
ROS2
No response
No response
The current Dockerfile doesn't install kvaser-interface even though autoware.universe requires it.
To install kvaser-interface we need to explicitly add the repository, as in the installation procedure, so we need to update the Ansible playbook.
lidar_centerpoint cannot be built on Docker since the current Dockerfile doesn't install libtorch.
lidar_centerpoint explicitly requires a libtorch installation to build the executable, so we need to add the lines to download and install libtorch in the Dockerfile or the Ansible playbook.
I'm wondering if there is any cmd existing in universe such that we could enter a specific Docker container from another terminal. For example, when we were using ade in Autoware.Auto, if an environment had been created, we could simply type ade enter from another terminal. I understand that we could use docker exec -it <container id> to enter the same container from another terminal. It seems that when I was using rocker, the container was created on the fly without a tag. I think it would be good to have some other simple cmd, as ade does, to enter the created container rather than copy and paste a temporary id. See the sketch below.
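A small ade enter-style helper is easy to approximate; a sketch that assumes the most recently created container is the one you want:

# attach a new shell to the most recently created container (sketch)
docker exec -it "$(docker ps -lq)" bash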
A follow-up for #2435.
To support ROS 2 Humble.
Improve autowarefoundation/autoware_core_universe_prototype#248.
Autoware's Humble Docker image is published.
Hi,
The current steps for installing Autoware.Universe via Docker are wrong. The manual is hosted on Autoware's website and not here.
Can I post the correct steps here so someone can update the documentation on the Autoware website?
Installing with the current documentation will result in the following error:
It should have installed and built correctly.
Follow the documentation.
OS: Ubuntu 20.04
CUDA: 11.6
Rocker: 2.10
Stated Above.
Stated Above.
scenario_simulator_v2 depends on the libzmqpp-dev package in Ubuntu; this package is supported on Focal but not on Jammy.
In order to support Humble and Ubuntu Jammy (22.04), we have to resolve this dependency.
Support Humble and Ubuntu Jammy.
Add a zmqpp_vendor package to the simulator.repos file, as sketched below.
I have already made a draft pull request for this approach: #350.
Resolve the zmqpp dependency for ROS 2 Humble and Ubuntu Jammy (22.04).
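A sketch of the kind of entry this would add to simulator.repos; the URL is a hypothetical placeholder:

repositories:
  external/zmqpp_vendor:
    type: git
    url: https://github.com/example/zmqpp_vendor.git   # hypothetical URL
    version: main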
"All versions prior to CUDA Toolkit 11.6 Update 2" have a security risk.
https://nvidia.custhelp.com/app/answers/detail/a_id/5334
To resolve the risk.
Update the versions in ansible/playbooks/universe.yaml.
autoware/ansible/playbooks/universe.yaml
Lines 6 to 8 in b260f67
The versions are updated and the build CI passes.
Currently, we install CUDA using setup-dev-env.sh, but it creates different layers and forces people to pull a large part of the image every time.
To avoid that, it's better to have a common base image.
To minimize the differences between Autoware's Docker images.
Add a --no-cuda option to setup-dev-env.sh.
Autoware's Docker images are based on a common base image; a sketch follows below.
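A sketch of the multi-stage structure this implies; the CUDA tag and stage names are illustrative, not the actual Dockerfile:

# Dockerfile (sketch): derive every variant from one shared base layer
FROM nvidia/cuda:11.6.2-devel-ubuntu20.04 AS base
# ...common setup shared by all images...
FROM base AS devel
FROM base AS runtime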