osrf / rocker
A tool to run docker containers with overlays and convenient options for things like GUIs etc.
License: Apache License 2.0
ROS 2 rviz - trying to use the command listed, but I think it's missing --x11?
rocker --nvidia osrf/ros:crystal-desktop rviz2
I'm putting this here for reference as it was one of the first things I tried to do with rviz2. It looks like there's an issue upstream about this: moby/moby#38442
As a workaround you can use the --noexecute option and copy the docker run command, adding --privileged to the invocation.
The error you will see is:
dbus[1]: The last reference on a connection was dropped without closing the connection. This is a bug in an application. See dbus_connection_unref() documentation for details.
Most likely, the application was supposed to call dbus_connection_close(), since this is a private connection.
D-Bus not built with -rdynamic so unable to print a backtrace
Is it possible to have an option to not docker run with --rm?
Perhaps this is something to do with my system, but after installing rocker through apt on Ubuntu 18.04, I seem to be getting this error:
$ rocker --nvidia --user --pull --pulse osrf/ros:crystal-desktop rviz2
Traceback (most recent call last):
File "/usr/bin/rocker", line 6, in <module>
from pkg_resources import load_entry_point
File "/usr/local/lib/python3.6/dist-packages/pkg_resources/__init__.py", line 3098, in <module>
@_call_aside
File "/usr/local/lib/python3.6/dist-packages/pkg_resources/__init__.py", line 3082, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pkg_resources/__init__.py", line 3111, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/local/lib/python3.6/dist-packages/pkg_resources/__init__.py", line 573, in _build_master
ws.require(__requires__)
File "/usr/local/lib/python3.6/dist-packages/pkg_resources/__init__.py", line 891, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/local/lib/python3.6/dist-packages/pkg_resources/__init__.py", line 777, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'distro' distribution was not found and is required by rocker
This didn't seem to resolve the issue above:
https://stackoverflow.com/a/6200314
More info:
$ apt show python3-rocker
Package: python3-rocker
Version: 0.0.1-100
Priority: optional
Section: python
Maintainer: Tully Foote <[email protected]>
Installed-Size: 48.1 kB
Depends: python3-empy, python3-pexpect, python3-requests, python3:any (>= 3.2~), python3-docker
Conflicts: python-rocker
Download-Size: 7,584 B
APT-Manual-Installed: yes
APT-Sources: http://packages.ros.org/ros/ubuntu bionic/main amd64 Packages
Description: A tool to run docker containers with customized extras
A tool to run docker containers with customized extra added like nvidia gui support overlayed.
It would be great if the container started with rocker could be named so we do not have to look up random strings each time we want to exec something in the running container.
It's currently only available in the CLI interface. It could easily be a free function for easier reuse.
The os_detector relies on the python base images. I had been using the locally cached version, and only noticed there was a newer one when I manually invoked the pull mechanism.
It would be good to at least tell users when their image is cached and out of date compared to upstream.
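One way to surface that warning would be to compare the locally cached image's digest against the upstream tag. The helper below is a sketch: the function name is made up, and the caller would have to obtain `local_repo_digests` (e.g. an image's RepoDigests attribute from the docker SDK) and the upstream digest itself.

```python
def is_image_stale(local_repo_digests, remote_digest):
    """Return True if the locally cached image does not match upstream.

    local_repo_digests: entries like 'python@sha256:abc...' (the form
    found in an image's RepoDigests attribute).
    remote_digest: 'sha256:abc...' for the upstream tag.
    """
    local = {d.split('@', 1)[1] for d in local_repo_digests if '@' in d}
    return remote_digest not in local
```

A caller could then print "your cached image is out of date, consider --pull" without forcing a pull.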
I believe this is a great tool for developing in ROS and makes the job of teaching ROS also much easier. In addition, I believe that this is a great tool for improving the quality of code that the ROS community creates.
We do need to have a wiki and at the same time also update the ROS wiki as it is very misleading and does not showcase/explain rocker's full potential.
Line 247 in b1fae79
This line seems to pass -it to docker if --mode non-interactive is passed to rocker. It seems to me that actually makes the container start in interactive mode. Isn't the -d flag what we want for detached execution?
For example the nvidia extension will need the x11 extension to be useful.
Also the home extension should require the user extension for permissions.
There's a question of ordering: whether a required extension must come before the one that needs it, and likewise for each sequential element, aka snippets.
Preamble elements are named, so they're order independent, and arguments are order independent as far as I know, as long as they're escaped properly.
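The dependency requirement above (nvidia needing x11, home needing user) could be resolved with a topological sort over a declared requires-mapping. This is a sketch; the function name and the shape of the `requires` mapping are assumptions, not rocker's actual API:

```python
def order_extensions(active, requires):
    """Order extensions so each one's dependencies come first.

    requires maps an extension name to the set of extensions it depends
    on, e.g. {'nvidia': {'x11'}, 'home': {'user'}} (hypothetical).
    Inactive dependencies are pulled in automatically.
    """
    ordered, seen = [], set()

    def visit(name, stack=()):
        if name in seen:
            return
        if name in stack:
            raise ValueError('cyclic extension dependency: %s' % name)
        for dep in sorted(requires.get(name, ())):
            visit(dep, stack + (name,))
        seen.add(name)
        ordered.append(name)

    for name in sorted(active):
        visit(name)
    return ordered
```

Snippets would then be emitted in this order, which also answers the "required before or not" question deterministically.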
This will resolve this todo:
Line 134 in cea0680
It would be helpful if it was possible to pass parameters with which to call docker run. In the case of the MoveIt docker image, the rocker --user flag does not work, but a custom command would have made a fine workaround. This has come up in moveit/moveit.ros.org#282
I'm not clear on best practices here, but naively I would have imagined a call like this:
rocker --nvidia --x11 --docker-run-options '-v /home/my_user/ws_moveit:/home/root/ws_moveit' moveit/moveit:master-source bash
Might be related to #19; I'm not sure I understand the scope of that.
X based content fails when attempting to xforward as unable to connect to the x server.
root@27b685fa76ad:/# xeyes
Error: Can't open display: localhost:10.0
When running remotely, it can still be sent to the local display successfully like this:
root@27b685fa76ad:/# DISPLAY=:0.0 xeyes
Right now it's not called and done in precondition_environment usually with a sys.exit call.
Some things that are done in the overlayed image might make more sense to be supported in extending the docker entrypoint.
This would allow you to do things like extend paths, set environment variables, and other things more related to runtime. Also elements baked into the entrypoint can allow greater reuse of rocker produced images.
One possible approach would be to have a generic entrypoint that sources all setup files in a directory where extensions can insert their own elements, using a numbered file for ordering.
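That generic entrypoint could be generated on the rocker side. The sketch below builds the shell text from a snippet directory; the directory path and naming convention (numbered `NN-name.sh` files for lexicographic ordering) are assumptions for illustration:

```python
import os

def generate_entrypoint(snippet_dir='/etc/rocker-entrypoint.d'):
    """Build a shell entrypoint that sources numbered setup files in order.

    Extensions would drop files like 10-nvidia.sh or 50-user.sh into
    snippet_dir (a hypothetical location); sorting the names gives a
    predictable ordering before exec'ing the container command.
    """
    lines = ['#!/bin/sh', 'set -e']
    if os.path.isdir(snippet_dir):
        for name in sorted(os.listdir(snippet_dir)):
            if name.endswith('.sh'):
                lines.append('. %s' % os.path.join(snippet_dir, name))
    lines.append('exec "$@"')
    return '\n'.join(lines) + '\n'
```

Because the setup lives in the entrypoint rather than baked RUN layers, a rocker-produced image reused outside rocker still gets the same runtime environment.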
Right now it crashes when it tries to reinject the nvidia settings. It would be good to either detect that and skip, or detect that it's incompatible (such as a different target platform) and error out, instead of erroring in the build step:
building > ---> 2ffed4320c8f
building > Step 8/11 : RUN ( echo '/usr/local/lib/x86_64-linux-gnu' >> /etc/ld.so.conf.d/glvnd.conf && ldconfig || grep -q /usr/local/lib/x86_64-linux-gnu /etc/ld.so.conf.d/glvnd.conf ) && ( echo '/usr/local/lib/i386-linux-gnu' >> /etc/ld.so.conf.d/glvnd.conf && ldconfig || grep -q /usr/local/lib/i386-linux-gnu /etc/ld.so.conf.d/glvnd.conf )
building > ---> Running in 4d3162f4f30b
building > /bin/sh: 1: cannot create /etc/ld.so.conf.d/glvnd.conf: Permission denied
building > grep: /etc/ld.so.conf.d/glvnd.conf: No such file or directory
When extending the nvidia tests to cover xenial and bionic it failed to build on bionic using both the nvidia and user arguments.
I ended up removing the user element for the unit test as it was unnecessary: 67281fc#diff-deb510f3e7ec57637aba0ae0de5382d6L61
This appears to be a perl issue with differing versions, that's most commonly associated with mixing binary distributions and cpan distributions.
building > Step 12/14 : RUN apt-get update && apt-get install -y sudo && apt-get clean
building > ---> Using cache
building > ---> 9d5a0b1cf12d
building > Step 13/14 : RUN useradd -U --uid 1000 -ms /bin/bash tfoote && echo "tfoote:tfoote" | chpasswd && adduser tfoote sudo && echo "tfoote ALL=NOPASSWD: ALL" >> /etc/sudoers.d/tfoote
building > ---> Running in 8458b0534fea
building > Fcntl.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
After this there is no more output and success is not detected.
In a previous build I also detected an issue slightly earlier, but it seemed to recover:
Need to get 428 kB of archives.
After this operation, 1765 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 sudo amd64 1.8.21p2-3ubuntu1 [428 kB]
building > IO.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
building > Fetched 428 kB in 1s (403 kB/s)
building > Selecting previously unselected package sudo.
Getting into that container I was able to confirm that it's the adduser command that's crashing:
$ docker run -ti --rm 9d5a0b1cf12d bash
root@b64960dbb3e3:/# adduser -h
Fcntl.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
But there's nothing from cpan on the system:
root@b64960dbb3e3:/# env | grep PERL
root@b64960dbb3e3:/#
Things like a dist-upgrade don't help either.
Perl itself seems to be running fine
root@b64960dbb3e3:/# perl --version
This is perl 5, version 26, subversion 1 (v5.26.1) built for x86_64-linux-gnu-thread-multi
(with 67 registered patches, see perl -V for more detail)
Copyright 1987-2017, Larry Wall
Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.
Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl". If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.
root@b64960dbb3e3:/# which perl
/usr/bin/perl
Other apt functions are having trouble:
root@b64960dbb3e3:/# sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Fetched 252 kB in 1s (208 kB/s)
Reading package lists... Done
root@b64960dbb3e3:/# sudo apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
apt libapt-pkg5.0 tar
3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 2203 kB of archives.
After this operation, 22.5 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 tar amd64 1.29b-2ubuntu0.1 [234 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libapt-pkg5.0 amd64 1.6.8 [804 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 apt amd64 1.6.8 [1165 kB]
Fetched 2203 kB in 2s (1399 kB/s)
IO.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
(Reading database ... 4762 files and directories currently installed.)
Preparing to unpack .../tar_1.29b-2ubuntu0.1_amd64.deb ...
Unpacking tar (1.29b-2ubuntu0.1) over (1.29b-2) ...
Setting up tar (1.29b-2ubuntu0.1) ...
(Reading database ... 4762 files and directories currently installed.)
Preparing to unpack .../libapt-pkg5.0_1.6.8_amd64.deb ...
Unpacking libapt-pkg5.0:amd64 (1.6.8) over (1.6.6ubuntu0.1) ...
Setting up libapt-pkg5.0:amd64 (1.6.8) ...
(Reading database ... 4762 files and directories currently installed.)
Preparing to unpack .../archives/apt_1.6.8_amd64.deb ...
Unpacking apt (1.6.8) over (1.6.6ubuntu0.1) ...
Setting up apt (1.6.8) ...
Installing new version of config file /etc/apt/apt.conf.d/01autoremove ...
Fcntl.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
Cwd.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
Cwd.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
Cwd.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
Cwd.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
Cwd.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
Cwd.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
Setting up less (487-0.1) ...
Fcntl.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xde00080)
dpkg: error processing package less (--configure):
installed less package post-installation script subprocess returned error exit status 1
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Errors were encountered while processing:
less
E: Sub-process /usr/bin/dpkg returned an error code (1)
What I believe is the equivalent command line for the failing test works fine
rocker --nvidia --user testfixture_bionic_glmark2
When there's a build error the output in rocker is suppressed.
As you can see manually running the command afterwards shows that it's failing due to executable file not found. But the rocker build just ends with an error code and no information. It is an invalid configuration but that doesn't mean we shouldn't show the error message.
$ rocker --home --user hello-world --noex
Plugins found: ['dev_helpers', 'home', 'nvidia', 'pulse', 'user']
Active extensions ['home', 'user']
Writing dockerfile to /tmp/tmpwfb8q_7_/Dockerfile
vvvvvv
# Preamble from extension [home]
# Preamble from extension [user]
FROM hello-world
# Snippet from extension [home]
# Snippet from extension [user]
# make sure sudo is installed to be able to give user sudo access in docker
RUN apt-get update \
&& apt-get install -y \
sudo \
&& apt-get clean
RUN useradd -U --uid 1000 -ms /bin/bash osrf \
&& echo "osrf:osrf" | chpasswd \
&& adduser osrf sudo \
&& echo "osrf ALL=NOPASSWD: ALL" >> /etc/sudoers.d/osrf
# Commands below run as the developer user
USER osrf
^^^^^^
Building docker file with arguments: {'path': '/tmp/tmpwfb8q_7_', 'rm': True, 'decode': True, 'nocache': False, 'tag': 'rocker_hello-world_home_user'}
building > Step 1/4 : FROM hello-world
building > ---> fce289e99eb9
building > Step 2/4 : RUN apt-get update && apt-get install -y sudo && apt-get clean
building > ---> Running in 6f95c70df683
2 osrf@osrf-toughpad1:~$ docker run -ti --rm 6f95c70df683
Unable to find image '6f95c70df683:latest' locally
docker: Error response from daemon: pull access denied for 6f95c70df683, repository does not exist or may require 'docker login'.
See 'docker run --help'.
125 osrf@osrf-toughpad1:~$ docker run -ti --rm fce289e99eb9 bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown.
127 osrf@osrf-toughpad1:~$ docker run -ti --rm fce289e99eb9 sh
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown.
127 osrf@osrf-toughpad1:~$ docker run -ti --rm fce289e99eb9
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
The project on pypi has been migrated to this, but the old version was 0.1.0, so the next increment should go above that to make sure it installs over the older releases.
The --x11 extension fails to run xauth on the host if the hard-coded /tmp/.docker.xauth already exists and was created by another user (permissions are 0600):
xauth: timeout in locking authority file /tmp/.docker.xauth
Failed setting up XAuthority with command xauth nlist localhost:11.0 | sed -e 's/^..../ffff/' | xauth -f /tmp/.docker.xauth nmerge -
Failed to precondition for extension [x11] with error: Command 'xauth nlist localhost:11.0 | sed -e 's/^..../ffff/' | xauth -f /tmp/.docker.xauth nmerge -' returned non-zero exit status 1.
deactivating
I suggest creating the file on the host with tempfile.mkstemp instead, so that each rocker invocation is independent of previous runs:
rocker/src/rocker/nvidia_extension.py
Line 44 in b4a736b
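The suggested change could look like the sketch below (function name and prefix are illustrative, not rocker's code). `tempfile.mkstemp` creates the file atomically with mode 0600 owned by the calling user, so concurrent invocations by different users can't collide the way a fixed /tmp/.docker.xauth does:

```python
import os
import tempfile

def make_xauth_file():
    """Create a per-invocation xauth file instead of a shared fixed path.

    mkstemp returns an open fd and a unique path; xauth reopens the file
    by name, so we only need the path and can close the fd immediately.
    """
    fd, path = tempfile.mkstemp(prefix='.docker.xauth_')
    os.close(fd)
    return path
```

The returned path would then be substituted wherever the extension currently hard-codes /tmp/.docker.xauth, both for the host-side xauth nmerge and the -v mount.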
I get the following error when docker is not running and I try to use rocker
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 394, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.9/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.9/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.9/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.9/http/client.py", line 1010, in _send_output
self.send(msg)
File "/usr/lib/python3.9/http/client.py", line 950, in send
self.connect()
File "/home/ridhwan/.local/lib/python3.9/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "/usr/lib/python3.9/site-packages/urllib3/util/retry.py", line 531, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/lib/python3.9/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 394, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.9/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.9/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.9/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.9/http/client.py", line 1010, in _send_output
self.send(msg)
File "/usr/lib/python3.9/http/client.py", line 950, in send
self.connect()
File "/home/ridhwan/.local/lib/python3.9/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionRefusedError(111, 'Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ridhwan/.local/lib/python3.9/site-packages/docker/api/client.py", line 214, in _retrieve_server_version
return self.version(api_version=False)["ApiVersion"]
File "/home/ridhwan/.local/lib/python3.9/site-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
File "/home/ridhwan/.local/lib/python3.9/site-packages/docker/utils/decorators.py", line 46, in inner
return f(self, *args, **kwargs)
File "/home/ridhwan/.local/lib/python3.9/site-packages/docker/api/client.py", line 237, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/lib/python3.9/site-packages/requests/sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python3.9/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.9/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.9/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionRefusedError(111, 'Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ridhwan/.local/bin/rocker", line 8, in <module>
sys.exit(main())
File "/home/ridhwan/.local/lib/python3.9/site-packages/rocker/cli.py", line 45, in main
extension_manager.extend_cli_parser(parser, default_args)
File "/home/ridhwan/.local/lib/python3.9/site-packages/rocker/core.py", line 99, in extend_cli_parser
p.register_arguments(parser, default_args)
File "/home/ridhwan/.local/lib/python3.9/site-packages/rocker/extensions.py", line 137, in register_arguments
client = get_docker_client()
File "/home/ridhwan/.local/lib/python3.9/site-packages/rocker/core.py", line 121, in get_docker_client
docker_client = docker.APIClient()
File "/home/ridhwan/.local/lib/python3.9/site-packages/docker/api/client.py", line 197, in __init__
self._version = self._retrieve_server_version()
File "/home/ridhwan/.local/lib/python3.9/site-packages/docker/api/client.py", line 221, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', ConnectionRefusedError(111, 'Connection refused'))
We do have a dependency missing exception but it isn't triggered here. Maybe we should add this as well.
Run rocker without starting docker daemon.
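One way to turn that traceback into an actionable message is to wrap client creation and exit with a hint. The sketch takes the client factory as a parameter to stay self-contained; in rocker the factory would be something like docker.APIClient, and the function name is an assumption:

```python
import sys

def get_docker_client(factory):
    """Create a docker client, exiting with a friendly message on failure.

    The docker SDK raises an exception (wrapping ECONNREFUSED or EACCES)
    when the daemon is unreachable; surface that as guidance instead of
    a raw traceback.
    """
    try:
        return factory()
    except Exception as ex:
        sys.exit('Could not talk to the Docker daemon (%s).\n'
                 'Is the docker service running, and do you have '
                 'permission to access its socket?' % ex)
```

This would cover both the daemon-not-running case here and the permission-denied case reported further below.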
rocker can generate images; for convenient sharing, these images could be uploaded to a registry, or the Dockerfile could be committed to a repository with auto builds enabled on dockerhub. Then rocker could use that. A --no-build option would be valuable to support this.
Even more powerful would be if the extensions used in the layer were captured so that rocker run could generate the correct runtime arguments for the extensions embedded into the image. This could possibly be implemented using labels or marker files in the image. Potentially the image could embed the rocker extensions in a known location such that the extensions would not need to be installed on the host system.
This would be valuable for reproducible aggregate images. The host system verification checks could still be run.
Talking with @sloretz it would be good to have the ability to do things as the user in the Dockerfile snippets.
I think the recommended pattern would be for the snippet to switch with a USER command at the top and then return to USER root at the end. To make this useful, though, a few helper functions would be valuable to let the plugins query for the username to switch to. Alternatively they could ask for their snippet to be run as the user.
Relatedly, the user account would need to be set up at the beginning, before the snippet, and the final USER setting appended at the end.
This would drive a need for extension point dependencies. And maybe the --user option would drive the --user-account option and anything that wants to run as a user would also require the --user-account option. And we could make sure that user-account is run first.
A slightly more integrated solution would be to allow the snippet to be registered with an option "run as user" and then the switch to and from USER would be handled by rocker instead of by the snippet. This would make it less likely that a snippet would forget to return the USER to root and break everything downstream.
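That more integrated option could be a small wrapper applied when a snippet registers with a run-as-user flag. The sketch below is illustrative (the function name and flag are assumptions); rocker, not the snippet, emits the USER switches, so root is always restored:

```python
def wrap_snippet_as_user(snippet, username, run_as_user):
    """Wrap an extension's Dockerfile snippet with USER switches.

    When run_as_user is set, switch to the target user before the
    snippet and back to root afterwards, so a snippet can't forget to
    restore root and break extensions later in the Dockerfile.
    """
    if not run_as_user:
        return snippet
    return 'USER %s\n%s\nUSER root\n' % (username, snippet.rstrip('\n'))
```

Snippets that opt out keep today's behavior; snippets that opt in never touch USER themselves.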
I use rocker --network host --device /dev/dri/card0 --x11 --pulse --user --home --env "DISPLAY" "XDG_CONFIG_HOME=~/.config" TERM=xterm-256color -- my_image.
The final docker call is docker run -it --rm --device /dev/dri/card0 --device /dev/dri/card0 -e DISPLAY -e 'XDG_CONFIG_HOME=~/.config' -e TERM=xterm-256color -v /home/gael:/home/gael --network host -v /run/user/1000/pulse:/run/user/1000/pulse --device /dev/snd -e PULSE_SERVER=unix:/run/user/1000/pulse/native -v /run/user/1000/pulse/native:/run/user/1000/pulse/native --group-add 29 -e DISPLAY -e TERM -e QT_X11_NO_MITSHM=1 -e XAUTHORITY=/tmp/.docker.xauth -v /tmp/.docker.xauth:/tmp/.docker.xauth -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/localtime:/etc/localtime:ro d3ddfa229ca8.
The TERM environment variable is set twice, and the last one takes TERM from the calling environment, which in my case doesn't work because this particular terminal is not installed in the image. I know some workarounds but this is inconvenient.
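A fix could dedupe the accumulated env entries before composing the final docker command, preferring an explicit NAME=value over a bare NAME (which docker fills from the host environment). This is a sketch: the function name and the list-of-entries representation are assumptions, not rocker's internals:

```python
def dedupe_env(entries):
    """Collapse repeated env entries (the values passed after -e).

    With docker the last -e for a name wins, so an extension appending a
    bare 'TERM' silently overrides an earlier explicit 'TERM=...'.
    Keep one entry per variable, in first-seen order, preferring the
    explicit NAME=value form over a bare NAME.
    """
    chosen, order = {}, []
    for entry in entries:
        name = entry.split('=', 1)[0]
        if name not in chosen:
            order.append(name)
            chosen[name] = entry
        elif '=' in entry and '=' not in chosen[name]:
            chosen[name] = entry
    return [chosen[n] for n in order]
```

In the example above this would keep -e TERM=xterm-256color and drop the later bare -e TERM appended by the x11 extension.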
By default most things go through except for the window resize signals.
https://pexpect.readthedocs.io/en/stable/api/pexpect.html#pexpect.spawn.interact
On my Focal host I cannot run the nvidia images on xenial 16.04 anymore. However the tests pass on my bionic host 18.04.
The nvidia 450 driver and the nvidia 440 drivers are respectively installed.
The quick solution is to sudo apt install python3-distutils, but it should be declared properly.
https://github.com/boxboat/fixuid uses a Go program to change the UID and GID inside the container; we could leverage this or use the same technique with a bash or python script.
This would be more robust if it provided its own entrypoint.
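The same technique in script form would boil down to a few commands the entrypoint runs as root before dropping to the user. The helper below only builds the command lists (names and the home-directory layout are assumptions; a real version, like fixuid, also has to fix ownership of any other files the old uid created):

```python
def uid_fixup_commands(username, host_uid, host_gid):
    """Commands an entrypoint could run (as root) to remap the image
    user's uid/gid to the invoking host user's, in the spirit of
    boxboat/fixuid. Returned as argv lists for subprocess.
    """
    return [
        ['usermod', '-u', str(host_uid), username],
        ['groupmod', '-g', str(host_gid), username],
        ['chown', '-R', '%d:%d' % (host_uid, host_gid),
         '/home/%s' % username],
    ]
```

Doing this at container start (rather than bake-time, as the user extension does now) means one image works for any host uid.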
Backtrace of the error.
$ rocker --x11 --nvidia --user --home --pulse ddm
Plugins found: ['dev_helpers', 'home', 'nvidia', 'pulse', 'user', 'x11']
Active extensions ['home', 'nvidia', 'pulse', 'x11', 'user']
Traceback (most recent call last):
File "/usr/bin/rocker", line 11, in <module>
load_entry_point('rocker==0.1.4', 'console_scripts', 'rocker')()
File "/usr/lib/python3/dist-packages/rocker/cli.py", line 58, in main
dig = DockerImageGenerator(active_extensions, args_dict, base_image)
File "/usr/lib/python3/dist-packages/rocker/core.py", line 77, in __init__
self.dockerfile = generate_dockerfile(active_extensions, self.cliargs, base_image)
File "/usr/lib/python3/dist-packages/rocker/core.py", line 180, in generate_dockerfile
dockerfile_str += el.get_preamble(args_dict) + '\n'
File "/usr/lib/python3/dist-packages/rocker/nvidia_extension.py", line 107, in get_preamble
return em.expand(preamble, self.get_environment_subs(cliargs))
File "/usr/lib/python3/dist-packages/rocker/nvidia_extension.py", line 92, in get_environment_subs
dist, ver, codename = detect_os(cliargs['base_image'])
TypeError: 'NoneType' object is not iterable
Example of the error that caused the issue:
docker run -it --rm -v /home/tfoote:/home/tfoote -v /run/user/1000/pulse:/run/user/1000/pulse --device /dev/snd -e PULSE_SERVER=unix:/run/user/1000/pulse/native -v /run/user/1000/pulse/native:/run/user/1000/pulse/native --group-add 29 -e DISPLAY -e TERM -e QT_X11_NO_MITSHM=1 -e XAUTHORITY=/tmp/.docker.xauth -v /tmp/.docker.xauth:/tmp/.docker.xauth -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/localtime:/etc/localtime:ro rocker_ddm_home_pulse_x11_user --entrypoint /bin/bash
/entrypoint.sh: 4: .: Can't open /workspace/drone_demo/install/setup.sh
Traceback (most recent call last):
File "/usr/bin/rocker", line 6, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
@_call_aside
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
ws.require(__requires__)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'docker' distribution was not found and is required by rocker
Right now you get a long traceback if you don't have permissions to build the docker container
$ rocker docker/hello-world
Plugins found: ['dev_helpers', 'home', 'nvidia', 'pulse', 'user']
Active extensions []
Writing dockerfile to /tmp/tmp93bgqbui/Dockerfile
vvvvvv
FROM docker/hello-world
^^^^^^
Building docker file with arguments: {'path': '/tmp/tmp93bgqbui', 'rm': True, 'decode': True, 'nocache': False, 'tag': 'rocker_docker/hello-world'}
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 601, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/usr/lib/python3/dist-packages/docker/transport/unixconn.py", line 33, in connect
sock.connect(self.unix_socket)
PermissionError: [Errno 13] Permission denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 440, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 357, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/lib/python3/dist-packages/six.py", line 692, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 601, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/usr/lib/python3/dist-packages/docker/transport/unixconn.py", line 33, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', PermissionError(13, 'Permission denied'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/rocker", line 11, in <module>
load_entry_point('rocker==0.1.3', 'console_scripts', 'rocker')()
File "/usr/lib/python3/dist-packages/rocker/cli.py", line 63, in main
exit_code = dig.build(**vars(args))
File "/usr/lib/python3/dist-packages/rocker/core.py", line 64, in build
for line in docker_client.build(**arguments):
File "/usr/lib/python3/dist-packages/docker/api/build.py", line 246, in build
timeout=timeout,
File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 46, in inner
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 185, in _post
return self.post(url, **self._set_request_timeout(kwargs))
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 567, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 520, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 630, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 490, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', PermissionError(13, 'Permission denied'))
For every invocation of something like rocker --x11 --nvidia .., it seems that rocker rebuilds the same Dockerfile with no caching, even if the base image/tag referred to in the command remains unchanged. This results in slow spin-up of containers as well as network access requirements, since users must wait for apt install steps to complete. I think rocker should only rebuild the child image used for spawning the eventual container when the child image/tag is out of sync with the parent image/tag provided. So, what is the current source of entropy that is breaking the docker build cache?
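One plausible direction (a sketch, not rocker's actual implementation) is to derive the child image tag from a hash of the base image name and the generated Dockerfile contents, so unchanged inputs always map to the same tag and docker's layer cache can be reused:

```python
import hashlib

def cached_tag(base_image, dockerfile_contents):
    # Hypothetical helper: hash the base image name plus the generated
    # Dockerfile text, so identical inputs always produce the same tag and
    # `docker build` can reuse its layer cache instead of rebuilding.
    digest = hashlib.sha256()
    digest.update(base_image.encode('utf-8'))
    digest.update(dockerfile_contents.encode('utf-8'))
    return 'rocker_' + digest.hexdigest()[:12]
```

With this scheme, any change to either the base image or an extension's snippet produces a new tag, while repeated invocations with identical arguments hit the cache.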
Users have run into issues on xenial, likely due to too low a driver version: osrf/car_demo#49 (comment)
If you don't have a new enough nvidia driver, you can get errors like:
/usr/bin/docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]
The docker-py library was renamed to docker in 2.0.0: https://github.com/docker/docker-py/releases/tag/2.0.0 . If both docker-py and docker are installed, then import docker picks up whichever module is found first on the path.
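A quick way to check which installation is winning (a hypothetical diagnostic, not part of rocker) is to ask Python where the docker module would be imported from:

```python
import importlib.util

def docker_module_origin():
    # Hypothetical diagnostic: report the file that `import docker` would
    # load, which reveals a stale docker-py install shadowing the renamed
    # docker package. Returns None if no module named docker is installed.
    spec = importlib.util.find_spec('docker')
    return spec.origin if spec else None
```

If the reported path points inside an old docker-py installation, uninstalling docker-py and reinstalling docker should resolve the conflict.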
It can be confusing when rocker appears to be doing nothing but is actually downloading the docker images necessary to run.
Getting hooks into the actual download might be a challenge, but if we detect and prefetch the base images beforehand we can provide feedback during that step and not have to worry about getting feedback from the build stage.
This will likely overlap with #26
Also make sure to cover intermediate/helper image fetching, such as the python image used for the os detector. It's 900 MB, which is bigger than I'd hoped.
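Detecting the images to prefetch could start from the generated Dockerfile itself; a minimal sketch (assuming the FROM lines are the only image references that matter) just collects every FROM image, including helper stages:

```python
import re

def base_images(dockerfile_text):
    # Hypothetical sketch: gather every FROM image in the (possibly
    # multi-stage) Dockerfile so each can be pre-pulled with visible
    # progress before the build starts.
    images = []
    for line in dockerfile_text.splitlines():
        match = re.match(r'\s*FROM\s+(\S+)', line, re.IGNORECASE)
        if match:
            images.append(match.group(1))
    return images
```

Each image in the returned list could then be pulled up front with progress output, covering both the user's base image and helper images like the os-detector one.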
I have installed docker rootless on a machine of our computer pool for students. We do not want the students to have root-like access, thus we avoid setting up the docker group.
When trying a simple rocker --nvidia nvidia/cuda:11.0-base nvidia-smi, the following error occurs:
rocker: error: DependencyMissing encountered: Docker Client failed to connect to docker daemon. Please verify that docker is installed and running. As well as that you have permission to access the docker daemon. This is usually by being a member of the docker group.
The error occurs when instantiating docker.APIClient(), which cannot find the rootless docker instance. However, docker.from_env().api works and could serve as a drop-in replacement without any other code changes. I will open a pull request when I'm done testing my container.
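The direction of that fix can be sketched without the docker library at all: honour DOCKER_HOST (which rootless docker sets) before falling back to the default socket, which is effectively what docker.from_env() does and a bare docker.APIClient() does not. The rootless socket path below is an assumption about a typical setup:

```python
import os

def docker_socket_url():
    # Hypothetical sketch: prefer DOCKER_HOST, as set by rootless docker,
    # then a per-user rootless socket, then the system-wide default.
    host = os.environ.get('DOCKER_HOST')
    if host:
        return host
    rootless = '/run/user/{}/docker.sock'.format(os.getuid())
    if os.path.exists(rootless):
        return 'unix://' + rootless
    return 'unix:///var/run/docker.sock'
```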
On the nvidia side, I also had to change /etc/nvidia-container-runtime/config.toml to allow device access:
[nvidia-container-cli]
no-cgroups = true
Otherwise programs that try to use the home directory will get permission denied errors.
Such as osrf/car_demo#56
Resources:
https://askubuntu.com/questions/152707/how-to-make-user-home-folder-after-account-creation
https://askubuntu.com/questions/335961/create-default-home-directory-for-existing-user-in-terminal
It looks like mkhomedir_helper might be available widely enough.
http://man7.org/linux/man-pages/man8/mkhomedir_helper.8.html
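One way this could surface in the generated Dockerfile (a hypothetical fragment, using mkhomedir_helper as suggested above; the exact useradd flags mirror the snippet rocker already emits) is to create the home directory right after the user:

```python
def home_dir_snippet(username, uid):
    # Hypothetical sketch: emit a Dockerfile fragment that creates the user
    # and then populates a home directory with mkhomedir_helper, so programs
    # touching $HOME do not hit permission-denied errors.
    return ('RUN useradd -U --uid {uid} -ms /bin/bash {user} \\\n'
            ' && mkhomedir_helper {user}\n').format(uid=uid, user=username)
```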
The auto-generated Dockerfile seems to encounter a build issue in the first stage of its multi-stage build.
$ rocker --nvidia --user --pull --pulse osrf/ros:crystal-desktop rviz2
Plugins found: ['dev_helpers', 'home', 'nvidia', 'pulse', 'user']
Active extensions ['nvidia', 'pulse', 'user']
Pulling image osrf/ros:crystal-desktop
b'{"status":"Pulling from osrf/ros","id":"crystal-desktop"}\r\n'
b'{"status":"Digest: sha256:62579d806386b2b3ff2ecf3924f0ba39b8d8422fb7979db42e469cfbfd26e19f"}\r\n'
b'{"status":"Status: Image is up to date for osrf/ros:crystal-desktop"}\r\n'
Writing dockerfile to /tmp/tmpg_lv8g3_/Dockerfile
vvvvvv
...
^^^^^^
Building docker file with arguments: {'path': '/tmp/tmpg_lv8g3_', 'rm': True, 'decode': True, 'nocache': False, 'tag': 'rocker_osrf/ros:crystal-desktop_nvidia_pulse_user'}
building > Step 1/16 : FROM nvidia/opengl:1.0-glvnd-devel-ubuntu18.04 as glvnd
building > ---> 04c47f164887
building > Step 2/16 : FROM osrf/ros:crystal-desktop
building > ---> c3d97d8c2f50
building > Step 3/16 : COPY --from=glvnd /usr/local/lib/x86_64-linux-gnu /usr/local/lib/x86_64-linux-gnu
Cannot run if build has not passed.
It would be nice if rocker could forward the build error to the printout, rather than requiring a manual build of the same Dockerfile to identify the exact error.
$ docker build -t temp /tmp/tmpg_lv8g3_/
Sending build context to Docker daemon 4.096kB
Step 1/16 : FROM nvidia/opengl:1.0-glvnd-devel-ubuntu18.04 as glvnd
---> 04c47f164887
Step 2/16 : FROM osrf/ros:crystal-desktop
---> c3d97d8c2f50
Step 3/16 : COPY --from=glvnd /usr/local/lib/x86_64-linux-gnu /usr/local/lib/x86_64-linux-gnu
COPY failed: stat /var/lib/docker/overlay2/471568f79754e1b366acbc4647bda7de3ccc8bde738d60023154c746cbcb85a7/merged/usr/local/lib/x86_64-linux-gnu: no such file or directory
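Forwarding the error could be a matter of scanning the build output stream for an error entry; a rough sketch (assuming the JSON-stream format that docker's build API emits) follows:

```python
import json

def first_build_error(stream_lines):
    # Hypothetical sketch: walk the docker build JSON stream and return the
    # first error entry, so rocker can print e.g. the failing COPY step
    # instead of only 'Cannot run if build has not passed.'
    for raw in stream_lines:
        entry = json.loads(raw)
        if 'error' in entry:
            return entry['error']
    return None
```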
# Preamble from extension [nvidia]
# Ubuntu 16.04 with nvidia-docker2 beta opengl support
FROM nvidia/opengl:1.0-glvnd-devel-ubuntu18.04 as glvnd
# Preamble from extension [pulse]
# Preamble from extension [user]
FROM osrf/ros:crystal-desktop
# Snippet from extension [nvidia]
# Open nvidia-docker2 GL support
COPY --from=glvnd /usr/local/lib/x86_64-linux-gnu /usr/local/lib/x86_64-linux-gnu
COPY --from=glvnd /usr/local/lib/i386-linux-gnu /usr/local/lib/i386-linux-gnu
COPY --from=glvnd /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu
COPY --from=glvnd /usr/lib/i386-linux-gnu /usr/lib/i386-linux-gnu
COPY --from=glvnd /usr/local/share/glvnd/egl_vendor.d/10_nvidia.json /usr/local/share/glvnd/egl_vendor.d/10_nvidia.json
# if the path is already present don't fail because of being unable to append
RUN ( echo '/usr/local/lib/x86_64-linux-gnu' >> /etc/ld.so.conf.d/glvnd.conf && ldconfig || grep -q /usr/local/lib/x86_64-linux-gnu /etc/ld.so.conf.d/glvnd.conf ) && \
( echo '/usr/local/lib/i386-linux-gnu' >> /etc/ld.so.conf.d/glvnd.conf && ldconfig || grep -q /usr/local/lib/i386-linux-gnu /etc/ld.so.conf.d/glvnd.conf )
ENV LD_LIBRARY_PATH /usr/local/lib/x86_64-linux-gnu:/usr/local/lib/i386-linux-gnu${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
ENV NVIDIA_VISIBLE_DEVICES ${NVIDIA_VISIBLE_DEVICES:-all}
ENV NVIDIA_DRIVER_CAPABILITIES ${NVIDIA_DRIVER_CAPABILITIES:+$NVIDIA_DRIVER_CAPABILITIES,}graphics
# Snippet from extension [pulse]
RUN mkdir -p /etc/pulse
RUN echo '\n\
# Connect to the hosts server using the mounted UNIX socket\n\
default-server = unix:/run/user/1000/pulse/native\n\
\n\
# Prevent a server running in the container\n\
autospawn = no\n\
daemon-binary = /bin/true\n\
\n\
# Prevent the use of shared memory\n\
enable-shm = false\n\
\n'\
> /etc/pulse/client.conf
# Snippet from extension [user]
# make sure sudo is installed to be able to give user sudo access in docker
RUN apt-get update \
&& apt-get install -y \
sudo \
&& apt-get clean
RUN useradd -U --uid 1000 -ms /bin/bash ubuntu \
&& echo "ubuntu:ubuntu" | chpasswd \
&& adduser ubuntu sudo \
&& echo "ubuntu ALL=NOPASSWD: ALL" >> /etc/sudoers.d/ubuntu
# Commands below run as the developer user
USER ubuntu
It seems that the console output proxy in empy breaks.
======================================================================
ERROR: test_nvidia_glmark2 (test_nvidia.NvidiaTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/test_venv/lib/python3.5/site-packages/empy-3.3.2-py3.5.egg/em.py", line 2686, in installProxy
sys.stdout._testProxy()
AttributeError: '_io.StringIO' object has no attribute '_testProxy'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tfoote/work/github/tfoote/rocker/test/test_nvidia.py", line 52, in test_nvidia_glmark2
dig = DockerImageGenerator(active_extensions, '', self.dockerfile_tag)
File "/home/tfoote/work/github/tfoote/rocker/src/rocker/core.py", line 38, in __init__
self.dockerfile = generate_dockerfile(active_extensions, self.cliargs, base_image)
File "/home/tfoote/work/github/tfoote/rocker/src/rocker/core.py", line 140, in generate_dockerfile
dockerfile_str += el.get_preamble(args_dict) + '\n'
File "/home/tfoote/work/github/tfoote/rocker/src/rocker/extensions.py", line 120, in get_preamble
return em.expand(preamble, self.get_environment_subs())
File "/tmp/test_venv/lib/python3.5/site-packages/empy-3.3.2-py3.5.egg/em.py", line 3028, in expand
globals=_globals)
File "/tmp/test_venv/lib/python3.5/site-packages/empy-3.3.2-py3.5.egg/em.py", line 2078, in __init__
self.installProxy()
File "/tmp/test_venv/lib/python3.5/site-packages/empy-3.3.2-py3.5.egg/em.py", line 2693, in installProxy
raise Error("interpreter stdout proxy lost")
em.Error: interpreter stdout proxy lost
Exception ignored in: <bound method Interpreter.__del__ of <empy pseudomodule/interpreter at 0x7fbf694cac50>>
Traceback (most recent call last):
File "/tmp/test_venv/lib/python3.5/site-packages/empy-3.3.2-py3.5.egg/em.py", line 2094, in __del__
self.shutdown()
File "/tmp/test_venv/lib/python3.5/site-packages/empy-3.3.2-py3.5.egg/em.py", line 2157, in shutdown
self.finalize()
File "/tmp/test_venv/lib/python3.5/site-packages/empy-3.3.2-py3.5.egg/em.py", line 2646, in finalize
self.push()
File "/tmp/test_venv/lib/python3.5/site-packages/empy-3.3.2-py3.5.egg/em.py", line 2200, in push
sys.stdout.push(self)
AttributeError: '_io.StringIO' object has no attribute 'push'
The short-term workaround is to just invoke nosetests with the -s option.
Currently testing some stuff with ROS 2 Foxy on Ubuntu 20.04. Looks like --nvidia isn't supported yet:
WARNING distro version 20.04 not in supported list by Nvidia supported versions ['16.04', '18.04']
Related: https://gitlab.com/nvidia/container-images/opengl/-/issues/7
Does anyone have experience testing/using this on macOS?
Hello, thanks for sharing this package!
I'm just concerned that people googling for it will most probably end up at:
https://github.com/grammarly/rocker
as that was a relatively famous (and very useful) package, now discontinued.
I thought I had this running at least once today with no changes, and then ran into some issues where now it is saying I need an absolute path:
docker command:
sudo rocker --nvidia --x11 --user --home --pull --pulse tfoote/drone_demo
Executing command:
docker run -it --rm -v /home/roboubuntu:/home/roboubuntu --gpus all -v /run/user/0/pulse:/run/user/0/pulse --device /dev/snd -e PULSE_SERVER=unix:None/pulse/native -v None/pulse/native:None/pulse/native --group-add 29 -e DISPLAY -e TERM -e QT_X11_NO_MITSHM=1 -e XAUTHORITY=/tmp/.docker.xauth -v /tmp/.docker.xauth:/tmp/.docker.xauth -v /tmp/.X11-unix:/tmp/.X11-unix -v /etc/localtime:/etc/localtime:ro 38a84f5027c9
/usr/bin/docker: Error response from daemon: invalid volume specification: 'None/pulse/native:None/pulse/native': invalid mount config for type "volume": invalid mount path: 'None/pulse/native' mount path must be absolute.
See '/usr/bin/docker run --help'.
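The literal None in the mount comes from formatting an unset value into the volume string; under sudo, XDG_RUNTIME_DIR is typically unset, which matches the failure above. A defensive sketch (hypothetical, not rocker's code) would resolve and validate the pulse socket directory before building the docker run arguments:

```python
import os

def pulse_socket_dir():
    # Hypothetical sketch: refuse to build a pulse volume mount when
    # XDG_RUNTIME_DIR is unset (common under sudo), rather than emitting
    # a bogus 'None/pulse/native' path that docker rejects.
    runtime_dir = os.environ.get('XDG_RUNTIME_DIR')
    if not runtime_dir or not os.path.isabs(runtime_dir):
        raise RuntimeError(
            'XDG_RUNTIME_DIR is not set; skipping the pulse socket mount')
    return os.path.join(runtime_dir, 'pulse')
```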
I ran rocker inside byobu and got the following traceback.
Immediately rerunning it did not cause the same error.
It appeared to be hung, but this may be because it was downloading a bigger image in the background, since it was a fresh machine. With the pull happening silently in the background, pexpect might time out due to no output from the process. If confirmed, this will be related to #86
Successfully built e02c380413c7
running, docker run -it --rm e02c380413c7
Traceback (most recent call last):
File "/home/ubuntu/venv/bin/rocker", line 11, in <module>
load_entry_point('rocker==0.2.3', 'console_scripts', 'rocker')()
File "/home/ubuntu/venv/lib/python3.8/site-packages/rocker-0.2.3-py3.8.egg/rocker/cli.py", line 72, in main
File "/home/ubuntu/venv/lib/python3.8/site-packages/rocker-0.2.3-py3.8.egg/rocker/core.py", line 249, in run
File "/home/ubuntu/novnc-rocker/novnc_rocker/turbovnc.py", line 19, in precondition_environment
detected_os = detect_os(cli_args['base_image'], print, nocache=cli_args.get('nocache', False))
File "/home/ubuntu/venv/lib/python3.8/site-packages/rocker-0.2.3-py3.8.egg/rocker/os_detector.py", line 65, in detect_os
File "/home/ubuntu/venv/lib/python3.8/site-packages/pexpect-4.8.0-py3.8.egg/pexpect/spawnbase.py", line 444, in read
self.expect(self.delimiter)
File "/home/ubuntu/venv/lib/python3.8/site-packages/pexpect-4.8.0-py3.8.egg/pexpect/spawnbase.py", line 343, in expect
return self.expect_list(compiled_pattern_list,
File "/home/ubuntu/venv/lib/python3.8/site-packages/pexpect-4.8.0-py3.8.egg/pexpect/spawnbase.py", line 372, in expect_list
return exp.expect_loop(timeout)
File "/home/ubuntu/venv/lib/python3.8/site-packages/pexpect-4.8.0-py3.8.egg/pexpect/expect.py", line 181, in expect_loop
return self.timeout(e)
File "/home/ubuntu/venv/lib/python3.8/site-packages/pexpect-4.8.0-py3.8.egg/pexpect/expect.py", line 144, in timeout
raise exc
pexpect.exceptions.TIMEOUT: Timeout exceeded.
<pexpect.pty_spawn.spawn object at 0x7fd297ef8b20>
command: /usr/bin/docker
args: ['/usr/bin/docker', 'run', '-it', '--rm', 'e02c380413c7']
buffer (last 100 chars): b''
before (last 100 chars): ''
after: <class 'pexpect.exceptions.TIMEOUT'>
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 46996
child_fd: 5
closed: False
timeout: 30
delimiter: <class 'pexpect.exceptions.EOF'>
logfile: None
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
searcher: searcher_re:
0: EOF
Example output: [6] Cannot open self /tmp/staticx-OdKefo/detect_os or archive /tmp/staticx-OdKefo/detect_os.pkg
There seems to be a recent regression in PyInstaller that's compounded with staticx.
I have not found a version of pyinstaller to pin to that will work. It might be related to needing to pin the bootloader too. pyinstaller/pyinstaller#2357 (comment)
This does specifically work with the pyinstaller output but not after being run through staticx
There's a related older ticket here too: JonathonReinhart/staticx#71
This fails in my binary installation from debs, but I cannot reproduce it in my workspace:
$ rocker --nvidia --x11 --user --home --pulse --nocache tfoote/drone_demo
tfoote@snowman5:~ Last: [130] (0s Seconds)
$ rocker --version
rocker 0.2.2
tfoote@snowman5:~ Last: [0] (0s Seconds)
$ which rocker
/usr/bin/rocker
There is an issue that --nocache doesn't pass through to the detector.
I feel that the --network arg should use the exact string passed to it to resolve which docker network to attach to, and not assume how the networks are named on the current system.
Line 37 in 8f0da1f
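Exact-match resolution can be sketched in a few lines (hypothetical, taking the set of existing network names as input):

```python
def resolve_network(requested, available_names):
    # Hypothetical sketch: attach to exactly the network the user named,
    # instead of guessing at system-specific naming conventions; fail with
    # the list of real names when there is no exact match.
    if requested in available_names:
        return requested
    raise ValueError('no docker network named {!r}; available: {}'.format(
        requested, sorted(available_names)))
```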
A clear user message for this would be helpful. It will need to be interleaved appropriately with the pull or implicit pull.
This will also avoid the plugins potentially trying to introspect a non-existent image. Currently, if nvidia is enabled it is the first thing to crash, but if we catch this earlier the extensions won't need to be robust to it.
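The earlier catch could be as simple as comparing the requested image against the locally known tags before any extension runs (a sketch; the message wording is illustrative):

```python
def missing_image_message(requested, local_tags):
    # Hypothetical sketch: detect an absent base image up front and return a
    # clear user-facing message, so extensions such as nvidia never try to
    # introspect an image that does not exist yet.
    if requested in local_tags:
        return None
    return ("Image '{}' not found locally; it must be pulled before "
            "extensions can inspect it.".format(requested))
```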