eclipse-leda / leda-distro

Eclipse Leda provides a Yocto-based build setup for SDV.EDGE components

Home Page: https://eclipse-leda.github.io/leda/

License: Apache License 2.0

Shell 62.22% Batchfile 1.03% Dockerfile 6.69% RobotFramework 21.19% Python 8.87%
automotive software-defined-vehicle yocto yocto-layer edge-computing embedded iot

leda-distro's People

Contributors: eriksven, mikehaller, stlachev, vasilvas99

leda-distro's Issues

Cannot stop container-management service

Describe the bug
I have run the latest 0.0.6 image for x86_64 and am not able to restart (nor stop) the container-management service in order to pick up some changes I made in some deployment manifest files.

To Reproduce
Steps to reproduce the behaviour:

  1. start up leda image in QEMU
  2. log in as root
  3. wait for systemctl status container-management to indicate that the service is up and running
  4. restart the service using systemctl restart container-management
  5. wait forever ...

Expected behaviour
container-management service stops and restarts after a few seconds

Leda Version (please complete the following information):

  • Version: 0.0.6 as downloaded from GitHub releases
  • Machine: qemux86_64
  • Connectivity: transparent internet access

How can I start a container from a private registry

I am using the version 0.0.6 leda x86_64 QEMU image, trying to start an additional container from a private container registry. I have created a /data/var/containers/manifests/my-container.json file:

{
    "container_id": "my-container",
    "container_name": "my-container",
    "image": {
        "name": "ghcr.io/my-repo/my-container:latest"
    },
    "host_config": {
        "devices": [],
        "network_mode": "bridge",
        "privileged": false,
        "restart_policy": {
            "maximum_retry_count": 0,
            "retry_timeout": 0,
            "type": "unless-stopped"
        },
        "runtime": "io.containerd.runc.v2",
        "extra_hosts": [
            "databroker:container_databroker-host"
        ],
        "port_mappings": [
        ],
        "log_config": {
            "driver_config": {
                "type": "json-file",
                "max_files": 2,
                "max_size": "1M",
                "root_dir": ""
            },
            "mode_config": {
                "mode": "blocking",
                "max_buffer_size": ""
            }
        },
        "resources": null
    },
    "io_config": {
        "open_stdin": false,
        "tty": false
    },
    "config": {
        "env": [
            "RUST_LOG=info",
            "KUKSA_DATA_BROKER_URI=http://databroker:55555"
        ],
        "cmd": []
    }
}

However, the repository on GitHub is private, so, not unexpectedly, starting the container fails with an error being logged to /var/log/container-management/container-management.log:

time="2023-04-12T13:15:35.697926488Z" level=warning msg="[container-management][ctrd_client_internal.go:44][pkg:github][func:com/eclipse-kanto/container-management/containerm/ctr] the default resolver by containerd will be used for image ghcr.io/my-repo/my-container:latest"
time="2023-04-12T13:15:37.511585085Z" level=error msg="[container-management][ctrd_client.go:94][pkg:github][func:com/eclipse-kanto/container-management/containerm/ctr] error while trying to get container image with ID = ghcr.io/my-repo/my-container:latest for container ID = my-container \n\t Error: failed to resolve reference "ghcr.io/my-repo/my-container:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized "

Is there any way to configure leda to use certain credentials for authenticating to a container registry?
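If Leda's container-management follows upstream Eclipse Kanto, registry credentials can be supplied in the container-management service configuration. The sketch below is an assumption based on the Kanto container-management documentation: the file location (commonly /etc/container-management/config.json) and the exact key names should be verified against the Kanto version shipped in your Leda release.

```
{
    "containers": {
        "registry_configurations": {
            "ghcr.io": {
                "credentials": {
                    "user_id": "<github-username>",
                    "password": "<github-personal-access-token>"
                }
            }
        }
    }
}
```

After changing the configuration, the container-management service presumably needs a restart for the credentials to take effect.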

Kanto cm containers not found in sdv-health

I have followed this document to run Eclipse Leda on QEMU: https://eclipse-leda.github.io/leda/docs/general-usage/running-qemu/. I have installed QEMU on my Linux machine, downloaded eclipse-leda-qemu-x86_64.tar.xz, and extracted it into my working directory. I have created a device ID in Azure IoT Hub as described in the document (https://eclipse-leda.github.io/leda/docs/device-provisioning/script-provisioning/). I can find the container JSON files in /data/var/containers/manifests, but sdv-health reports that the Kanto CM containers are not found.

growdisk on RPi4 with sfdisk instead of parted (signaling kernel)

We are now using sfdisk instead of parted. The last partition on the SD card grows to the end of the card, but we cannot signal the kernel to pick up the new size, so tools such as:
df -h
do not see it.
fdisk -l
works fine.
Upon reboot everything is OK and all tools show the correct size.
How can we signal the kernel to rescan for the new size?
echo 1 > /sys/block/$DEVICE/device/rescan didn't work.
raspberry-growdisk.sh
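One common way to ask the kernel to re-read an updated partition table without rebooting is partx --update from util-linux (blockdev --rereadpt is an alternative, but it fails while a partition on the device is mounted; the sysfs rescan node tried above is, as far as I know, specific to SCSI devices). This is a hedged sketch; the default device name /dev/mmcblk0 is an assumption for the RPi4 SD card:

```shell
#!/bin/sh
# Sketch: ask the kernel to re-read a resized last partition after sfdisk,
# without rebooting. /dev/mmcblk0 is an assumption (typical RPi4 SD card).
rescan() {
    dev="${1:-/dev/mmcblk0}"
    if [ -b "$dev" ]; then
        # partx -u updates the kernel's view of the partitions and, unlike
        # blockdev --rereadpt, also works while partitions are mounted.
        partx -u "$dev" && echo "kernel partition view updated for $dev"
    else
        echo "no such block device: $dev"
    fi
}

rescan "$@"
```

If the filesystem on the grown partition should also be enlarged, resize2fs would still be needed afterwards.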

No WiFi on raspberry4

Describe the bug
Leda image 0.1.0-M2 on a Raspberry4B does not connect to WiFi

To Reproduce

  1. follow the steps to write the image to an SD card as described in https://eclipse-leda.github.io/leda/docs/general-usage/raspberry-pi/
  2. Add a wpa_supplicant.conf file to the boot partition with the following content
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1
    country=DE
    
    network={
         scan_ssid=1
         ssid="my_ssid"
         psk="my_wpa2_pwd"
    }
    
  3. Boot Raspi from SD card
  4. Verify that Raspi has not connected to WiFi using router Web UI

Expected behaviour
Raspi should have connected to WiFi and should show up in router's Web UI

Screenshots / Logfiles

Leda Version (please complete the following information):

  • Version: 0.1.0-M2
  • Machine: raspberrypi4-64
  • Connectivity: local WiFi with transparent internet access

Home/End key navigation does not work with putty

The Leda image uses TERM=xterm, which causes issues with command-line editing (the Home and End keys do not work).
export TERM=linux fixes that issue.
As $HOME/.bashrc is not used, this should probably go into /etc/profile.
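A minimal sketch of such a snippet, assuming it would be installed under /etc/profile.d/ (whether Leda's shell sources /etc/profile.d is an assumption; the xterm-to-linux remapping is the workaround from the report):

```shell
# Hypothetical /etc/profile.d/99-term.sh: remap TERM so Home/End keys
# work for command-line editing in PuTTY sessions.
TERM=xterm                  # demo value for this sketch; normally set by the terminal
case "$TERM" in
    xterm) TERM=linux ;;    # workaround from the report
esac
export TERM
echo "TERM=$TERM"
```

Remapping only the xterm case avoids clobbering TERM for terminals that already work.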

Integrate Eclipse Kuksa.VAL - GPS Feeder

The GPS feeder of Kuksa provides GPS coordinates as Vehicle Signal in the Data Broker according to the Vehicle Signal Specification.
The GPS feeder uses a locally running gpsd, which needs to be preinstalled into the Leda distribution.

Default configuration will not work out of the box, as the configuration depends on the specific hardware attached to the device. On raspi, that may be a GPS connected via USB or serial. On qemu, we may want to use a GPS faker application to simulate the location.

Provisioning via USB sticks

Is your feature request related to a problem? Please describe.
It's cumbersome to do the device (pre-)provisioning: for copying device certificates into "anonymous" devices, network must be set up first and you need a separate computer, keyboards, displays etc.

If device provisioning information could be pre-provisioned using external storage, such as a USB stick, it would make life easier for hackathon setups.

Describe the solution you'd like

  • Device provisioning information is prepared by hackathon organizer upfront onto device-specific USB-stick
  • USB stick is attached to a device, device is booted and automatically performs the provisioning

Additional context
Supported provisioning information should include:

  • Network setup: one or more network configurations which get automatically applied. Should include Wifi SSID and credentials (non-device specific, so take care of salts problems) but also eth0 setups. Should automatically start up the network interfaces and reconnect them. May also include other network types, such as CAN-Bus or the Multicast-Setup required for Some/IP
  • Device certificates for cloud connectors. This may also include having to restart the respective containers, once the information is updated at runtime or after the device was already booted.
  • Additional containers: Airgap offline import of containers + the respective container descriptors.
  • Removal of existing containers: Sometimes, the default setup containers are out of date, or configured differently and need to be replaced. This may also be problematic when we use Desired State, as it's not yet specified in which order a container is deployed: name conflicts, conflicts with existing host port mappings etc.
  • Hardware configuration: For Raspberry Pi, that involves modifying the config.txt with specific dtoverlays, such as different CAN extensions

Additional Notes

  • There is another topic: To do multiple devices "at once", it would be very convenient to have all the devices up and running with standard Leda images and a zero-conf network communication setup. That way, a single "controller" could be set up with the device provisioning information. As long as the network can auto-connect (eth0 or unsecure wifi setups...), the controller may distribute the provisioning information to all Leda-devices in the same network and assign the specifics to each device, basically "reserving" and assigning specific DeviceIDs to specific devices (e.g. based on MAC). This does not work when the network needs pre-configuration before it works. Having Leda images connect to a pre-defined "controller" AP may be a solution though. The "controller" may offer the AP and could be just another bootable partition (or a script on the default image...)
  • Another idea is to use a "shared" USB stick and let the user choose which device to provision on the screen. Having to attach a display and keyboard is acceptable, although sometimes it is a bit cumbersome, since the Raspi may be integrated into a suitcase etc.

vc4-kms-v3d dtoverlay prevents standard Raspi 7" display from working

Describe the bug
In the current (0.0.6, 0.1.0-M1) builds, we have a config.txt dtoverlay entry with the vc4-kms-v3d graphics driver. This works for HDMI displays, but does not work with the standard Raspberry Pi 7" display, which attaches to the DSI port (flat cable).

Display:
https://www.raspberrypi.com/products/raspberry-pi-touch-display/

To Reproduce
Steps to reproduce the behaviour:

  1. Install non-HDMI display on Raspi
  2. Boot Leda 0.0.6 or 0.1.0-M1
  3. Display powers off after the kernel is loaded (the boot screen is still visible, of course)

Expected behaviour
Display stays powered on
Login shell appears

Leda Version (please complete the following information):

  • Version: 0.0.6, 0.1.0-M1
  • Machine: raspberrypi4-64

Workaround
Edit config.txt
Comment out or remove the dtoverlay line:

# COMMENT OUT OR REMOVE TO FIX DISPLAY
# dtoverlay=vc4-kms-v3d

Support Nvidia Jetson

For the Eclipse SDV Show-Cases, it would be great if we could run Leda as the default image for the Jetson Nano racer model cars (owned by Eclipse Foundation).

For the Eclipse community events, it would be a great hardware platform to demonstrate the SDV show cases. It's always more fun to see something moving than just having a virtual Docker Compose setup.

Task:

Raspi4 doesn't boot with release 0.0.3

Raspberry doesn't boot.
With leda-distro 0.0.3 the "Raspberry Pi 4 Model B 8GB" doesn't boot due to
"Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(179,2)"

To Reproduce

  1. download eclipse-leda-raspberrypi.tar.xz from https://github.com/eclipse-leda/leda-distro/releases/tag/0.0.3
  2. extract contained image file sdv-image-all-raspberrypi4-64.wic
  3. flash it on a 32GB-microSD card using Win32DiskImager
    6 partitions exist: boot.fat, grubenv.fat, rescue.img, root_a.img, root_b.img, data.img
    (The first two appear as new drives in Win10.)
  4. Insert card into Raspi (connected are a USB keyboard and a wireless USB mouse, no Ethernet cable) and switch it on.
    U-Boot starts and boots the kernel
    HDMI-Monitor shows 4 Raspi logos.
    The kernel logs via serial connection:
    ...
[    3.273058] No filesystem could mount root, tried: 
[    3.273067]  ext4
[    3.278038] 
[    3.281509] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(179,2)
[    3.290071] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.15.34-v8 #1
[    3.296434] Hardware name: Raspberry Pi 4 Model B Rev 1.5 (DT)
[    3.302349] Call trace:
[    3.304825]  dump_backtrace+0x0/0x1b0
[    3.308556]  show_stack+0x24/0x30
[    3.309068] usb 1-1.3: new full-speed USB device number 4 using xhci_hcd
[    3.311925]  dump_stack_lvl+0x8c/0xb8
[    3.322427]  dump_stack+0x18/0x34
[    3.325791]  panic+0x178/0x370
[    3.328889]  mount_block_root+0x224/0x240
[    3.332962]  mount_root+0x210/0x24c
[    3.336503]  prepare_namespace+0x13c/0x17c
[    3.340661]  kernel_init_freeable+0x290/0x2d4
[    3.345084]  kernel_init+0x30/0x140
[    3.348626]  ret_from_fork+0x10/0x20
[    3.352256] SMP: stopping secondary CPUs
[    3.356240] Kernel Offset: 0x2909000000 from 0xffffffc008000000
[    3.362243] PHYS_OFFSET: 0x0
[    3.365161] CPU features: 0x200004f1,00000846
[    3.369579] Memory Limit: none
[    3.372680] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(179,2) ]---

Expected behaviour
The Raspi boots whole Linux system, running Leda.

Screenshots / Logfiles
The log file can be requested.

Leda Version (please complete the following information):

  • Version: 0.0.3 (file date 5.12.2022, 14:43)
  • Machine: raspberrypi4-64
  • Connectivity: not connected to network

Additional context

  • Tried booting with and without connected USB mouse and keyboard -> boot failed
  • Modified flashed /boot/config.txt to not contain the last lines
    dtoverlay=vc4-kms-v3d
    dtoverlay=mcp2515-can0,oscillator=16000000,interrupt=25
    -> boot failed

Missing MOTD

In the previous builds, we had a dynamic MOTD which displays some system status information after login, e.g. disk usage, network information etc. With current build, no motd is being displayed.

Expected:
sdv-motd is displayed, eg after login (console and ssh)

How to contribute "SDV Use Cases" to Leda?

As of today, we have described how to contribute to Leda in the areas of documentation, BitBake recipes, and SDV core components (i.e. technical infrastructure).

What we did not yet define (and also don't have proper examples) is how to contribute SDV Use Cases, actual "functional" applications (or "business" applications).

In the Eclipse SDV working group, some members are discussing SDV Distros & Use Cases (see https://gitlab.eclipse.org/mhaller/sdv-distro-usecases), and it would be good if Eclipse Leda provided documentation on how to add use cases.

The first few examples, which should already be integrated into the Leda documentation and distro, are these:

  • Eclipse Velocitas: The official seat-adjuster and dog-mode examples
  • Eclipse BCX Hackathon: Driving Score Challenge, Hack The Truck, Dynamic Car Insurance, Passenger Welcome, Control Lights

Task of this issue: Describe how people can provide these use cases (from a Leda point of view)

  • either documentation only, but code would be much appreciated, for example in the form of Velocitas-template-based apps
  • adding these to the sdv-example-containers recipes for pre-deployment on the Quickstart image
  • checking if it would make sense to have a "sdv-usecases-image" which pre-deploys all of them, and the quickstart to be more like an "empty platform" image.

Replacing k9s

As long as we used k3s as a container orchestration / control plane, we used the k9s cli utility to quickly manage pods, view logs etc.

With the Kanto container management, there is no such convenient utility available, and containerd only has ctr / nerdctl to work with.

It would be nice to have a TUI to navigate through containers, starting, stopping and viewing logs in a quick way.

Documentation for desired state deployment

The current documentation is outdated, as it is based on k3s deployment descriptors:
https://eclipse-leda.github.io/leda/docs/device-provisioning/vehicle-update-manager/

It needs to be updated to the new desired state message format and provide an example deployment document as well:

The example deployment document should be provided as an example/template within the leda-distro as well, so it's easy to copy and adapt.

mosquitto_pub -t vehicleupdate/desiredstate -f my_deployment.json

my_deployment.json:

{
	"activityId": "my-app-poc-activity",
	"payload": {
		"domains": [
			{
				"id": "containers",
				"components": [
					{
						"id": "vehicle-update-manager",
						"version": "v0.0.2",
...

Partition sizes are not multiples of 4096 bytes for RAUC block-hash-index

Describe the bug
Partition sizes must be aligned to 4K. For faster installation of bundles with block hash indexes, RAUC needs partition sizes that are multiples of 4096 bytes.
BitBake has the variable IMAGE_ROOTFS_ALIGNMENT = "4" for this, which must be set for the minimal, full and rescue images.
But image sizes can differ and are not always divisible by 4096.

To Reproduce
Steps to reproduce the behaviour:

  1. Build distro for one of the machines, e.g. Qemu-arm64
  2. Run Qemu out of eclipse-leda-qemu-arm64.tar.xz.zip (Image-qemuarm64.bin, sdv-image-all-qemuarm64.wic.qcow2)
  3. In QEMU, install the rescue image from the same file with RAUC, which should succeed (same content and size as in the wic), e.g.
rauc install .../sdv-rauc-bundle-rescue-qemuarm64.raucb
  4. The journal shows a warning log message from RAUC that the size is not a multiple of 4K:
journalctl -u rauc
...
Updating /dev/vda3 with /run/rauc/bundle/sdv-image-rescue-qemuarm64.ext4
Continuing after adaptive mode error: failed to open target slot hash index for rescue.0: data file size (**167528448**) is not a multiple of 4096 bytes
...

Installation takes longer than expected.
  5. Compare the image size from the manifest

rauc info  .../sdv-rauc-bundle-rescue-qemuarm64.raucb
... 163258368   / 4096 = 39858.0

with partition size

fdisk -l
...
/dev/vda3   323584  650787  327204 159.8M Linux filesystem   (650787-323584+1)*512=**167528448**, /4096=40900**.5**
/dev/vda4   655360 1728575 1073216   524M Linux filesystem   partition for full image  ... =134152.0
/dev/vda5  1736704 2678527  941824 459.9M Linux filesystem   partition for minimal image  ... = 117728.0

In the wic file sdv-image-all-qemuarm64.wic.qcow2 you can directly check the partition sizes (7-Zip):

"2.rescue.img"    **167528448**       / 4096 =   40900**.5**  size not ok
"3.root_a.img"    549486592       / 4096 = 134152.0  partition size for full image ok
"4.root_b.img"    482213888      / 4096 = 117728.0   partition size for minimal image ok

The full-image and minimal-image sizes in the bundle manifests are OK, but they also differ from the partition sizes:

rauc info sdv-rauc-bundle-full.raucb
   size: 494252032 / 4096 = 120667.0  ok
rauc info sdv-rauc-bundle-minimal.raucb 
  size: 426606592 / 4096 = 104152.0  ok

In eclipse-leda-raspberrypi.tar.xz.zip in sdv-image-all-raspberrypi4-64.wic the partitions for full and minimal image violate the 4K-rule:

"2.rescue.img"    201457664      / 4096  = 49184.0
"3.root_a.img"    533480448       / 4096 = 130244.25 !!!
"4.root_b.img"    466212864       / 4096 = 113821.5   !!!

Expected behaviour
The sizes of the partitions for rescue.img, root_a.img and root_b.img must be multiples of 4096 bytes to guarantee fast block-hash-index RAUC bundle installations.
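The divisibility check done by hand above can be scripted; this is a small sketch using the sizes quoted in the report:

```shell
#!/bin/sh
# Check whether a partition/image size (in bytes) is a multiple of 4096,
# as required for fast RAUC block-hash-index installations.
is_4k_aligned() {
    if [ $(( $1 % 4096 )) -eq 0 ]; then
        echo "$1: OK (multiple of 4096)"
    else
        echo "$1: NOT OK (remainder $(( $1 % 4096 )))"
        return 1
    fi
}

is_4k_aligned 549486592          # root_a.img from the qemuarm64 wic
is_4k_aligned 167528448 || true  # rescue partition from the report (0.5 blocks over)
```

The same function could be run over the sizes reported by fdisk -l or rauc info to spot misaligned partitions before flashing.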

Screenshots / Logfiles

Leda Version (please complete the following information):

  • Version: Commit 62e53c5
  • Machine: qemuarm64, raspberrypi4-64
  • Connectivity: -

enp0s2 IP address is not assigned

Describe the bug
an IP address is not assigned to enp0s2. I am not even sure why it is named enp0s2; it should be eth0.

To Reproduce
Steps to reproduce the behavior:

  1. ./run-leda.sh
  2. ifconfig enp0s2
root@qemux86-64:~# ifconfig enp0s2
enp0s2    Link encap:Ethernet  HWaddr 52:54:00:12:34:02
          inet6 addr: fe80::5054:ff:fe12:3402/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:22 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3884 (3.7 KiB)  TX bytes:2418 (2.3 KiB)

Expected behaviour
an IP address should be assigned to eth0 automatically

Screenshots / Logfiles
no

Leda Version (please complete the following information):

  • Version: 2022, 0.0.4
  • Machine: qemux86_64
  • Connectivity: no

Additional context

$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.2 LTS"

SDV Container images

I am trying to build this from scratch for x86 QEMU.

I run into an issue when it comes to the SDV container images. I also tried to pull the source code for the incubation components, but it gives a 404.

Preinstall COVESA VSS and Tooling

The tooling to convert VSS vspec files into various formats and the VSS specification itself should be available on the quickstart image pre-installed.

It is easy to install later, if you have connectivity and git installed (which is not the case). However, if we want to prepare an environment where you can easily customize the vehicles data model, the databroker container would also be affected.

Hence, there are configurations, like the global VSS data model, which can be customized and are required by multiple software packages.

Request:

  • Install latest VSS spec in defined folder
  • Install VSS Tools such as vspec2x for converting
  • Think about convenience tooling to add a customized mapping more easily, as multiple steps are needed.

Example setup includes databroker with custom vss plus the dbc2val feeder which also requires the model.

It is also worth checking out if certain DBC related tools should be preinstalled.
Alternatively, we may use a vss-tools container, if such a thing exists.

Image for qemux86_64 contains potentially unnecessary grub tools

A filesystem search on sdv-image-all reveals some binaries which are larger than 1 MB and may be unnecessary:

/usr/sbin/grub-install 1588 KB
/usr/sbin/grub-sparc64-setup 1300 KB
/usr/sbin/grub-probe 1300 KB
/usr/sbin/grub-bios-setup 1296 KB
/usr/sbin/grub-macbless 1280 KB
/usr/bin/grub-mkrescue 1260 KB
/usr/bin/grub-fstest 1204 KB
...

To save disk space, each should be checked whether it is necessary for the function of SDV.EDGE stack and be removed if not needed.

A quick test shows these binaries which must not be removed:

  • grub-editenv - This is required by the RAUC Update Service

Leda hackathon User Interface (TUI and auto-login on Terminal 1)

During hackathons, it would be convenient to have auto-login on terminal 1 and start an SDV-specific user interface.

As we want to minimize the overhead, a full-blown GUI is not necessary, but maybe a small TUI. It should allow you to exit to a shell though. A TUI also has the advantage of no need for mouse.

The use cases for the TUI are:

  • Health: run and display sdv-health infos
  • Network: Scan and connect to wifi, WPA credentials, Set static IP addresses, Enabling or disabling network interfaces, Setting up CAN bus settings, Network and Online tests, Reset CAN-Bus
  • Hardware configuration: Modify config.txt and add/change/remove dtoverlay settings, Changing the CAN-hat configuration
  • Device Provisioning: import certificates from external storage (USB media), Network configuration files, Additional custom user files (eg for hackathon preparations)
  • Containers: Import airgap containers, Import container descriptors, Resetting/pruning containers and redeploy, Clearing container logs > check possible overlaps with kantui
  • Applications: Install pre-defined applications/usecases > See Eclipse SDV Reference Distribution Use Cases for examples.
  • Self Updates: Trigger update from local RAUC bundles or from remote locations
  • System maintenance: fsck (reset RO flag), reboot, shutdown, shell

Additional Notes

  • A functionality to "Reset to factory defaults and reboot" would be great. Unfortunately, we only have the sdv-image-minimal available as a "source" for the reset. So the "reset to factory defaults" would require to download and install the rauc update bundle for sdv-image-full and then perform the reboot into the second partition.

Network utilities for troubleshooting (Some/IP, Multicasting)

Is your feature request related to a problem? Please describe.
During a hackathon preparation, we wanted to set up Some/IP multicast routing.
Unfortunately, we at first didn't realize that we did not set up a specific routing, hence, the multicast messages went to the wrong network interface. We were looking at eth0, but the default route was wlan0.

We were also unsure whether the container with the Some/IP adapter was able to send/receive multicast packets to/from 239.0.0.1, since it uses veth (which is set up by kanto-cm).

For troubleshooting purposes, it would have been great to have network utilities available on the full quickstart image:

  • tcpdump
  • iperf or another tool to verify multicast works (ping 239.0.0.1 and check if there is a reply from someone)

Describe the solution you'd like

  • Pre-install tcpdump on the quickstart image (sdv-image-full)
  • More detailed documentation on the network setup, esp different setups and multicast routing for Some/IP

Additional context
It would also be great if sdv-health could verify that multicast networking is supported and the necessary Linux-level system settings are properly set (ip_forward, ip link set eth0 multicast on, ip route add ...)
It should include a test as well, e.g. sending multicast pings (Note that ICMP is different and needs to be enabled AS WELL) and checking for responses.

Reproducible build infrastructure

Right now, our build infrastructure is not properly set up. We're using the free GitHub runners and an external caching mechanism to be able to perform builds. While we're waiting for the Eclipse Webmaster to provide larger GitHub runners (needed for Yocto), we can use the time to set up automation scripts for the needed infrastructure.

This would also help other teams set up a Yocto-based build infrastructure and reproduce it on their end.

The rough idea is:

  • Have the basic infrastructure (GitHub Actions runners and/or plain BitBake workers) on cloud-like infrastructure (e.g. ephemeral VMs, Docker, Kubernetes etc.). Throw-away VMs are a good concept for getting reproducible builds and minimizing side effects.
  • Have the persistent infrastructure on separate, attachable storage (e.g. cloud or local storage attached to the VMs on demand). As BitBake/Yocto requires a large amount of disk and it's very expensive to transfer it all the time, it makes much more sense to put branch workspaces onto specific network-attached storage or VM disk storage and attach it to workers on demand.
  • Have the team infrastructure set up once, e.g. a remote sstate-cache, hash equivalence servers etc. per team/org unit, and have the Terraform setup as a public repository which can be reused by others.

A few requirements which come to my mind:

  • independent of the actual cloud provider, so some sort of abstraction should be possible (e.g. Azure vs. OpenShift)
  • a standardized way to set up the DevOps environment (e.g. something like Terraform would be more appropriate than bash scripts)
  • storage and build need to be supported long-term (e.g. 10-15 years), with the ability to update software components continuously at low maintenance cost

This is a conceptual discussion / poc.

Default mosquitto configuration

@stoyan-zoubev reported that the mosquitto.conf currently being used does not allow connections from local clients any more.

For the time being, we've been using a configuration which allows local, anonymous clients to connect to the local mosquitto. For a production environment, this has to be changed to enable authentication of course.

If this is not working any more, it may be due to the switch from k3s to kanto-cm, as kanto metalayer has dependency to mosquitto and brings its own configuration.

Original report:

Some changes needed to be made to the configuration of mosquitto in order to get it working. It would be better if these changes were applied directly when building the leda/owasys image.

Modify /etc/mosquitto/mosquitto.conf to contain the following lines:

listener 1883 0.0.0.0
allow_anonymous true

These settings allow connections from components inside containers to mosquitto broker, running as native service.
Actually the default mosquitto.conf contains no real entries, only commented lines, so you can directly replace its contents.
The mosquitto.conf that comes from meta-kanto contains /etc/mosquitto/suite-connector.d/suite-connector.conf. I do not think this is needed for eclipse-leda, it can be safely deleted (not included).

When mosquitto.conf is modified manually, changes are applied only after restarting the native service.
systemctl restart mosquitto.service

sdv-motd is confused by wlan interfaces

Describe the bug
The sdv-motd shows a confused output for WIFI/WLAN network devices, as it tries to parse output of another tool.

To Reproduce
Steps to reproduce the behaviour:

  1. Connect Raspi with leda image to Wifi only (no ethernet)
  2. The output of sdv-motd (e.g. after login) shows Device "wlan0 metric 10 " does not exist. For certain setups, it also shows Device "" does not exist
  3. No IP address is displayed, even though WiFi is connected and DHCP assigned an IP address. This is probably caused by a parsing mistake.

Expected behaviour
The parsing of the IP address works properly also for combinations of connected network interfaces, e.g. wifi only, wifi + eth, eth only. Maybe also needs testing with additional wifi usb connectors plugged into USB, as it may create additional confusion with "wlan1" etc.

Leda Version (please complete the following information):

  • Version: 0.1.0-M1, 0.0.6
  • Machine: raspberrypi4-64
  • Connectivity: wifi

Additional context
The Device "" does not exist error comes up even if ethernet is linked (connected to a switch, so there is no IP assigned).
ip a shows eth0 and wlan0 available (UP) but without IP addresses.

Example images for distributed system

Vehicle E/E architectures differ very much from each other. There may be one or multiple devices in the vehicle suitable for the installation of certain components.

One architecture may install all the SDV components on some kind of "high-performance compute module".

Another architecture may run the cloud connector and self-update components on a "connectivity" device, whereas all other components (e.g. databroker, central VUM etc.) are located on a main device.

As we don't know each and every architecture beforehand, we should at least have one example with a distributed architecture.

Missing/Outdated Kuksa databroker-cli

Describe the bug
databroker-cli seems to be missing on the current image

Expected behaviour
databroker-cli is updated and preinstalled
databroker-cli can connect to running databroker

Recipe to build self-update-agent as yocto-oci-container

We should be able to build containerized SDV core components also using Yocto.

  1. Implement a separate recipe for the component (similar to cyclonedds-example-container)
  2. Build recipe in GitHub Workflow
  3. Publish OCI container as GitHub Packages in the leda-distro repository.

We should be doing this for these components:

  • Self Update Agent
  • Vehicle Update Manager
  • Container Update Agent (future component)
  • Kuksa.VAL Data Broker
  • Kuksa.VAL Seat Service Example
  • Kuksa.VAL HVAC Example

Background:
The initial idea of SDV was to use containerized applications and official "Docker" containers for applications and services. However, in automotive, and with regard to the long-term support requirements, it's pretty obvious that embedded components need to be built by the company that puts the (vehicle) onto the market, as it has full responsibility for the whole system. As long as these teams are not separated, there is a high likelihood that a full system image, including the SDV containers, is built by the same teams, which tend to use the same CI/build infrastructure.

References:

Include a larger subset of the python 3.10 standard library in sdv-image-full

Is your feature request related to a problem? Please describe.
By default sdv-image-all installs the basic poky python package, which includes a very minimal subset of the python standard library. While nice for the final image size, this limits what scripts running natively (outside of containers with full python installations) on the image can do (e.g. random, pydoc, socket are missing).

Describe the solution you'd like
According to How to work with Python applications and modules in Yocto Project, the full Python standard library can be installed via the python3-modules package. It, however, depends on GPL-3.0-licensed packages, which is incompatible with the leda-distro license (Apache-2.0).

Describe alternatives you've considered

  • A very useful subset of the Python standard library that does not depend on GPL-3.0-licensed components can be installed through python3-misc (includes random and socket, for example)
  • A .bbappend which excludes GPL-3.0-licensed packages from the python3-modules package can be prepared. (best solution)

Additional context
None

`sdv-provision` produces an empty `Device ID`

Describe the bug
sdv-provision produces an empty Device ID and exits afterward.

root@qemux86-64:~# ping 8.8.8.8
64 bytes from 8.8.8.8: seq=1 ttl=115 time=2.368 ms
64 bytes from 8.8.8.8: seq=2 ttl=115 time=2.343 ms

root@qemux86-64:~# sdv-provision
Checking Eclipse Leda Device Provisioning configuration...
- Certificates directory exists
Checking Device ID
- Based on network device: enp0s2
- Device ID:
Checking whether either IdScope or ConnectionString is configured
- Id Scope file found: /data/var/certificates/azure.idscope
- Id Scope configured in cloudconnector deployment descriptor /data/var/containers/manifests/cloudconnector.json
Checking device certificates
- All device certificates are present
- Primary device certificate: /data/var/certificates/device.crt
- Primary device private key: /data/var/certificates/device.key
- Secondary device certificate: /data/var/certificates/device2.crt
- Secondary device private key: /data/var/certificates/device2.key
Fingerprints (add these to the Azure IoT Hub Device)
- Primary thumbprint: 7CBAA9FBD0AB6CDD82DBDAA482BC90827F4ED265
- Secondary thumbprint: 3F6FD1CFB8E7DB38CD768C19FA8D8F1E5590F198

root@qemux86-64:~# cat /etc/deviceid

To Reproduce
Steps to reproduce the behaviour:

  1. ./run-leda.sh
  2. sdv-provision

Expected behaviour
A valid Device ID shall be generated?

Screenshots / Logfiles
N/A

Leda Version (please complete the following information):

  • Version: 2022, 0.0.4
  • Machine: qemux86_64
  • Connectivity: corporate network

Additional context
@mikehaller I have followed this doc https://eclipse-leda.github.io/leda/docs/device-provisioning/script-provisioning/. Do I need to create an account here: https://azure.microsoft.com/free/iot
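For context on where the empty value could come from: sdv-provision derives the Device ID from the detected network interface (here enp0s2), so an empty ID suggests the MAC lookup or formatting step produced nothing. A minimal sketch of such a derivation — the function name `device_id_from_mac` and the `leda-` prefix are illustrative assumptions, not the actual sdv-provision logic:

```python
import re

def device_id_from_mac(mac: str, prefix: str = "leda-") -> str:
    """Derive a stable device ID from a MAC address string.

    Returns an empty string when the MAC is missing or malformed,
    which would explain the empty /etc/deviceid seen above.
    """
    mac = (mac or "").strip().lower()
    if not re.fullmatch(r"([0-9a-f]{2}:){5}[0-9a-f]{2}", mac):
        return ""
    return prefix + mac.replace(":", "")

print(device_id_from_mac("52:54:00:12:34:56"))  # a typical QEMU MAC
print(repr(device_id_from_mac("")))             # missing MAC -> empty ID
```

A robust script would fail loudly in the malformed case instead of silently writing an empty /etc/deviceid.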

Seat Service Example with CAN-Bus missing candump and incorrect container descriptor

Describe the bug
When using the Seat Service example container, it currently has the following bugs:

  • Container descriptor misconfiguration: To be able to use the CAN interface, the container must run on the host network and use privileged: true. The example container descriptor is not usable as-is. Workaround: the user has to modify the container descriptor and redeploy.
  • Container descriptor misconfiguration: Application environment variables are missing (SC_CAN=can0 and CAN=can0). They should be added to the container.
  • Missing candump inside of container: tools/ecu-reset.sh is using candump to receive CAN messages. candump is not available inside of the container and the script throws an error message. Workaround: Copy the ecu-reset.sh script from Kuksa.VAL Seat Service Example repository, modify the shebang to use /bin/sh instead of /bin/bash and run it on the Leda host system.

It may be an idea to have two separate seatservice.json files available (one enabled, the other disabled), so that a user can easily switch between the simulated VCAN and the physical CAN0.

Leda Version (please complete the following information):

  • Version: 0.0.6
  • Machine: raspberrypi4-64

RAUC can not install in stream-mode on Raspberry Pi

Describe the bug
RAUC fails if a local bundle or a bundle from a file server in verity format shall be installed.
The kernel config to enable this is missing from the kernel build, even though the config file is contained in the layer.

To Reproduce
Preparation: Flash wic file on a Raspi4 from eclipse-leda-raspberrypi.tar.xz.zip and boot the device

Local bundle:

  1. Copy a Rauc bundle in verity-format directly on Raspi. This can be directly one of the bundles in eclipse-leda-raspberrypi.tar.xz.zip, eg. sdv-rauc-bundle-rescue-raspberrypi4-64.raucb
  2. Getting info out of bundle succeeds when you call
    rauc info sdv-rauc-bundle-rescue-raspberrypi4-64.raucb
  3. Installation of local bundle fails:
    rauc install sdv-rauc-bundle-rescue-raspberrypi4-64.raucb

Bundle from file server:

  1. Connect Raspi to Internet
  2. Copy a Rauc bundle in verity-format on a file server. This can be directly one of the bundles in eclipse-leda-raspberrypi.tar.xz.zip, or an already existing one, e.g.: https://olavherbst.de/bin/sdv-sua/sdv-rauc-bundle-rescue-raspberrypi4-64-20230131180730.raucb
  3. Getting info out of bundle succeeds when you call
    rauc info https://olavherbst.de/bin/sdv-sua/sdv-rauc-bundle-rescue-raspberrypi4-64-20230131180730.raucb
  4. Installing the streamed bundle fails:
    rauc install https://olavherbst.de/bin/sdv-sua/sdv-rauc-bundle-rescue-raspberrypi4-64-20230131180730.raucb

Expected behaviour
Installation of bundles in verity format, locally or streamed from a file server, should succeed like on qemuarm64.
The kernel should be built with the config files which are already prepared for this streaming feature in
meta-leda/meta-leda-bsp/dynamic-layers/raspberry-pi/:
linux-raspberrypi_%.bbappend
files/leda-bsp-rauc.cfg
The config file must be appended to the linux-raspberrypi*.bb recipe, or be moved so that it is found and included according to
meta-leda/meta-leda-bsp/conf/layer.conf.
(For qemux86_64 this file is included in the kernel config: meta-rauc/recipes-kernel/linux/linux-yocto/rauc.cfg)

Screenshots / Logfiles
Installation of local bundle:
# rauc install sdv-rauc-bundle-rescue-raspberrypi4-64.raucb
... 100% Installing failed.
LastError: Failed mounting bundle: Failed to load dm table: Invalid argument
Installing '/media/sdv-sua/sdv-rauc-bundle-rescue-raspberrypi4-64.raucb' failed
Stream-installation from file server:
# rauc install https://olavherbst.de/bin/sdv-sua/sdv-rauc-bundle-rescue-raspberrypi4-64-20230131180730.raucb
... 100% Installing failed.
LastError: Failed mounting bundle: failed to resolve 'nbd' netlink family - BLK_DEV_NBD not enabled in kernel?
Installing 'https://olavherbst.de/bin/sdv-sua/sdv-rauc-bundle-rescue-raspberrypi4-64-20230131180730.raucb' failed

Leda Version:
Version: Commit 62e53c5
Machine: raspberrypi4-64
Connectivity: -

Hackathon feedback: use of Raspi camera

During a hackathon, some teams wanted to use more from the Raspberry Pi ecosystem, such as the Raspberry Pi camera, using OpenCV for gesture recognition.

Unfortunately, the Leda Quickstart image is tailored in a way that doesn't let the user simply use the Raspberry Pi ecosystem: a GUI is missing, the drivers are missing, the native Python installation is stripped down, apt-get is missing, etc.

All that would have to be containerized before it can be deployed to Leda - which is our goal anyway, as we want to stay "as near as possible" to how it would be on a production-series vehicle. It is very unlikely to have "apt-get" & co available on a production car.

Nonetheless, for hackathons and POCs, it would be great if Leda can deploy a "Raspberry Pi Tools" container, so that the ecosystem can be easily used: Hook up the Raspberry Pi camera, deploy the "Raspi Tools" container and the camera is up and running, feeding the data to the clients.

Notes:

  • [Getting started with PiCamera](https://projects.raspberrypi.org/en/projects/getting-started-with-picamera)
  • libcamera-hello
  • leda-raspi-camera-container should be able to: take still picture on request, record video sequence on request, start/stop sending video stream to multiple clients (via CycloneDDS, Iceoryx, ROS, ...?)

Report and document size of SDV-EDGE stack disk usage

Our goal is to have the complete minimal SDV.EDGE stack to use less than 100MiB of disk space.

It would be great to display this somewhere (github repo badge, documentation website, ...) and keep track of it over time or over builds and releases.

There are various input factors on how to exactly calculate that number, as it depends on the point of view whether to include certain components or not. In general, we could start with:

  1. Exclude the size of the core OS (Linux Kernel, modules, general drivers, general system services)
  2. Include the SDV.EDGE stack components and its direct dependencies: containerd, mosquitto, kanto-cm, rauc and the required SDV containers (sua, vum, cua, databroker)

The first can be calculated by defining something like sdv-image-empty and not installing any SDV packagegroups. The downside of this approach is that we have to build this image even though we're not using it, maybe a couple hundred MB extra for the build.

The second is the "used disk" stat of the sdv-image-minimal.rootfs.ext3 file. As we statically define the size of the rootfs partition in the WKS files, we cannot just take the file size. We would have to check the "used disk space" at runtime, I guess; otherwise we would count the empty disk space at the end of the partition.

Other approaches like summing up individual binary sizes could also work, however they're prone to human errors. If we forget to add a file to the list, it won't get counted, so I tend to opt for a more automated approach.
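The "used disk space" approach can be automated by walking a mounted rootfs and summing file sizes. A minimal sketch (the mount point is a placeholder; for a real measurement the image would first be loop-mounted):

```python
import os

def used_bytes(root: str) -> int:
    """Sum the apparent sizes of all regular files below root,
    skipping symlinks so targets are not counted twice."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                total += os.path.getsize(path)
    return total

# Example (hypothetical mount point of sdv-image-minimal.rootfs.ext3):
# print(used_bytes("/mnt/sdv-minimal") / (1024 * 1024), "MiB")
```

Running this against both an "empty" and the minimal image and taking the difference would yield the stack size without manual file lists.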

Use Oniro's compliance toolchain infrastructure

The Eclipse Oniro team is currently migrating their Compliance Toolchain to Eclipse Foundation infrastructure.

The compliance toolchain supports Yocto-based build processes, which would be a really good fit to use it for Leda-Distro.
We're currently trying to use ORT (OSS Review Toolkit), which does not really support BitBake as a package manager.

Task: Once Oniro's infrastructure is up and running on EF, we may need to add a build workflow on GitLab for the integration into the Eclipse IP process.

References:

Container with Framebuffer overlay for Leda / SDV / VSS events

Is your feature request related to a problem? Please describe.
During hackathons, when a display and keyboard are attached to the Raspi and the user is on the shell, they need to switch back and forth between tools (kantui, databroker-cli, container logs) to see whether something is working or not.

It would be great to have a way to display something on the screen while the user is on the terminal/shell, like an overlay.
It should not obstruct the view too much and should still give the user access to and control over the shell.

Describe the solution you'd like

  • A tiny application running inside, which listens for Leda / SDV events, such as "Container being deployed" or "Vehicle Signal has changed" or "Application registered datapoints" or "OTA Desired State request received" and displays them in the upper right corner of the screen.
  • Maybe another tiny application container, which displays values of VSS signals (numeric, graphical)
  • A container with access to the framebuffer device may be sufficient
  • A native text user interface (like kantui) which shows Kuksa.VAL Databroker information in "realtime", maybe with a VSS browser and query editor. Like a more advanced version of the databroker-cli

Describe alternatives you've considered

  • A NodeRed dashboard which displays some SDV information, but that requires to run X server + window manager + browser in kiosk mode etc. Sounds too heavyweight.

Additional notes
Giving a container access to the display, or providing a GUI, would be an interesting experiment anyway.

Script-based Leda-Setup

Is your feature request related to a problem? Please describe.
I want to set up the components provided by leda-distro on an existing (but likely compatible) operating system (e.g. Ubuntu Server) instead of replacing the complete OS, which might have specific configurations and other components.

Describe the solution you'd like
A 'provisioning script' which installs the 'leda-stack' on a supported operating system (downloading the needed base components, setting up the needed configuration, setting up services).

Describe alternatives you've considered

  • Using Docker images of Leda on an existing operating system, but this comes with additional overhead, and Docker images for architectures like arm64 are currently not available

Additional context
Such a script would decouple the dependency on a very specific operating system, allow a more integrative approach for existing systems, and could also speed up the initial setup process (no flashing needed --> only execution of the install script).
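A first step of such an install script would be checking which stack components are already present on the host. A minimal prerequisite-check sketch — the component list is taken from the stack descriptions elsewhere in this repo and is an assumption for illustration:

```python
import shutil

# Commands the install script would need to find or install; the exact
# list is an assumption based on the documented SDV.EDGE core services.
REQUIRED = ["containerd", "mosquitto", "kanto-cm", "rauc"]

def missing_commands(commands=REQUIRED):
    """Return the subset of commands not found on PATH."""
    return [c for c in commands if shutil.which(c) is None]

if __name__ == "__main__":
    missing = missing_commands()
    if missing:
        print("Would install:", ", ".join(missing))
    else:
        print("All Leda prerequisites present")
```

From there, the script would branch per package manager (apt, dnf, ...) to fetch what is missing.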

multiple occurrences of sudo in run-leda.sh

Describe the bug
There are multiple occurrences of sudo in run-leda.sh which are undocumented. It is not transparent what this script does (or could do) to the machine after acquiring sudo privileges.

To Reproduce
Steps to reproduce the behaviour:

$ grep sudo ./run-leda.sh
TUNCTL="sudo tunctl"
IFCONFIG="sudo ip"
IPTABLES="sudo iptables"
    sudo bash -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo bash -c "echo 1 > /proc/sys/net/ipv4/conf/$TAP/proxy_arp"
        echo "Do you wish to install $package using sudo?"
                    sudo apt-get install -y $package;
sudo qemu-system-x86_64 \

Expected behaviour
no sudo commands

Screenshots / Logfiles
If applicable, add screenshots to help explain your problem.

Leda Version (please complete the following information):

  • Version: 2022, 0.0.4
  • Machine: qemux86_64
  • Connectivity: no connectivity

Additional context
no

Image contains unused Kanto and Hawkbit components

The sdv-image-full contains unused components, which are not used in the context of SDV.

The following Kanto components should be removed from all images (or at least from the minimal image)

  • file-backup
  • file-upload
  • software-update
  • suite-connector
  • system-metrics

The following other components should be removed from all images:

  • HawkBit client for Rauc

The following components should not be installed on the sdv-image-minimal:

  • /usr/bin/containerd-ctr (Command line utility, should only be on sdv-image-full)
  • skopeo (Command line utility, should only be on sdv-image-full)

sdv-rauc-bundle missing VERSION_ID

Describe the bug
The rauc bundle information is missing the version identifier, instead the value ${VERSION_ID} is seen in the bundle metadata.

To Reproduce
Steps to reproduce the behaviour:

$ rauc --cert=../../../../../examples/example-ca/development-1.cert.pem --key=../../../../../examples/example-ca/private/development-1.key.pem --keyring=../../../../../examples/example-ca/ca.cert.pem info sdv-rauc-bundle-qemux86-64.raucb 
rauc-Message: 14:10:52.375: Failed to resolve realpath for '/dev/disk/by-uuid/c88d12c3-41c9-4695-b552-e0f40acc7dfe'
rauc-Message: 14:10:52.375: Reading bundle: /workspaces/leda-distro-fork/build/tmp/deploy/images/qemux86-64/sdv-rauc-bundle-qemux86-64.raucb
rauc-Message: 14:10:52.405: Verifying bundle signature... 
rauc-Message: 14:10:52.713: Verified detached signature by 'O = Test Org, CN = Test Org Development-1'
Compatible:     'Eclipse Leda'
Version:        '${VERSION_ID}'
Description:    'sdv-rauc-bundle version 1.0-r0'
Build:          '20230119140324'
Hooks:          ''
Bundle Format:  plain

Expected behaviour
The rauc bundle metadata should show the leda version as Version identifier, e.g. 0.0.3-40-gaeb06a3

It seems that the sdv-rauc-bundle.bb recipe is using a variable (VERSION_ID) which is not properly populated anywhere.
In the past, git was used to populate the value, which didn't work properly, so this needs more investigation.

git submodules are not available

The meta-sdv submodule referenced in .gitmodules cannot be found.

$ git clone https://github.com/eclipse-leda/leda-distro.git
$ cd leda-distro
$ git submodule init
$ git submodule update

After executing git submodule update, no git submodules can be found (e.g. poky, meta-virtualization, ...). It looks like the git submodule directories were not committed and pushed.

SDV Example Use Case: Vehicle wallet for automated payments

During a hackathon event, another idea of an SDV example use case came up: a vehicle wallet service for automated payments.

Real world use cases:

  • Automated payment of parking lot fees, road tolls or energy charging stations
  • Automated identification (authentication, authorization) of vehicle and driver, e.g. for closed, proprietary parking areas

Eclipse Leda / Hackathon show case:

  • To show a working Vehicle-to-X communication example with support for actual hardware, so that it can be shown / used physically during hackathons

Involved technologies and components

  • Distributed Ledger (DLT), IOTA (see IOTA Blog)
  • Implementation of the communication protocol between vehicle and point of sale, such as [ISO 15118](https://en.wikipedia.org/wiki/ISO_15118) for charging stations
  • A Vehicle Service encapsulating the service functionalities, exposing a northbound API for interaction with GUI apps or cloud services.
  • Hardware: a cheap NFC extension for the Raspberry Pi to simulate the V2X communication

sdv-image-minimal missing SDV edge components

Describe the bug
The sdv-image-minimal image should contain the minimal SDV.EDGE core components. Compared to sdv-image-full, which contains additional developer tooling and convenience tools, the minimal image should showcase the minimum size for a "production" setup which still offers all of the features on a service level (e.g. OTA updates, cloud-connector, self-update-agent, etc.)

To Reproduce
Steps to reproduce the behaviour:

  1. Boot the second partition "SDV Minimal"
  2. Check sdv-health or manually like systemctl status kanto-cm

Expected behaviour

  • Required services are installed and running
  • sdv-health shows all the required services and required containers as up and running
    • Service: containerd
    • Service: mosquitto
    • Service: kanto-cm
    • Service: rauc
    • cloud-connector
  • Optional services and optional containers are shown as WARNING and are not installed on the image (to save disk space on rootfs)
  • Note: actually, sdv-health should not even be installed on the rootfs of sdv-image-minimal
  • Note: sshd is debatable; I suggest keeping it installed on the minimal image for troubleshooting purposes for now.
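The expected sdv-health behaviour above boils down to a classification step over service states (as reported by e.g. systemctl is-active). A minimal sketch — the service lists and the `summarize` helper are illustrative, not the actual sdv-health implementation:

```python
# Required vs. optional services, per the expected behaviour above.
REQUIRED = ("containerd", "mosquitto", "kanto-cm", "rauc")
OPTIONAL = ("sshd",)

def summarize(states: dict) -> dict:
    """Map each service to OK / FAILED / WARNING: required services must
    be active, optional ones only warn when absent or inactive."""
    result = {}
    for svc in REQUIRED:
        result[svc] = "OK" if states.get(svc) == "active" else "FAILED"
    for svc in OPTIONAL:
        result[svc] = "OK" if states.get(svc) == "active" else "WARNING"
    return result

print(summarize({"containerd": "active", "mosquitto": "active",
                 "kanto-cm": "active", "rauc": "active"}))
```

On a device, the states dict would be filled by querying systemd for each service.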

Leda Version (please complete the following information):

  • Version: 2022 0.0.3
  • Machine: qemux86_64

Additional context
Add any other context about the problem here.

Problem with manifest files in 0.0.5 Pi Image

Describe the bug
I noticed that the databroker and feeder containers are not visible in kantui, so I tried to deploy them. That gives me a "wrong json in *" message for all files in manifests. A bit strange, but all files in manifests_dev are deployed correctly.

The first difference that I noticed was:

feedercan

    "container_id": "feedercan",
    "container_name": "feedercan",

seatservice (which works):

    "id": "seatservice-example",
    "name": "seatservice-example",

To Reproduce
Steps to reproduce the behaviour:

  1. Check whether the containers are visible in kantui
  2. kanto-auto-deployer /data/var/containers/manifests

Expected behaviour
I would expect that the containers in manifests are deployed on the first start.

Screenshots / Logfiles

root@ledapi4-64:/data/var/containers# kanto-auto-deployer manifests/
Reading manifests from [manifests/]
Wrong json in [manifests/databroker.json]
Wrong json in [manifests/feedercan.json]
Wrong json in [manifests/hvac.json]
Wrong json in [manifests/sua.json]
Wrong json in [manifests/vum.json]

Leda Version (please complete the following information):

  • Version: 0.0.5
  • Machine: raspberrypi4-64
  • Connectivity: transparent internet access

NodeRed container and example flows

A pre-defined container manifest for deploying Node-Red (with Node-Red Dashboard UI) and possibly some pre-defined action nodes to interact with MQTT / SDV components.

For hackathons and development reasons, it would be great to have a way to quickly interact with vehicle applications without having to go through a backend. Directly sending MQTT messages, e.g. to the self-update-agent, or directly invoking gRPC endpoints from kuksa-databroker through a Node-Red flow would increase productivity a lot.

Examples:

  • Seat Adjuster: Use a Node-Red flow + dashboard buttons to trigger a seat position adjustment (via the seatservice-example)
  • HVAC Example: Use a Node-Red flow + dashboard buttons to enable/disable HVAC service
  • Ambient Light Service: use Node-Red flow + dashboard buttons to simulate human driver approaches vehicle
  • "Local triggering" of OTA Updates: Use a Node-Red flow + dashboard buttons to trigger the deployment of certain use cases (fleet management, hackathon setup, driving score, ...)

First automated smoke test using Docker setup

There is currently no automated integration test setup in the Leda Distro build workflow.

By running qemu and the smoketests (in tests/), we can run some simple first tests whether the system boots up and the services are started.

By using the docker-compose setup, we can also run robot tests in a system-test, which includes the download of update bundles from a "remote" location. See https://github.com/eclipse-leda/leda-distro/tree/main/resources/docker
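The most basic smoke test is checking that a forwarded guest port becomes reachable after boot. A minimal sketch — host and port are placeholders, assuming QEMU forwards the guest's SSH port to the host:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: poll the (assumed) forwarded SSH port of the QEMU guest
# until the system has booted:
# print(port_open("127.0.0.1", 2222))
```

The Robot Framework suites in tests/ could build on such a check as a gating step before running service-level assertions.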
