Repository for Vehicle Service Related implementations for Eclipse SDV

License: Apache License 2.0

kuksa.val.services's Introduction

KUKSA.VAL.services

Use of this repository is deprecated

Parts of this repository have been migrated to the eclipse-kuksa GitHub organization as described below. This repository is planned to be archived in the long term. Note that not everything in this repository has been migrated; some parts are considered obsolete and are not planned for migration. If you are missing something, please open an issue here or in the new repositories.

Service New Location
HVAC Service https://github.com/eclipse-kuksa/kuksa-incubation/hvac_service
Seat Service https://github.com/eclipse-kuksa/kuksa-incubation/seat_service
Mock Service https://github.com/eclipse-kuksa/kuksa-mock-provider

Content

Overview

This repository provides a set of example vehicle services showing how to define and implement these important pieces of the Eclipse KUKSA Vehicle Abstraction Layer (VAL). KUKSA.val offers a Vehicle API, which is an abstraction of vehicle data and functions to be used by Vehicle Apps. Vehicle data is provided in the form of a data model, which is accessible via the KUKSA.val Databroker. Vehicle functions are made available by a set of so-called vehicle services (short: vservices).

You'll find a more detailed overview here.

This repository contains examples of vservices and their implementations to show how a Vehicle API and the underlying abstraction layer could be realized. It currently consists of the HVAC service, the seat service, and the mock service (see the migration table above for their new locations).

More elaborate or completely different implementations are the target of "real world grown" projects.

Contribution

For contribution guidelines, see CONTRIBUTING.md.

If you want to define and implement your own vehicle services, there are two guidelines/how-tos available:

Build Seat Service Containers

👷‍♀️ 🚧 This section may be a bit outdated. So, please take care! 🚧 👷‍♂️

NOTE: The following steps were tested on Ubuntu 20.04 on x86_64. Building on an aarch64 host is not supported. For other options, check the Seat Service README.md.

  • From the terminal, change into the seat_service directory:

    cd seat_service
  • From inside the seat_service directory, build the binaries and pack them into a tar.gz archive:

    ./build-release.sh x86_64 --pack
    
    # Use the following command for aarch64
    ./build-release.sh aarch64 --pack
  • To build the image, execute the following commands from the repository root directory as the build context:

    docker build -f seat_service/Dockerfile -t seat_service:<tag> .
    
    # Use the following command if buildplatform support is required
    DOCKER_BUILDKIT=1 docker build -f seat_service/Dockerfile -t seat_service:<tag> .

The image creation may take around 2 minutes.
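
If an arm64 image is needed, the cross-built binaries from ./build-release.sh aarch64 --pack can be combined with Docker's buildx builder instead of the plain docker build call above. This is only a sketch, not an officially documented workflow for this repository; it assumes the Dockerfile works with --platform selection:

# Assumption: aarch64 binaries were already packed via ./build-release.sh aarch64 --pack
docker buildx build --platform linux/arm64 \
    -f seat_service/Dockerfile -t seat_service:<tag> --load .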

Running Seat Service / Data Broker Containers

To run the containers directly, the following commands can be used:

  1. Seat Service container

    By default the container executes the ./val_start.sh script, which sets default environment variables for the seat service. It needs the CAN environment variable, either with the special value cansim (to use simulated SocketCAN calls) or with any valid CAN device available within the container.

    # executes ./val_start.sh
    docker run --rm -it -p 50051:50051/tcp seat-service

    To run any specific command in the container, just append your command (e.g. bash) at the end.

    docker run --rm -it -p 50051:50051/tcp seat-service <command>
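
    The CAN variable mentioned above can also be overridden when starting the container, for example to use a physical CAN interface instead of the simulated one (a sketch; the interface name can0 is only an assumption about the host setup):

    # Override the CAN backend; use cansim (default) or a real CAN device such as can0
    docker run --rm -it -p 50051:50051/tcp -e CAN=can0 seat-service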

To access the data broker from the seat service container, there are several ways of running the containers:

  1. The simplest way to run the containers is to sacrifice the isolation a bit and run all the containers in the host's network namespace with docker run --network host

    #By default the container will execute the ./vehicle-data-broker command as entrypoint.
    docker run --rm -it --network host -e 'RUST_LOG=info,vehicle_data_broker=debug' databroker
    #By default the container will execute the ./val_start.sh command as entrypoint
    docker run --rm -it --network host seat-service
  2. A more subtle way is to share a single network namespace between multiple containers: start a sandbox container that does nothing but sleep, and reuse the network namespace of this existing container:

    #Run sandbox container
    docker run -d --rm --name sandbox -p 55555:55555 alpine sleep infinity
    #Run databroker container
    docker run --rm -it --network container:sandbox -e HOST=0.0.0.0 -e PORT=55555 databroker
    #Run seat-service container
    docker run --rm -it --network container:sandbox -e HOST=0.0.0.0 -e PORT=55555 -e PORT=50051  seat-service
  3. Another option is to use <container-name>:<port> and bind to 0.0.0.0 inside containers
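
    A minimal sketch of this option, assuming the databroker binds to 0.0.0.0:55555 and the seat service reads the broker address from BROKER_ADDR (as in the environment examples elsewhere on this page):

    # Create a user-defined bridge network so containers can reach each other by name
    docker network create kuksa
    # Run the databroker, reachable as "databroker" inside that network
    docker run -d --rm --name databroker --network kuksa -e HOST=0.0.0.0 -e PORT=55555 databroker
    # Point the seat service at the databroker by container name
    docker run --rm -it --network kuksa -e BROKER_ADDR=databroker:55555 seat-service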

KUKSA.val and VSS version dependency

The service examples and related tests in this repository use VSS signals. VSS signals may change over time, and backward-incompatible changes may be introduced as part of major releases. Some of the tests in this repository rely on using the latest version of the KUKSA.val Databroker and KUKSA.val DBC Feeder. Some code in the repository (like proto definitions) has been copied from KUKSA.val.

This means that regressions may occur when KUKSA.val or the KUKSA.val Feeders are updated. The intention is for other KUKSA.val repositories to use the latest official VSS release as default. There is a script for manually updating the KUKSA.val proto files: update-protobuf.sh.

cd integration_test/
./update-protobuf.sh --force

The Seat Service currently supports two modes (VSS 3.X and VSS 4.0). As part of VSS 4.0, the instance scheme for seat positions was changed to be based on DriverSide/Middle/PassengerSide rather than Pos1, Pos2, Pos3.

By default the Seat Service uses VSS 4.0 seat positions, but for older dependencies it can be switched to VSS 3.X compatibility by setting the environment variable VSS=3 for the seat service container or command line.
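
For example, following the container examples above (a sketch; image name and port as used there):

# Run the seat service in VSS 3.X compatibility mode
docker run --rm -it -p 50051:50051/tcp -e VSS=3 seat-service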

Known locations where an explicit VSS or KUKSA.val version is mentioned

  • In integration_test.yml, uncomment the following line to force VSS 3.X support in the databroker: # KDB_OPT: "--vss vss_release_3.1.1.json"

  • In run-databroker.sh: the script fetches both the VSS 3 and VSS 4 JSON files from KUKSA.val master and starts the databroker with the matching version, based on the environment variable USE_VSS3=1:

    wget -q "https://raw.githubusercontent.com/eclipse/kuksa.val/master/data/vss-core/vss_release_3.0.json" -O "$DATABROKER_BINARY_PATH/vss3.json"
    wget -q "https://raw.githubusercontent.com/eclipse/kuksa.val/master/data/vss-core/vss_release_4.0.json" -O "$DATABROKER_BINARY_PATH/vss4.json"
  • In prerequisite_settings.json, hardcoded versions are specified for the KUKSA.val Databroker, KUKSA.val DBC Feeder, and KUKSA.val Client.

  • In test_val_seat.py: tests use the proper seat position datapoint, depending on the environment variable USE_VSS3.

  • In Seat Service main.cc: two different datapoint sets are registered, based on the environment variable VSS (3 or 4).

NOTE: The locations mentioned above should be checked for breaking VSS changes on master.
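
As a usage sketch for the run-databroker.sh behaviour mentioned above (assuming the script reads USE_VSS3 from the environment):

# Start the databroker with the VSS 3 data model instead of the default VSS 4
USE_VSS3=1 ./run-databroker.sh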

kuksa.val.services's Issues

Archiving planned for 2024-12-31?

All content for which there is interest in continued maintenance has been migrated to https://github.com/eclipse-kuksa

So shall we keep this repo alive until 2024-12-31 and until then only accept fixes/patches?
Or shall we take a more aggressive approach and basically remove everything from "main" to ensure that no one continues to use the "main" branch by mistake, but only released/tagged versions?

FYI @SebastianSchildt

Seat Service CI

Seat Service CI is a mess, see also #47

We need a working CI that

  • builds the seat service, and produces working Docker images for amd64 and arm

"Speed" is secondary. Don't care it takes one hour, if it works.... It should also roughly "do the same" we require from users to build locally (i.e. it is a mystery to me, why using the container in toplevel "tools" directory to build, seems to work, when CI does not )

Mockservice - dapr installation

Objective: To launch mockservice for controlling actuators and sensors.

Issue:
The run-mockservice script is failing at the dapr CLI installation. Please take a look at the attached log file. I am using Debian 12.0.

dapr_insfail.txt

#######################################################

Ensure dapr

#######################################################

  • Detected x86_64 architecture

Installing dapr latest version: v1.11.0
Getting the latest Dapr CLI...
Your system is linux_amd64
Installing Dapr CLI...

Installing v1.11.0 Dapr CLI...
Downloading https://github.com/dapr/cli/releases/download/v1.11.0/dapr_linux_amd64.tar.gz ...
Failed to install dapr
Failed to install Dapr CLI
For support, go to https://dapr.io

Query: Is there any container image for mockservice?

Port mock-service to use kuksa-client lib

Currently the Mock Service is not very maintainable, as it uses its own homespun gRPC wrappers. This is a lot of code duplication; instead it should use the well/better tested kuksa-client lib.

Potential issue: it might rely on the pending "discoverability" issue in the val.v1 API. In any case, I suggest we start moving as many calls as possible over to the API now, and follow up with the rest once upstream has updated the kuksa-client and databroker API.

Mock Service default works only with VSS 3.1.1

The Mock Service default mock.py works only with VSS 3.1.1. VSS 4.0 produces the following error:

ERROR:base_service:Failed to register datapoints
Traceback (most recent call last):
  File "/home/imu6fe/kuksa.val.services/.venv/lib/python3.8/site-packages/lib/baseservice.py", line 106, in _on_broker_connectivity_change
    self.on_databroker_connected()
  File "mockservice.py", line 74, in on_databroker_connected
    loader_result = PythonDslLoader().load(self._vdb_metadata)
  File "/home/imu6fe/kuksa.val.services/.venv/lib/python3.8/site-packages/lib/loader.py", line 140, in load
    mocked_datapoints = self._load_mocked_datapoints(vdb_metadata)
  File "/home/imu6fe/kuksa.val.services/.venv/lib/python3.8/site-packages/lib/loader.py", line 65, in _load_mocked_datapoints
    metadata = vdb_metadata[datapoint["path"]]
KeyError: 'Vehicle.Cabin.Seat.Row1.Pos1.Position'
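
A possible workaround until the default mock.py supports VSS 4.0 is to start the databroker with a VSS 3.1.1 catalogue, matching the --vss option shown in the integration_test.yml note earlier on this page (a sketch; the binary name and JSON file location depend on your setup):

# Assumption: a local databroker binary and a downloaded VSS 3.1.1 JSON file
./databroker --vss vss_release_3.1.1.json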

Seat Service documentation and build

Seat Service documentation is really bad and partially wrong. We need to describe

  • WHAT are the different build scripts doing, and what is the expected output
  • If we keep them dependent on the local environment, list the dependencies (currently you need to run them, check the error message, and solve it, e.g. by installing something, and then try again).

What I observed, for example, is that even though I could run build_release.sh after some fiddling, build_docker.sh, which also first builds the (presumably) same stuff outside Docker (is this clever?), fails. Why is that?

Also, the build created the file

bin_vservice-seat_aarch64_release.tar.gz

which contains only licenses and .proto files (why?) but no binaries.

This needs to be fixed; we need

  • Correct documentation
  • A build less dependent on the local environment, e.g. building more in Docker. Maybe the dev container is that, but it is not clearly stated.

Also, the devcontainer fails with

> => transferring dev_containers_feature_content_source: 326.07kB        0.1s
 => ERROR [dev_container_auto_added_stage_label 2/4] RUN apt-get update &  0.5s
------
 > [dev_container_auto_added_stage_label 2/4] RUN apt-get update &&   apt-get install -qqy curl wget zip &&   apt-get install -qqy git &&   apt-get install -qqy bash &&   apt-get install -qqy xz-utils &&   apt-get install -qqy apt-transport-https:
#0 0.316 standard_init_linux.go:228: exec user process caused: exec format error
------
Dockerfile-with-features:28
--------------------
  27 |     # Install basic utils needed inside devcontainer
  28 | >>> RUN apt-get update && \
  29 | >>>   apt-get install -qqy curl wget zip && \
  30 | >>>   apt-get install -qqy git && \
  31 | >>>   apt-get install -qqy bash && \
  32 | >>>   apt-get install -qqy xz-utils && \
  33 | >>>   apt-get install -qqy apt-transport-https
  34 |       
--------------------
ERROR: failed to solve: process "/bin/sh -c apt-get update &&   apt-get install -qqy curl wget zip &&   apt-get install -qqy git &&   apt-get install -qqy bash &&   apt-get install -qqy xz-utils &&   apt-get install -qqy apt-transport-https" did not complete successfully: exit code: 1

Bug in Dockerfile for mock service

The CMD arguments in the Dockerfile do not find mockservice.py, since it is located in another folder, "mock", which is at the same level as the Dockerfile.
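
Until the Dockerfile is fixed, a workaround sketch is to override the command at container start with a path matching the repository layout (the image name and the availability of python3 in the image are assumptions):

# Hypothetical image name; adjust to your local build
docker run --rm -it <mockservice-image> python3 mock/mockservice.py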

Mock Service create_set_action() works only once when setting to a fixed value on trigger

mock_datapoint(
    path="Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition",
    initial_value=0,
    behaviors=[
        create_behavior(
            trigger=EventTrigger(EventType.ACTUATOR_TARGET),
            action=create_set_action(0),
        )
    ],
)

It seems like the behavior runs only once when the target value is set to something:

INFO:base_service:Connecting to Data Broker [127.0.0.1:55555]
INFO:base_service:Using gRPC metadata: None
INFO:base_service:[127.0.0.1:55555] Connectivity changed to: ChannelConnectivity.IDLE
INFO:base_service:Connected to data broker
INFO:mock_service:Subscribing to mocked datapoints...
INFO:mock_service:Feeding 'Vehicle.Body.Windshield.Front.Wiping.System.Mode' with value STOP_HOLD
INFO:mock_service:Feeding 'Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition' with value 0
INFO:base_service:[127.0.0.1:55555] Connectivity changed to: ChannelConnectivity.READY
INFO:behavior:Running behavior for Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition
INFO:mock_service:Feeding 'Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition' with value 12
INFO:behavior:Running behavior for Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition
INFO:behavior:Running behavior for Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition

what I did in kuksa-client:

Test Client> setTargetValue Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition 100
OK

Test Client> getValue Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition
{
    "path": "Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition",
    "value": {
        "value": 12.0,
        "timestamp": "2023-07-12T09:48:44.249621+00:00"
    }
}

Test Client> setValue Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition 100
OK

Test Client> setTargetValue Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition 10
OK

Test Client> getValue Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition
{
    "path": "Vehicle.Body.Windshield.Front.Wiping.System.TargetPosition",
    "value": {
        "value": 100.0,
        "timestamp": "2023-07-12T09:49:30.348992+00:00"
    }
}

Maybe not the use case we're aiming to mock; this is more of a report, so a low-priority issue.

Support ARM64 for dev container

Problem:

The current dev container for the project is only available for x86_64, which forces arm64 users to run the dev container via QEMU. As expected, this is very slow, and dapr does not work from within docker-in-docker in a QEMU environment.

Question about seat_service behaviour

Hello,

I am trying to run seat_service in a container, but it looks like it is stuck trying to connect to the databroker:

DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
DataBrokerFeederImpl: Connecting to data broker [databroker:55555] ...
...

I have the following environment variables set in the container:

CAN=cansim
DAPR_GRPC_PORT=52002
BROKER_ADDR=databroker:55555
RUST_LOG="info,databroker=info,vehicle_data_broker=info"

I am also using the following image: ghcr.io/eclipse/kuksa.val.services/seat_service:v0.1.0

I tried to use telnet and I was able to establish a connection to the databroker.

Are those env vars enough or is my setup missing something?

Thanks!

Needed content for release

We have not had any "real" releases for this repo in a long time, so I am trying to figure out what could be done better. Looking at our latest release at https://github.com/eclipse/kuksa.val.services/releases/tag/v0.2.0, we have the following artifacts:


That is:

  • Some generic stuff (LICENSE and NOTICE)
  • Seat binaries and seat documentation
  • Hvac and Seat containers

Somewhat inconsistent, and now we also have mock service. Some questions:

My idea is to refactor this one so it:

  • Does not upload or include ghcr images; uploads can happen directly from lower-level workflows. But the release may trigger those lower-level workflows, so that a release is only created if they succeed.
  • Always use the tag as the version, i.e. it should not be possible to specify the tag manually?

Opinions?

Seat service: Container ecu-reset and missing candump

Context
The Seat Service example container can be configured to use a simulated CAN (default) or reconfigured to use a physical CAN via environment variables.

We are using the Seat Service container as-is with a physical CAN, connected to an actual ECU via CAN with a real physical seat.

However, the ECU requires calibration on first use and for that purpose, there is a script packaged into the container: tools/ecu-reset.sh

Bug description
Unfortunately, the script errors out when run inside of a container, as the container is missing a required dependency: candump from can-utils.

Steps to reproduce

  • Configure the container to use SC_CAN=can0 and CAN=can0 environment variables
  • Start container -> application may print an error ECU is not calibrated, consider running ecu-reset.sh or alike (it's hard to copy&paste from raspi terminal)
  • Create a shell inside of the container (on Leda, sdv-ctr-exec -n seatservice-example /bin/bash), and execute script tools/ecu-reset.sh -s can0
  • The script errors with candump not found

Tried alternatives / Workaround

  • Copied the script from GitHub Source Repository to the Leda device
  • Run the script manually (after fixing the shebang, as there is no bash on Leda)
  • ECU motor was properly reset

Suggested fix

  • Easiest: Install can-utils into the seat-service example container, so that the script can be run from within the container.
  • Advanced: Include the ecu-reset logic in the seat service itself. Imho, a service should be doing that without the user having to run arbitrary scripts somewhere. If there is already a warning and a recommendation in the code, why not just execute it? Or provide an MQTT interface to trigger the ECU reset, which would be much easier to use than opening shells.
  • Minor improvement: fix the ecu-reset.sh script to use #!/bin/sh as it seems to be sufficient, not requiring bash.
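
A quick sketch of the "easiest" fix above, assuming the seat-service image is Debian/Ubuntu-based (not verified here):

# Inside the running container, or as an additional RUN step in the seat_service Dockerfile
apt-get update && apt-get install -y can-utils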

Migrating kuksa.val.services to eclipse-kuksa

The KUKSA project has a long-term goal to migrate all assets to the https://github.com/eclipse-kuksa organization. This issue is intended to cover the work and to be used for discussion. It is possible that the content of this repository may be split to allow for independent updates of the services/tests.

Current proposal:

Content Proposed New Repo
mock_service kuksa-mock-provider
hvac_service kuksa-incubation (no security)
seat_service kuksa-incubation (no security)
integration_test kuksa-integration-test

Mock Service only uses the first condition that is fulfilled, others are ignored

We need a discussion about whether we want to support multiple conditions being fulfilled at once, or whether that should not be possible. For example, one behavior is triggered if the target value is set and Vehicle.Speed is 0, and another is triggered if the target value is set and Vehicle.Width is 10. What do we want to do with that? Will we allow it? The current state is that the first one defined gets executed and the other one gets ignored.
