stellio-hub / stellio-context-broker

Stellio is an NGSI-LD compatible context broker

Home Page: https://stellio.readthedocs.io

License: Apache License 2.0

Kotlin 99.84% Shell 0.03% Dockerfile 0.13%
ngsi-ld iot etsi fiware context-broker

stellio-context-broker's Introduction

Stellio context broker


Stellio is an NGSI-LD compliant context broker developed by EGM. NGSI-LD is an open API and data model specification for context management, published by ETSI.

Stellio is a FIWARE Generic Enabler. Therefore, it can be integrated as part of any platform “Powered by FIWARE”. FIWARE is a curated framework of open source platform components which can be assembled together with other third-party platform components to accelerate the development of Smart Solutions. For more information check the FIWARE Catalogue entry for Core Context. The roadmap of this FIWARE GE is described here.

You can find more info at the FIWARE developers website and the FIWARE website. The complete list of FIWARE GEs and Incubated FIWARE GEs can be found in the FIWARE Catalogue.

NGSI-LD Context Broker Feature Comparison

The NGSI-LD Specification is regularly updated and published by ETSI. The latest specification is version 1.8.1 which was published in April 2024.

  • An Excel file detailing the current compatibility of the development version of the Stellio Context Broker against the features of the 1.8.1 specification can be downloaded here
📚 Documentation 🐳 Docker Hub 🎯 Roadmap

Overview

Stellio is composed of 2 business services:

  • The search service is in charge of managing the information context and handling temporal (and geospatial) queries; it is backed by a TimescaleDB database
  • The subscription service is in charge of managing subscriptions and the subsequent notifications; it is also backed by a TimescaleDB database

It is complemented by:

  • An API Gateway module that dispatches requests to downstream services
  • A Kafka streaming engine that decouples communication inside the broker (and allows plugging other services seamlessly)

The services are based on the Spring Boot framework, developed in Kotlin, and built with Gradle.

Quick start

A quick way to start using Stellio is to use the provided docker-compose.yml file in the root directory (feel free to change the default passwords defined in the .env file):

docker-compose up -d && docker-compose logs -f

It will start all the services composing the Stellio context broker platform and expose them on the following ports:

  • API Gateway: 8080
  • Search service: 8083
  • Subscription service: 8084
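
Once the containers are up, a quick way to check that the broker answers is to query it through the API Gateway (a minimal sketch, assuming the default ports listed above; the Beehive type is purely illustrative):

curl -s "http://localhost:8080/ngsi-ld/v1/entities?type=Beehive" -H "Accept: application/ld+json"
# An empty array ([]) is expected on a fresh installation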

Please note that the environment and scripts are validated on Ubuntu and macOS. Some errors may occur on other platforms.

We also provide a configuration to deploy Stellio in a Kubernetes cluster. For more information, please have a look at the stellio-k8s project.

Docker images tagging

Starting from version 2.0.0, a new scheme is used for tagging Docker images:

  • Releases are tagged with the version number, e.g., stellio/stellio-search-service:2.0.0
  • The latest tag is no longer used for releases, as it can be dangerous (for instance, it could trigger an unwanted major upgrade)
  • On each commit on the develop branch, an image with the latest-dev tag is produced, e.g., stellio/stellio-search-service:latest-dev

The version number is obtained during the build process by using the version information in the build.gradle.kts file.
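
For instance (a sketch based on the tagging scheme described above; adjust the service name and version to your needs):

# Pull a released version
docker pull stellio/stellio-search-service:2.0.0
# Pull the latest build of the develop branch
docker pull stellio/stellio-search-service:latest-dev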

Development

Developing on a service

Requirements:

  • Java 21 (we recommend using sdkman! to install and manage versions of the JDK)
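
For instance, with sdkman! (the distribution identifier below is only an example; list the available ones first and pick any Java 21 distribution):

sdk list java
sdk install java 21.0.2-tem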

To develop on a specific service, you can use the provided docker-compose.yml file inside each service's directory, for instance:

cd search-service
docker-compose up -d && docker-compose logs -f

Then, from the root directory, launch the service:

./gradlew search-service:bootRun
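
To check that the service has started correctly, you can query its health endpoint (assuming the Spring Boot actuator is enabled and the default port from the quick start section):

curl http://localhost:8083/actuator/health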

Running the tests

Each service has a suite of unit and integration tests. You can run them without manually launching any external component, thanks to Spring Boot's embedded test support and to the great Testcontainers library.

For instance, you can launch the test suite for the search service with the following command:

./gradlew search-service:test
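
You can also restrict the run to a single test class with Gradle's --tests filter (the class name below is only illustrative):

./gradlew search-service:test --tests "*TemporalEntityHandlerTests"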

Building the project

To build all the services, you can just launch:

./gradlew build

It will compile the source code, check the code quality (thanks to Detekt) and run the test suite for all the services.

For each service, a self-executable jar is produced in the build/libs directory of the service.
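
Such a jar can then be run directly with java -jar (a sketch; the exact file name depends on the version defined in build.gradle.kts, and the backing services from the docker-compose file must be up):

java -jar search-service/build/libs/search-service-<version>.jar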

If you want to build only one of the services, you can launch:

./gradlew search-service:build

Committing

Commits follow the Conventional Commits specification.
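
For instance, commit messages follow the type(scope): description pattern (the scopes and descriptions below are purely illustrative):

git commit -m "feat(search): support geo-queries on temporal entities"
git commit -m "fix(subscription): escape quotes in notification payloads"
git commit -m "chore: upgrade version to x.y.z"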

Code quality

Code formatting and standard code quality checks are performed by Detekt.

Detekt checks are automatically performed as part of the build and fail the build if any error is encountered.
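
They can also be launched on their own (assuming the standard tasks provided by the Detekt Gradle plugin):

# Run Detekt on all services
./gradlew detekt
# Run Detekt on a single service
./gradlew search-service:detekt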

  • You may consider using a plugin like Save Actions that applies code reformatting and optimizes imports on save.

  • You can enable Detekt support with the Detekt plugin.

Working locally with Docker images

To work locally with a Docker image of a service without publishing it to Docker Hub, you can follow the instructions below:

  • Build a tar image:
./gradlew search-service:jibBuildTar
  • Load the tar image into Docker:
docker load --input search-service/build/jib-image.tar
  • Run the image:
docker run stellio/stellio-search-service:latest

Releasing a new version

  • Merge develop into master
git checkout master
git merge develop
  • Update version number in build.gradle.kts (allprojects.version near the bottom of the file)
  • Commit the modification using the following template message
git commit -am "chore: upgrade version to x.y.z"
  • Push the modifications
git push origin master

The CI will then build Docker images tagged with the released version number and publish them to https://hub.docker.com/u/stellio.

Usage

To start using Stellio, you can follow the API quick start.
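
As a first taste, creating an entity could look like the following (a minimal sketch assuming the default docker-compose setup from the quick start section; the Beehive entity is purely illustrative):

curl -X POST "http://localhost:8080/ngsi-ld/v1/entities" \
  -H "Content-Type: application/ld+json" \
  -d '{
    "id": "urn:ngsi-ld:Beehive:01",
    "type": "Beehive",
    "temperature": { "type": "Property", "value": 22.2, "unitCode": "CEL" },
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
  }'
# A 201 Created response with a Location header pointing to the new entity is expected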

Minimal hardware requirements needed to run Stellio

The recommended system requirements may vary depending on factors such as the scale of deployment, usage patterns, and specific use cases. That said, here are the general guidelines for the minimum computer requirements:

  • Processor: Dual-core processor or higher
  • RAM: 4GB or higher (1.8GB is needed just to run it)
  • Storage: at least 4GB of free disk space (3.8GB is needed just to run it)
  • Operating System: Linux (recommended), macOS (also recommended), or Windows

Please note that these requirements may vary based on factors such as the size of your dataset, the number of concurrent users, and the overall complexity of your use case.

Further resources

For more detailed explanations on NGSI-LD or FIWARE:

License

Stellio is licensed under APL-2.0.

It mainly makes use of the following libraries and frameworks (dependencies of dependencies have been omitted):

Library / Framework Licence
Spring APL v2
Titanium JSON-LD APL v2
Reactor APL v2
Jackson APL v2
JUnit EPL v2
Mockk APL v2
JsonPath APL v2
WireMock APL v2
Testcontainers MIT

© 2020 - 2024 EGM

stellio-context-broker's People

Contributors

agaldemas-eridanis, ahabid, bobeal, dependabot[bot], francklg, github-actions[bot], gpoujol, houcemkacem, jason-fox, philippestoltz, poulominandy, ranim-n, vraybaud


stellio-context-broker's Issues

Finalize integration of Testcontainers in all modules

Spring provides some test support with embedded DBs (e.g. for neo4j) or with some mocked abstractions (e.g. for Kafka), but they are not so easy to integrate and are prone to breaking.

As it is now validated, use the docker-compose integration provided by Testcontainers, which allows us to reuse our existing configs without duplicating effort.

Finally, make sure the integration is consistent across all the modules (this could even be, at least partially, in the shared lib).

Migrate and update the quick guide

Currently, the quick guide referenced in the README.md points to the one located in https://github.com/easy-global-market/ngsild-api-data-models.

This is error prone, as we may (and did!) forget to update it when there are changes to the API, and it is also confusing for users who have to jump to another site to quickly start using the project.

So the purpose of this issue is to bootstrap a new quick guide in this repository, with sample data in a samples directory, based on the beekeeping data model.

Make it oriented towards a real use case (and not a showcase of requests like the existing one). For instance:

  • Create context information (beekeeper, beehive, ...), atomic and / or in batch
  • Perform 2 or 3 simple queries
  • Create a subscription on updates to values of properties of a beehive and explain how to get notified (see the sketch after this list)
  • Update values of properties of a beehive
  • Query the temporal evolution of the properties of a beehive
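
For reference, the subscription step mentioned above could look roughly like this (a sketch based on the NGSI-LD subscription API, not on the final guide content; the type, attribute and notification endpoint are illustrative):

curl -X POST "http://localhost:8080/ngsi-ld/v1/subscriptions" \
  -H "Content-Type: application/ld+json" \
  -d '{
    "id": "urn:ngsi-ld:Subscription:01",
    "type": "Subscription",
    "entities": [{ "type": "Beehive" }],
    "watchedAttributes": ["temperature"],
    "notification": {
      "endpoint": { "uri": "http://my-consumer:8080/notifications", "accept": "application/json" }
    },
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
  }'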

Improve entity handling process and introduce minimal validation

  • Ensure an NGSI-LD entity has an id and a type at creation time
  • Avoid an extra JSON parsing to bootstrap the Entity node
  • Remove useless (and error prone) entity type checking on patch / update operations
  • Move entity compaction method to ExpandedEntity class
  • Rename neo4jservice to entityService

Find why the use of a jarcache for JSON-LD Java raises a ClassNotFoundException

When setting up a jarcache.json in the classpath, it is seen as expected by JSON-LD Java.

However, its use raises a ClassNotFoundException from the library as it does not find some shaded com.google... classes (to be reproduced to have the exact stacktrace).

Not blocking but not optimal for performance (even if the library has a local cache when running).

Fix http client dependency conflicts

The liquigraph dependency added in commit 8297c11 to manage neo4j migrations results in version conflicts for the Apache HTTP client.
This raises a "method not found" error when using jsonld-java.

Refactor and improve the batch entity creation process

Summary of refactorings to be done in next sprint:

  • Move the new methods introduced in ParsingUtils into NgsiLdParsingUtils (they parse NGSI-LD payloads)
  • Improve readability of method signatures and code (e.g. for complex pairs containing a map and a list)
  • Rename ValidationUtils to something more suited to what it does (indeed, it does not do any validation). Maybe an EntityOperationService?
  • Improve the processing of the graph of entities being created

Quick thoughts about processing of entities:

  • Basically we are working with a (potentially circular) graph of entities
    • First step is to build it
    • It can be done manually, via something like an adjacency list, or via a library (for instance, Guava has such a data structure): to be decided together
  • In this graph, we have to identify:
    • Referenced entities provided in the payload (they will have to be created first, starting from the leaves)
    • Referenced entities already existing in the DB (ok, similar to a leaf)
    • Referenced entities that do not exist (this is a 400 for the owning entity)
    • Circular dependencies: an entity for which there is a path leading back to itself, marked at the highest possible level (closest to the root) in the graph
  • Starting from the leaves, we can create the entities that have no dependencies on a not-yet-created entity from the payload
    • Then going up the tree until the root entities
    • Postponing the creation of the incoming relationships on circular entities until all entities are created

Fix temporal values injection

related to #1
The temporal values injection in the search service fails with the following error:
class java.time.OffsetDateTime cannot be cast to class java.time.ZonedDateTime (java.time.OffsetDateTime and java.time.ZonedDateTime are in module java.base of loader 'bootstrap')

As in the error detail, there is a type mismatch.
More precisely, the attributes' observed_at values are returned as OffsetDateTime from the Postgres DB and should be correctly cast to ZonedDateTime.

Fix temporal properties data type

Currently, the temporal properties (createdAt, modifiedAt and observedAt) have the OffsetDateTime data type; it should be changed to ZonedDateTime to support timezone information.

Fix error response when creating an entity having non-existent relationship object

When creating an entity having a non-existent relationship object, the expected error response is a 400 with the detail (Target entity $objectId does not exist, create it first).

However, the error response is a 500 with the following payload:

{
    "detail": "There has been an error during the operation execution",
    "title": "There has been an error during the operation execution",
    "type": "https://uri.etsi.org/ngsi-ld/errors/InternalError"
} 

Wrap JSON-LD expanded entity into a class

Instead of manipulating clumsy structures like a Pair(Map<String,Any>, List<String>), wrap such a structure into a class so that method contracts and the associated manipulations become more straightforward.

Implement all rules related to HTTP request pre-conditions

Some of the rules are already implemented; the task consists in implementing all the missing ones. All the code should go into the shared library.

For reference in the specification:

  • API operations definitions: 5.5.2, 5.5.3
  • HTTP binding: 6.3.2, 6.3.3, 6.3.4

Search service fails to create new temporal property

The search service fails to create new temporal properties (Implements 6.20.3.1) for the following reasons:

  • The call to addAttributeInstances() service function in the TemporalEntityHandler should be flatMapped and not mapped.
  • We are storing the attributeName in an expanded format, but using the short type to call the getForEntityAndAttribute() service function, so no temporalEntityAttributeUuid is returned.
  • In the addAttributeInstances service function, we are calling the getPropertyValueFromMapAsDateTime() util function with the EGM_OBSERVED_BY parameter (https://ontology.eglobalmark.com/egm#observedBy), but I saw that it was expanded as (https://uri.etsi.org/ngsi-ld/observedAt)
return body
            .flatMapMany {
                Flux.fromIterable(expandJsonLdFragment(it, contextLink).asIterable()) ------> here expanded as https://uri.etsi.org/ngsi-ld/observedAt
            }
            .flatMap {
                temporalEntityAttributeService.getForEntityAndAttribute(entityId, it.key.extractShortTypeFromExpanded()) ------> here called with short type
                    .map { temporalEntityAttributeUuid ->
                        Pair(temporalEntityAttributeUuid, it) }
            }
            .map {
                attributeInstanceService.addAttributeInstances(it.first,
                    it.second.key.extractShortTypeFromExpanded(), ------> here called with short type
                    expandValueAsMap(it.second.value))
            }
            .collectList()
            .map {
                ResponseEntity.status(HttpStatus.NO_CONTENT).build<String>()
            }

Format all files aligned with ktlint and share the format settings

Not all files are formatted the same way.
Currently, we don't share any formatting configuration, so developers have no common baseline for formatting.

We should format all files and share an IntelliJ configuration so that developers can contribute to the project more easily.

Internal error when BadRequest exception occurs

When doing the following request:

POST /ngsi-ld/v1/entities HTTP/1.1

{
  "id": "urn:ngsi-ld:Vehicle:A12388",
  "type": "Vehicle",
  "speed": {
    "type": "Property"
  },
  "@context": [
    "https://raw.githubusercontent.com/easy-global-market/ngsild-api-data-models/master/shared-jsonld-contexts/egm.jsonld",
    "https://raw.githubusercontent.com/easy-global-market/ngsild-api-data-models/master/aquac/jsonld-contexts/aquac.jsonld",
    "http://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"
  ]
}

The response should be a BadRequestData error (400 HTTP) like the following, helping the user figure out what is wrong in the request:

{
    "detail": "Key $NGSILD _PROPERTY_VALUE not found in $propertyValues").
    "type": "https://uri.etsi.org/ngsi-ld/errors/BadRequestData",
    "title": "The request includes input data which does not meet the requirements of the operation"
}

But today we get this (500 HTTP):

{
    "detail": "There has been an error during the operation execution",
    "type": "https://uri.etsi.org/ngsi-ld/errors/InternalError",
    "title": "There has been an error during the operation execution"
}

Investigation:
This happens because the error occurs in a function with a @Transactional annotation, which does not seem to propagate the exception correctly.

Add @context in events sent to Kafka

Consumers may need to expand the operation payload received in an event.

Currently, they have to extract it from the entity payload, which requires parsing the entity JSON and is not efficient.

A possible solution is to add the @context as a (mandatory) field in the event message.

This is now implemented as part of the Authorization PR.

As a consequence, remove the @context information from the payload field in the message (but not from the entity one).
