eclipse-ditto / ditto

Eclipse Ditto™: Digital Twin framework of Eclipse IoT - main repository

Home Page: https://eclipse.dev/ditto/

License: Eclipse Public License 2.0

Shell 0.05% Java 98.46% Scala 0.15% HTML 0.34% JavaScript 0.18% FreeMarker 0.01% Groovy 0.01% Dockerfile 0.01% SCSS 0.01% Mustache 0.01% TypeScript 0.78%
eclipseiot iot internet-of-things digital-twin eclipse-ditto ditto hacktoberfest java mongodb akka

ditto's Introduction


Eclipse Ditto™

Join the chat at https://gitter.im/eclipse/ditto

Eclipse Ditto™ is an IoT technology implementing a software pattern called “digital twins”.
A digital twin is a virtual, cloud-based representation of its real-world counterpart (real-world “Things”, e.g. devices like sensors, smart heating, connected cars, smart grids, EV charging stations, …).

An ever-growing list of adopters makes use of Ditto as part of their IoT platforms - if you are using it as well, it would be great if you showed your adoption here.

Documentation

Find the documentation on the project site: https://www.eclipse.dev/ditto/

Eclipse Ditto™ explorer UI

Find a live version of the latest explorer UI: https://eclipse-ditto.github.io/ditto/

You should be able to work with your locally running Ditto instance using the local_ditto environment - and you can add additional environments to also work with, e.g., a deployed installation of Ditto.


Getting started

In order to start up Ditto via Docker Compose, you'll need:

  • a running Docker daemon
  • Docker Compose installed
  • for a "single instance" setup on a local machine:
    • at least 2 CPU cores which can be used by Docker
    • at least 4 GB of RAM which can be used by Docker

There are other ways to run Ditto as well; please have a look here to explore them.

Start Ditto

In order to start the latest built Docker images from Docker Hub, simply execute:

cd deployment/docker/
docker-compose up -d

Check the logs after starting up:

docker-compose logs -f

Open the following URL to get started: http://localhost:8080
Or have a look at the "Hello World"

Additional deployment options are also available, if Docker Compose is not what you want to use.

Development Guide

If you plan to develop extensions for Ditto or to contribute some code, the following steps will be of interest to you.

⚠️ If you just want to start/use Ditto, please ignore the following sections!

Build and start Ditto locally

In order to build Ditto, you'll need:

  • JDK >= 17
  • Apache Maven >= 3.8.x
  • a running Docker daemon

Perform the following steps to first build Ditto and then start the built Docker images:

1. Build Ditto with Maven

mvn clean install

Skip tests:

mvn clean install -DskipTests

2. Build local Ditto Docker snapshot images

./build-images.sh

If your infrastructure requires a proxy, its host and port can be set using the -p option, for example:

./build-images.sh -p 172.17.0.1:3128

Please note that the given host and port automatically apply to both HTTP and HTTPS.

3. Start Ditto with local snapshot images

cd ../deployment/docker/
# the "dev.env" file contains the SNAPSHOT number of Ditto, copy it to ".env" so that docker compose uses it:
cp dev.env .env
docker-compose up -d

Check the logs after starting up:

docker-compose logs -f

You now have the following running:

  • a MongoDB as backing datastore of Ditto (not part of Ditto but started via Docker)
  • Ditto microservices:
    • Policies
    • Things
    • Things-Search
    • Gateway
    • Connectivity
  • an nginx acting as a reverse proxy performing simple "basic authentication", listening on port 8080

ditto's People

Contributors

abalarev, alstanchev, brvbosch, bs-jokri, ctron, danielfesenmeyer, derschwilk, dguggemos, dimabarbul, erikescher, ffendt, ghandim, ihsansensoy, jbartelh, jokraehe, julianfeinauer, kaizimmerm, marianne-klein, neottil, pranshu-g, silviageorgievalyoteva, stmaute, thfries, thjaeckle, vadimgue, vladica, vvasilevbosch, w4tsn, yannic92, yufei-cai


ditto's Issues

GDPR compliance for project site

As per the mail from 04.05:

requirements:

  1. All project web pages must include a footer that prominently links back to key pages, and a copyright notice. The following minimal set of links must also be included on the footer for all pages in the official project website:
    a. Main Eclipse Foundation website (http://www.eclipse.org);
    b. Privacy policy (http://www.eclipse.org/legal/privacy.php);
    c. Website terms of use (http://www.eclipse.org/legal/termsofuse.php);
    d. Copyright agent (http://www.eclipse.org/legal/copyright.php); and
    e. Legal (http://www.eclipse.org/legal).
  2. Approved Eclipse logos are available on the Eclipse Logos and Artwork page: https://eclipse.org/artwork/
  3. A user must be requested to give their consent, and explicit consent must be given by the user before a project website can start using cookies. This requirement also includes cookies used by 3rd party services such as, but not limited to: Google Analytics, Google Tag Manager, and social media widgets.
  4. Project websites must not collect and/or store and/or display personal information.
  5. Project websites using 3rd party services such as, but not limited to, google analytics must be explicit about which company or companies have access to the data collected. For example, the project website must identify on their website the individuals or organizations who have access to google analytics data.
  • 1. All project web pages must include a footer that prominently links back to key pages, and a copyright notice.
  • 2. Approved Eclipse logos are available on the Eclipse Logos and Artwork page -> update to new Eclipse foundation logo
  • 3. A user must be requested to give their consent, and explicit consent must be given by the user before a project website can start using cookies.
  • 4. Project websites must not collect and/or store and/or display personal information.
  • 5. Project websites using 3rd party services such as, but not limited to, google analytics must be explicit about which company or companies have access to the data collected.

Inconsistencies between things-service and search-index because of data which is too long to be indexed

MongoDB has a limit of 1024 bytes per index entry: https://docs.mongodb.com/manual/reference/limits/#indexes
In the search-db, we create lots of index entries (e.g. for attributes). In things-db we do not create such indexes, because we do not need extensive search functionality there.
If a thing-event (or whole thing in case of sync) contains a single value which is too big to index, the whole update-operation of the search-updater will fail. This leads to data inconsistencies.

Therefore, we should truncate values which exceed the MongoDB limit.
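Such a truncation could be sketched as follows (Python for illustration; the helper name and the exact headroom kept below MongoDB's 1024-byte index entry limit are assumptions, not Ditto's actual implementation):

```python
# Assumed headroom below MongoDB's 1024-byte index entry limit,
# leaving room for key names and BSON overhead.
MAX_INDEX_VALUE_BYTES = 950

def truncate_for_index(value: str, limit: int = MAX_INDEX_VALUE_BYTES) -> str:
    """Truncate a string so its UTF-8 encoding fits within the index limit."""
    encoded = value.encode("utf-8")
    if len(encoded) <= limit:
        return value
    # Cut at the byte limit, then drop any trailing partial multi-byte character.
    return encoded[:limit].decode("utf-8", errors="ignore")
```

Applying this to every indexed value before the search-updater writes it would let the rest of the update operation succeed instead of failing wholesale.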

Enhance existing AMQP-bridge with AMQP 0.9.1 connectivity

Currently the amqp-bridge can connect to AMQP 1.0 endpoints (such as Eclipse Hono).
We also want to support connecting to AMQP 0.9.1 brokers (e.g. the commonly used RabbitMQ) in order to add another interface for interacting with Ditto via the Ditto Protocol.

Update to Kamon 1.0.0

Kamon 1.0.0 was released recently: http://kamon.io/teamblog/2018/01/18/kamon-1.0.0-is-out/

We are still using version 0.6.x for monitoring in the Ditto services.
Update to 1.0.0 brings the huge benefit that we can use the Kamon Prometheus reporter: http://kamon.io/documentation/1.x/reporters/prometheus/
which leads to a more predictable "pull approach" for metrics in the cluster.
Currently, when running under high load, the metrics sent via UDP further increase the network load.

Connect Ditto and Hono via the AMQP protocol

Hello. I'm trying to use Hono and Ditto. Both are already running on my computer, but I can't find any documentation about the AMQP connection. I found the DevOps commands, but I can't access this API because it's in the sandbox. I start Docker from the ditto/docker file and open my browser:
-localhost/8081 opened
-localhost/8081/api opened
-localhost/8081/devops not found
I start Docker from the ditto/docker/sandbox file and open my browser:
-localhost/8081
-localhost/8125
-ditto.eclipse.org
They can't be opened.
How can I connect Hono and Ditto via the AMQP protocol so they can talk to each other?

Allow placeholders in the authorization context of a connection

Currently the subjects of an authorization context are defined as a fixed string, which means you have to grant this subject access to all devices of a connection. To provide more flexibility we want to introduce placeholders in the authorization subject which are replaced before the signal is processed in Ditto.
For example you can define the authorization context of a connection as:

...
"authorizationContext": ["ditto:{{ header:device-id }}"]
...

The placeholder {{ header:device-id }} is then replaced by the value of the device-id header. If a placeholder cannot be resolved, e.g. because the specified header is missing, the message is rejected.
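The resolution described above can be sketched as follows. This is an illustrative Python model, not Ditto's actual implementation; the function and exception names are hypothetical:

```python
import re

# Matches placeholders of the form {{ header:<name> }}
PLACEHOLDER = re.compile(r"\{\{\s*header:([A-Za-z0-9_-]+)\s*\}\}")

class UnresolvedPlaceholderError(Exception):
    """Raised when a placeholder references a header that is not present."""

def resolve_auth_subject(subject_template: str, headers: dict) -> str:
    """Replace each header placeholder with the header value, or reject the message."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in headers:
            raise UnresolvedPlaceholderError(f"header '{name}' is missing")
        return headers[name]
    return PLACEHOLDER.sub(substitute, subject_template)
```

For example, `resolve_auth_subject("ditto:{{ header:device-id }}", {"device-id": "sensor-1"})` yields the subject `ditto:sensor-1`, while a missing header raises the error (modeling message rejection).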

Minimal json 1.9.5

Hi, I see that you use minimal json 1.9.5. Can you point me to the CQ so that our project can piggyback and maybe get mj 1.9.5 into Orbit?

Reduce network load for cache-sync of authorization-data

We currently use Akka Distributed Data for synchronizing an authorization cache (thing-cache, policy-cache) between all services which either read (search, gateway) or update (things, policies) this data.
This works fine, but creates unnecessary network load because the whole cache is distributed to all shards, even if a shard does not need the data. Another disadvantage of the current approach is that the "Akka Distributed Data" cache cannot be cleared, which may cause memory problems in the long run.

In this task, we should at least address the following issues:

  • remove policy-cache from search: sync via events and full-sync fallback (without cache) has been tested and works fine
  • replace "Akka Distributed Data" with a local cache per shard, holding only the data relevant for this shard
    • implement the cache as an LRU cache with a configurable maximum size
    • the cache-entries should be lazy-loaded on first access
    • update the cache whenever the cached data changes (e.g. by Events via PubSub), the things-service and policy-service should no longer write to the cache actively
    • to make sure that the cache is eventually consistent even when events get lost, let the cache entries expire after a configurable time: a re-load will be necessary on next access
    • to further optimize memory usage, remove entries from the cache which have not been accessed for a configurable time

Caffeine seems to be a good choice for a local cache implementation: https://github.com/ben-manes/caffeine
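The proposed cache semantics (bounded LRU, lazy loading, expiry, and invalidation when change events arrive) can be modeled as follows. This is an illustrative Python sketch, not the suggested Caffeine-based Java implementation; all names are hypothetical:

```python
import time
from collections import OrderedDict

class ExpiringLruCache:
    """LRU cache with a maximum size and per-entry expiry, loading entries lazily."""

    def __init__(self, loader, max_size=1024, ttl_seconds=60.0, clock=time.monotonic):
        self._loader = loader          # called on first access / after expiry
        self._max = max_size
        self._ttl = ttl_seconds
        self._clock = clock
        self._entries = OrderedDict()  # key -> (value, expires_at)

    def get(self, key):
        now = self._clock()
        entry = self._entries.get(key)
        if entry is not None and entry[1] > now:
            self._entries.move_to_end(key)      # mark as recently used
            return entry[0]
        value = self._loader(key)               # lazy load on miss or expiry
        self._entries[key] = (value, now + self._ttl)
        self._entries.move_to_end(key)
        if len(self._entries) > self._max:
            self._entries.popitem(last=False)   # evict least recently used
        return value

    def invalidate(self, key):
        """Drop an entry, e.g. when a change event for it arrives via PubSub."""
        self._entries.pop(key, None)
```

The TTL gives eventual consistency even when events are lost (a stale entry is simply re-loaded on the next access after expiry), while `invalidate` models the event-driven update path.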

Provide basic documentation

We should think about what and how to provide basic documentation for Eclipse Ditto.
Currently we have a small "Readme" and a "Getting started" (both as .md files).

The documentation should contain chapters about:

  • scope
  • architecture
  • build/installation
  • api
  • tutorials / examples

The format of the documentation should be:

  • "mainstream", so it's well known in OSS and everyone is simply able to contribute
  • usable as source for a webpage (hosted on eclipse.org)
  • powerful enough to support tables and simple maintenance of links

My proposal is to start with a list of GitHub flavored markdown files in a folder structure within "documentation" and use the GitHub repo for rendering / navigation.

Remove JavaScript libraries of documentation from repo

Use a CDN instead (if even possible):

documentation/src/main/resources/docson/lib/*
documentation/src/main/resources/docson/docson.js
documentation/src/main/resources/docson/docson-swagger.js
documentation/src/main/resources/docson/widget.js

Build failure

with a fresh clone of the initial check-in, I get the following compilation error:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:35 min
[INFO] Finished at: 2017-10-06T17:26:31+02:00
[INFO] Final Memory: 104M/1130M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.5.1:compile (default-compile) on project ditto-signals-commands-batch: Compilation failure
[ERROR] /Users/kartben/Repositories/ditto/signals/commands/batch/src/main/java/org/eclipse/ditto/signals/commands/batch/ExecuteBatch.java:[155,45] unreported exception X; must be caught or declared to be thrown
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :ditto-signals-commands-batch

Support Hono's command&control in Ditto connectivity

Basically the "target" configuration for AMQP 1.0 endpoints should already work.
Hono, however, defines that the AMQP target address has the form control/${tenant_id}/${device_id}.

Therefore Ditto has to support dynamically determining the device_id which should be placed inside the address.
E.g. by applying some kind of "variable" replacement where the following things could be referenced:

  • namespace
  • local thing id
  • header fields
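Such a variable replacement could be sketched like this (Python for illustration; the helper name is hypothetical and the `${...}` template syntax follows Hono's address form quoted above):

```python
def build_target_address(template: str, values: dict) -> str:
    """Fill a Hono-style address template such as 'control/${tenant_id}/${device_id}'.

    `values` maps variable names (e.g. derived from the namespace, local thing id,
    or header fields) to the strings to substitute.
    """
    address = template
    for name, value in values.items():
        address = address.replace("${" + name + "}", value)
    return address
```

For example, `build_target_address("control/${tenant_id}/${device_id}", {"tenant_id": "t1", "device_id": "4711"})` produces `control/t1/4711`.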

AMQP message rejected when sent from Hono to Ditto

Hi All,

I connected between hono and ditto on AMQP bridge.

But I can't send an AMQP message (add thing etc.) from Hono to Ditto using hono.vertx.example.HonoTelemetrySender (I modified the JSON object).
I get the following exception:

amqp-bridge_1 | 2018-04-04 12:40:23,616 INFO [ID:AMQP_NO_PREFIX:TelemetrySenderImpl-14] o.e.d.s.a.m.CommandProcessorActor akka://ditto-cluster/system/sharding/amqp-connection/28/honoverysecrethono/pa/amqpCommandProcessor-honoverysecrethono - Got DittoRuntimeException 'things:thing.notmodifiable' when command via AMQP was processed: The Thing with ID 'appstacle:xdk_14' could not be modified as the requester had insufficient permissions (WRITE is required).

Policy information:
{
  "entries": {
    "owner": {
      "subjects": {
        "nginx:ditto": {
          "type": "nginx basic auth user"
        }
      },
      "resources": {
        "thing:/": {
          "grant": ["READ", "WRITE", "ADMINISTRATE"],
          "revoke": ["READ", "WRITE", "ADMINISTRATE"]
        },
        "policy:/": {
          "grant": ["READ", "WRITE", "ADMINISTRATE"],
          "revoke": []
        },
        "message:/": {
          "grant": ["READ", "WRITE", "ADMINISTRATE"],
          "revoke": ["READ", "WRITE", "ADMINISTRATE"]
        }
      }
    }
  },
  "modified": true
}

Sender code:

messageSender.send(HonoExampleConstants.DEVICE_ID, null,
        "{\"topic\": \"appstacle/xdk_" + value + "/things/twin/commands/create/\","
            + "\"headers\": {},\"path\": \"/\","
            + "\"value\": {\"__schemaVersion\": 2,\"__lifecycle\": \"ACTIVE\",\"_revision\": 1,"
            + "\"namespace\": \"appstacle\",\"thingId\": \"appstacle:xdk" + value + "\","
            + "\"policyId\": \"appstacle:policy_deneme14\","
            + "\"attributes\": {\"location\": {\"latitude\": 44.673856,\"longitude\": 8.261719}},"
            + "\"features\": {\"accelerometer\": {\"properties\": "
            + "{\"x\": 3.141,\"y\": 2.718,\"z\": 1,\"unit\": \"g\"}}}}}",
        "application/json", token,
        capacityAvail -> {
            capacityAvailableFuture.complete(null);
        }).map(delivery -> {
            nrMessageDeliverySucceeded.incrementAndGet();
            messageDeliveredFuture.complete(null);
            return (Void) null;
        }).otherwise(t -> {
            System.err.println("Could not send message: " + t.getMessage());
            nrMessageDeliveryFailed.incrementAndGet();
            result.completeExceptionally(t);
            return (Void) null;
        });

Regards.

Distinguish between "desired" and "twin" Feature properties

In order to add another important Digital Twin feature to Ditto it requires to distinguish between different state perspectives.

We already have the "twin" and "live" perspective being an important differentiation between "cached state" and "actual live state".
However if a user wants to configure on a Digital Twin the desired state (e.g. "I want that this light is switched on") no matter if it is currently receiving the desired change or not, Ditto needs a place to store that.

I would suggest that

  • the "twin" perspective would always track the "reported" state of the device
  • the "live" perspective stays as it is: by using "live", a device is directly addressed
    • if it cannot receive the command (e.g. in order to retrieve the "on" state), the user of the API gets a timeout.
  • we add a new perspective "desired" (name to be discussed) which Ditto saves separately

I would also like to propose to have a "common API" which handles the "twin/live/desired" under the hood.
E.g. that a user may trigger API calls in which she defines:

  • give me the current value of the "on" property (first trying via "live", falling back to "twin" after a timeout when live doesn't answer in time)
  • change the current value of the "on" property to true (first trying via "live", falling back to "desired" after a timeout when live doesn't answer in time)

A device may retrieve the state it still has to apply in order to be "in sync" with the Digital Twin.
E.g.:

  • device comes online and knows it has state revision 42
  • device asks Ditto: "hey, I am on revision 42, give me the delta of 'desired' property changes I missed"
  • Ditto answers with a diff/delta

What is your opinion on that?

Support mapping arbitrary message payloads in AMQP-bridge

The AMQP-bridge currently expects that all messages it receives are already in Ditto Protocol (JSON).
That means that if Ditto connects for example to Eclipse Hono, the AMQP-bridge can only make sense of data sent in our defined protocol.

That is a problem for various reasons:

  • the Ditto Protocol is not designed to be lightweight - as JSON and with the included data it is quite verbose
  • devices connecting to Eclipse Hono and sending their data e.g. via MQTT should not be aware of the Ditto Protocol - they should just send "what" data should be changed to "which" value

Ditto therefore wants to support a first easy "mapping" of those payloads:

  • new mappings should be added dynamically at runtime (without the need to restart the AMQP-bridge)
  • the mapping should follow the mantra: "keep simple things simple, make hard things possible"
  • mappings must be executed "sandboxed": as the mappings could be defined in a programming language, it must be ensured that they don't interfere with the operation of the AMQP-bridge service or any other mappings

Our idea is to define mappings in JavaScript which is executed inside the JVM with as much sandboxing as possible.

Any other ideas?

Provide Ditto sandbox at ditto.eclipse.org

Containing:

  • running Ditto "single instance" (each service scaled to 1)
  • MongoDB as backing DB
  • nginx as reverse proxy
  • graphite, grafana as monitoring
  • ELK stack for logging

JWT based authentication with Google OAuth2.0 would be the best way to provide login.

It would be cool to have some kind of "landing" page containing:

  • quick start "guide" (e.g. how-to login via swagger and try out HTTP API)
  • the currently running Ditto version
  • live statistics (total things, hot things, etc.)
  • charts (from grafana) about current HTTP requests/second, etc.

Use Akka artery remoting as replacement for Netty 3.10

We cannot use Netty 3.10 due to licensing issues (as mentioned in CQ 16316).
As Netty 3.10 is used by default by Akka and that won't change (as mentioned in this Akka issue), we have to use the UDP (or TCP)-based Aeron remoting as a replacement (which is stable in the Akka version we use).

Netty 3.10 must then be excluded from our dependencies.

Decouple jwt issuer from policy subject issuer

If a jwt issuer changes, policies using the old issuer are no longer working as intended because the authorization subjects provided in JwtAuthenticationDirective have the new issuer as a prefix which might differ from the one used when the policy was created.

Add HTTP API for "live" commands

Ditto's "live" channel is currently only available via the WebSocket binding.
This means that via HTTP it is currently not possible to address a device listening on the "live" channel and have it answer with its live state or change its live state.

The HTTP API for that could be really simple:

  • just add a query parameter channel=live to all existing /things HTTP routes
  • omitting the channel=live query parameter would use the default twin perspective (the cached value) which should still be the default as the value is the cached (persisted) one

Example: Finding out whether a "lamp" is on:

# retrieve the last reported value:
GET /api/2/things/org.eclipse.ditto:fancy-lamp-1/features/lamp/properties/on
returns: true

# retrieve the live value from the device itself:
GET /api/2/things/org.eclipse.ditto:fancy-lamp-1/features/lamp/properties/on?channel=live
returns: false
# someone must have manually switched it off in the meantime

Inconsistencies between API versions when modifying a Thing

I encountered an exception when changing the API version of the request to modify a Thing (PUT /things/{thingId}):

2018-01-29 07:26:46,057 ERROR [48c77585-47b9-470b-96e8-4f16a055f18a] o.e.d.s.t.u.a.ThingUpdater akka://ditto-cluster/system/sharding/search-updater/3/org.eclipse.ditto%3A8f63f01a-1fc0-4968-a2b1-9cf5ed9de08a - Thing to update in search index had neither a policyId nor an ACL: ImmutableThing [thingId=org.eclipse.ditto:8f63f01a-1fc0-4968-a2b1-9cf5ed9de08a, namespace=org.eclipse.ditto, acl=null, policyId=null, attributes={"a1":"v1"}, features=ImmutableFeatures [features=[ImmutableFeature [featureId=f1, properties={"p1":{"fpk1":"otherValue"}}]]], lifecycle=null, revision=2, modified=null]

I was able to trace this error back to inconsistent behavior between things and things-search, as things-search sometimes

  1. is not able to update a Thing (see the aforementioned exception)
  2. updates parts of the authorization that are not updated in the things service

To reproduce the problem, you can create a new Thing using API x and update the Thing using API y, which results in the following outcomes:

| API x (create) | API y (update) | Result in things | Result in things-search |
|----------------|----------------|------------------|-------------------------|
| 1 | 1 | 👍 | 👍 |
| 1 | 2 | 👍 | |
| 1 | 2 (with ACL in body) | 👍 | |
| 1 | 2 (with policyId in body) | 👍 | 👍 |
| 2 | 1 | 👍 | |
| 2 | 1 (with ACL in body) | 👍 | 👍 |
| 2 | 2 | 👍 | |
| 2 | 2 (with ACL in body) | 👍 | |

The implementation should be changed to provide consistent and predictable behavior. Since policies in v2 offer far more configuration possibilities, it should also be forbidden to change back from policies to ACL. I would suggest the following outcomes:

| API x | API y | Result in things | Result in things-search |
|-------|-------|------------------|-------------------------|
| 1 | 1 | 👍 | 👍 |
| 1 | 2 | - | - |
| 1 | 2 (with ACL in body) | - | - |
| 1 | 2 (with policyId in body) | 👍 | 👍 |
| 2 | 1 | 👍 (automatically adds policyId to emitted ThingModified Events) | 👍 |
| 2 | 1 (with ACL in body) | 👍 (removes ACL and automatically adds policyId to emitted ThingModified Events) | 👍 |
| 2 | 2 | 👍 (automatically adds policyId to emitted ThingModified Events) | 👍 |
| 2 | 2 (with ACL in body) | - | - |
| 2 | 2 (with policyId in body) | 👍 | 👍 |

Update dependency of japicmp-maven-plugin

Currently ditto uses version 0.9.3. The most recent version is 0.11.0. Since version 0.9.4 the plugin is marked as @threadSafe, which prevents the following warning during multi-threaded Maven builds:

[WARNING] * Your build is requesting parallel execution, but project      *
[WARNING] * contains the following plugin(s) that have goals not marked   *
[WARNING] * as @threadSafe to support parallel building.                  *
[WARNING] * While this /may/ work fine, please look for plugin updates    *
[WARNING] * and/or request plugins be made thread-safe.                   *
[WARNING] * If reporting an issue, report it against the plugin in        *
[WARNING] * question, not against maven-core                              *
[WARNING] *****************************************************************
[WARNING] The following plugins are not marked @threadSafe in Eclipse Ditto :: Signals :: Commands :: Batch:
[WARNING] com.github.siom79.japicmp:japicmp-maven-plugin:0.9.3
[WARNING] Enable debug to see more precisely which goals are not marked @threadSafe.
[WARNING] *****************************************************************

Add possibility to connect to a Hono instance via AMQP 1.0

  • consume messages (telemetry and events) in Ditto Protocol via Hono
  • add new connections on the fly containing credentials for authentication at Hono

Goal is to have another microservice running in the ditto-cluster which uses an AMQP 1.0 client library in order to connect to an Eclipse Hono instance (e.g. the sandbox running on hono.eclipse.org)

Enhance subscribing for change notifications by optional filter

When currently subscribing for Events/change notifications via

  • WebSocket
  • SSE
  • Connectivity

the consumer always gets all change notifications it is allowed to see.

On SSE this can be reduced by providing specific thingIds and fields, so that changes are only published if the thingId matches or the change affected one of the specified fields.

The subscription should and could be more fine-grained.
The idea is to support adding an optional filter defined via the RQL syntax Ditto already uses for search.

That way the following subscription rules could be applied:

  • notify me of a change only if the thingId starts with org.eclipse.ditto:* : like(thingId,"org.eclipse.ditto:*")
  • notify me of a change only if the feature temperature was affected by this change: exists(feature/temperature)
  • notify me of a change only if in one change the temperature was greater than 25: gt(feature/temperature/properties/value,25)
  • notify me of changes in the namespace org.eclipse.ditto:* affecting the temperature: and(like(thingId,"org.eclipse.ditto:*"),exists(feature/temperature))
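To illustrate the intended semantics of such filters (this is a toy Python model of the predicates used above, with hypothetical names and a flattened thing layout, not Ditto's actual RQL parser), the rules could be thought of as composable predicates over a thing:

```python
import fnmatch

def lookup(thing: dict, path: str):
    """Resolve a slash-separated path against nested dicts; None if absent."""
    node = thing
    for part in path.strip("/").split("/"):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def like(path, pattern):
    return lambda thing: fnmatch.fnmatch(str(lookup(thing, path)), pattern)

def exists(path):
    return lambda thing: lookup(thing, path) is not None

def gt(path, limit):
    def check(thing):
        value = lookup(thing, path)
        return value is not None and value > limit
    return check

def and_(*predicates):
    return lambda thing: all(p(thing) for p in predicates)
```

A change notification would then only be published to a subscriber whose filter predicate evaluates to true for the changed thing.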

Add "definitions" field to Ditto features pointing to Vorto functionblocks

First step towards Eclipse Vorto integration:

  • add an optional field "definitions" in features alongside "properties", containing a list of string identifiers targeting Vorto Function Blocks

Format:

"features": {
  "my-lamp": {
    "definition": [ "org.eclipse.example:Lamp:1.0.0" ],
    "properties": {
      ....
    }
  }
}

Extend README.md of ditto-json

The README should give more details about using the library, e.g. document idioms for creating a JSON object or using JsonFieldDefinitions etc.

docker-compose configuration

Using Docker version 18.03.0-ce and docker-compose 1.21.00, the docker-compose.yml "command" configuration is not overwritten.
This causes a wrong startup sequence of services and a crash of all Ditto services.

The "command" line has to be replaced with an "entrypoint" configuration:

command: sh -c "sleep 10; java -jar /starter.jar"
entrypoint: sh -c "sleep 15; java -jar /opt/ditto/starter.jar"

Logo for Eclipse Ditto

Does Ditto have a logo? If not, let me know if you need my/EF's help setting up a crowd sourcing campaign to get one designed.
It would be nice to have a Ditto logo for the upcoming I4.0 white paper :-)

Ditto Things Snaps

Hello Ditto team,
I would like to know what the role of the things_snaps collection is. There is very little documentation about it.
My data has grown too fast: in less than 24 hours it increased by 70 GB, and almost all of the data is stored in the things_snaps collection.

> db.getCollection('things_snaps').stats(1024 * 1024 * 1024)
{
        "ns" : "things.things_snaps",
        "size" : 225,
        "count" : 38962,
        "avgObjSize" : 6215555,
        "storageSize" : 91
}

Is there any configuration or rule to restrict it?

I'm using Ditto version 0.3.0-M1

Thanks in advance

Support emitting partial change notifications based on Policy READ permission

Currently we only deliver change notifications via the WS or SSE for subjects which have unrestricted "READ" permission for changed Things (via the so-called "read-subjects").

For ACL that was and is sufficient.

But when using a Policy for a Thing in which I only have "READ" permission for a single Feature property I would expect to get a WS message if this property is changed (or also if the complete Feature/Thing is changed and the property was part of that change).

That is however a little difficult as we would have to build "views" of each emitted Event for each WS session.

As we do not want to instantiate the PolicyEnforcer for all used policies again in the WebSocket sessions, I suggest serializing this kind of information (which subject is allowed to read which parts of the Thing affected by an emitted ThingEvent) as a header field.

The format of the header could look like:

{
  "sub1": [
    "/features/lamp1/properties/illuminance",
    "/attributes"
  ],
  "sub2": [
    "/"
  ]
}

This format could be calculated by the PolicyEnforcerActor for an incoming ModifyCommand, filling the header based on the actually changed JSON fields.

The Websocket/SSE would "only" have to use a small interpreter for this format in order to return only a partial Event.
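Such an interpreter could be sketched as follows (Python for illustration; `partial_view` is a hypothetical helper that projects an event payload down to the JSON pointers a subject may read, following the header format proposed above):

```python
def partial_view(event_payload: dict, readable_paths: list) -> dict:
    """Keep only the parts of the payload under the given JSON pointers."""
    if "/" in readable_paths:
        return event_payload            # subject may read everything
    view: dict = {}
    for pointer in readable_paths:
        parts = pointer.strip("/").split("/")
        source, target = event_payload, view
        for part in parts[:-1]:
            if not isinstance(source, dict) or part not in source:
                break                   # pointer not present in this event
            source = source[part]
            target = target.setdefault(part, {})
        else:
            if isinstance(source, dict) and parts[-1] in source:
                target[parts[-1]] = source[parts[-1]]
    return view
```

For the header example above, a subject with `["/features/lamp1/properties/illuminance", "/attributes"]` would receive only those two subtrees of the emitted event, while a subject with `["/"]` receives the full event.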

Failover is not working correctly for AMQP 1.0 connections

The failover parameter of the JMS client should be additionally configured with at least failover.initialReconnectDelay and failover.reconnectDelay. Otherwise it will reconnect after 10ms and the connection attempt fails as it takes more than 10ms.
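For illustration, a Qpid JMS failover connection URI carrying these options might look like the following (the broker host and the delay values are placeholder assumptions, not recommended settings):

```
failover:(amqps://broker.example.com:5671)?failover.initialReconnectDelay=1000&failover.reconnectDelay=5000
```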

Add Openshift Template for ditto

It would be great to have the possibility to run ditto in OpenShift/Kubernetes. As far as I can see, docker-compose is used. It might make sense to provide a YAML template for Kubernetes/OpenShift like, e.g., Kapua.

Enforce data integrity of Features with a Definition

In #60 the Feature Definition was added to the Ditto model.
#60 however did not yet enforce that the Properties of a Feature follow the types defined in a Definition.

The idea of this issue is to add another Ditto microservice responsible for validating that the JSON of a Feature's Properties follow its defined Definition.
The Definition is interpreted as coordinates to an Eclipse Vorto "Function Block".

The Ditto team already contributed a generator to Eclipse Vorto which generates JsonSchema from a Vorto Function Block.

The new Ditto microservice could look up the Function Block at the public Vorto repo "http://vorto.eclipse.org", use its "generator HTTP API" in order to let the Vorto repo generate the JsonSchema and validate changes to a Feature against that JsonSchema.

In order to not always generate the JsonSchema, the generated schemas should be persisted into Ditto's database.

Fix broken documentation links

The following internal documentation links are broken:
basic-connections.md

  • [Manage connections](/connectivity-manage-connections.html)
  • [Payload Mapping Documentation](/connectivity-mapping.html)

connectivity-manage-connections.md:

  • [payload mapping](/connectivity-mapping.html)
  • [Connections](/basic-connections.html)
