EnMasse Workshop

In this workshop you will deploy EnMasse, Apache Spark and an IoT sensor simulator. You will gain insight into deploying and operating an EnMasse cluster, and connect it to a Spark cluster for analyzing the sensor data.

Prerequisites

This tutorial can be run from scratch, where you install OpenShift, EnMasse and Spark yourself. You might also have an environment with these components already set up for you, in which case you can skip the parts marked optional. When installing from scratch, the tutorial requires Ansible to deploy components to OpenShift.

To build the Java code, you need Maven installed on your machine. If you don't have it, follow the official installation guide. Finally, the OpenShift client tools are used for several operations.

Overview

In this workshop we will be working with the following components:

  • An EnMasse messaging service
  • A Spark application containing the analytics code
  • A Thermostat application performing command & control of devices
  • One or more IoT device simulators

The first will be deployed directly to OpenShift and may already be set up for you. The Spark and thermostat applications will be built and deployed to OpenShift from your laptop, and the IoT device simulators will run locally on your laptop.

deployment

(Optional) Installing OpenShift

Downloading and installing minishift

If you don't have an OpenShift cluster available, you can use minishift to run OpenShift locally on your laptop. Minishift supports all major OS platforms. Go to https://github.com/minishift/minishift/releases, select the latest version, and download the archive for your OS.

Starting minishift

For this workshop, we are going to use the Service Catalog, which is part of the experimental features group at the time of writing and therefore needs to be explicitly enabled.

export MINISHIFT_ENABLE_EXPERIMENTAL=y

With this set, the --service-catalog flag can be used when starting minishift in order to enable the Service Catalog. You also need at least 4 GB of RAM for your minishift instance, since we're running both EnMasse and Spark on a local OpenShift cluster.

minishift start --cpus 2 --memory 4096 --service-catalog true

Once this command completes, the OpenShift cluster should be ready to use.

In order to run the Ansible playbook used for deploying EnMasse, the logged-in user needs admin rights. On a hosted cluster this should be handled by the cluster administrator, but with minishift it is not the case from the beginning. For this reason, you need to assign the cluster-admin role to the user (i.e. "developer").

oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin developer
oc login -u developer -p developer

Exploring the console

Take a few minutes to familiarize yourself with the OpenShift console. If you use minishift, you can run minishift dashboard, which will open the console in your web browser. With minishift, you can log in with username developer and password developer.

Getting OC tools

In order to execute commands against the OpenShift cluster, the oc client tool is needed. Go to the OpenShift Origin client tools releases and download the latest stable release (3.7.2 as of time of writing). Unpack the release:

tar xvf openshift-origin-client-tools-v3.7.2-282e43f-linux-64bit.tar.gz

Then add the extracted folder with the oc tools to the PATH:

export PATH=$PATH:$PWD/openshift-origin-client-tools-v3.7.2-282e43f-linux-64bit

(Optional) Installing EnMasse

EnMasse is an open source messaging platform, with focus on scalability and performance. EnMasse can run on your own infrastructure or in the cloud, and simplifies the deployment of messaging infrastructure.

For this workshop, all messages will flow through EnMasse in some way.

The EnMasse version used in this workshop can be found in the enmasse directory. We will use an Ansible playbook to install EnMasse and have a look at its options.

Running the playbook

This workshop will use the following playbook:

- hosts: localhost
  vars:
    namespace: enmasse-system
    multitenant: true
    enable_rbac: true
    service_catalog: true
    keycloak_admin_password: admin
    authentication_services:
      - standard
  roles:
    - enmasse

This playbook instructs Ansible to install EnMasse in the enmasse-system namespace in OpenShift. We will use the Service Catalog integration to make it easy to provision the messaging service, and Keycloak for authentication. If your OpenShift cluster is on a public network, please change keycloak_admin_password to a value of your choice.

You can modify the settings to your liking, but the rest of the workshop assumes the values above.

To install EnMasse, first log in to your OpenShift cluster, then run the playbook:

oc login -u developer -p developer https://localhost:8443 
ansible-playbook enmasse/ansible/playbooks/openshift/workshop.yml

Startup

You can observe the state of the EnMasse cluster using oc get pods -n enmasse-system. When all the pods are in the Running state, the cluster is ready. While waiting, go to the OpenShift console.

In the OpenShift console, you can see the different deployments for the various EnMasse components. You can go into each pod and look at the logs. If you go to the address controller log, you can see that it is creating a 'default' address space.

IoT Application

The main part of this workshop is to set up an end-to-end IoT application.

  1. Login to the OpenShift console:

    OpenShiftLogin

  2. Create a workspace for your project:

    OpenShift1 OpenShift1

This project will be used to deploy the IoT application.

Provisioning messaging

We now provision messaging infrastructure to use with the application.

  1. In the OpenShift Service Catalog overview, select the "EnMasse (standard)" service:

    Provision1

  2. Select the 'unlimited-standard' plan:

    Provision2

  3. Select the project previously created and enter a name for the address space:

    Provision3

  4. Skip creating the binding:

    Provision4

  5. The address space will now be provisioned, which may take a few minutes. Jump to the project page:

    Provision5

Once the provisioning is complete you should be able to see the dashboard link which we will later use to access the messaging console and create the addresses we need for the workshop.

Provision6

But first, an introduction to some EnMasse concepts.

Address spaces and addresses

In EnMasse, you have the concepts of address spaces and addresses. When you provision a messaging service like above, you effectively create an address space.

An address space is a group of addresses that can be accessed through a single connection (per protocol). This means that clients connected to the endpoints of an address space can send messages to or receive messages from any address it is authorized to send messages to or receive messages from within that address space. An address space can support multiple protocols, which is defined by the address space type.

Each messaging service provisioned in the service catalog creates a new address space. Conceptually, an address space may share messaging infrastructure with other address spaces.

An address is part of an address space and represents a destination used for sending and receiving messages. An address has a type, which defines the semantics of sending messages to and receiving messages from that address. An address also has a plan, which determines the amount of resources provisioned to support the address.

In the 'standard' address space, we have four types of addresses (a small client sketch follows the list):

  • multicast : 'direct' one-to-many
  • anycast : 'direct' peer-to-peer
  • queue : store-and-forward queue
  • topic : store-and-forward pub/sub
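
To make these types concrete, below is a minimal sketch of a standalone AMQP client that sends to a topic and receives from an anycast address. It uses Apache Qpid JMS purely for illustration; it is not the client code used by the workshop applications, and the hostname and credentials are placeholders you would obtain from a binding (described later).

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.qpid.jms.JmsConnectionFactory;

public class AddressExample {
    public static void main(String[] args) throws Exception {
        // Endpoint and credentials are placeholders; real values come from a binding.
        // A real TLS connection also needs transport.trustStoreLocation pointing at the messaging cert.
        ConnectionFactory factory = new JmsConnectionFactory(
                "amqps://messaging-enmasse-user1.192.168.1.220.nip.io:443");
        Connection connection = factory.createConnection("<username>", "<password>");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // 'temperature' is a topic: every subscriber gets a copy of each message.
        MessageProducer producer = session.createProducer(session.createTopic("temperature"));
        producer.send(session.createTextMessage("{\"device-id\": \"device1\", \"temperature\": 22}"));
        // 'max' is an anycast address: each message goes to exactly one consumer.
        MessageConsumer consumer = session.createConsumer(session.createQueue("max"));
        Message message = consumer.receive(5000);
        if (message instanceof TextMessage) {
            System.out.println("Received: " + ((TextMessage) message).getText());
        }
        connection.close();
    }
}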

Creating addresses for this workshop

  1. Click on the dashboard link to get to the messaging console:

    Provision6

    You will be redirected to a login screen.

  2. Click the 'OpenShift' button to log in with your OpenShift credentials:

    Login1

  3. On the side of the login form, you can see a button named "OpenShift". Click on that to authenticate your user using your OpenShift credentials:

    Login2

  4. Allow Keycloak to read your user info:

    AuthAccess1

  5. Once logged in, click on the "Addresses" link and click "create" to create addresses:

    Create1

  6. Create the temperature address with type topic. This address is used by devices to report their temperature:

    CreateTemp1

  7. Select the sharded-topic plan (NOTE this is required for MQTT to work. You can choose pooled-topic if using AMQP only):

    CreateTemp2

  8. Create the max address with type anycast. This address is used by the spark app to send messages to the thermostat app:

    CreateMax1

  9. Select the standard plan:

    CreateMax2

  10. Create the per-device addresses, e.g. control/device1. These are used by the thermostat app to send control messages to each device:

    Createdev1

  11. Select the sharded-topic plan (NOTE this is required for MQTT to work. You can choose pooled-topic if using AMQP only):

    Createdev2

Once the addresses have been created, they should all be marked ready by the green 'tick' in the address overview:

AddrOverview1

Authentication and Authorization

In this workshop we aim to set up a secure-by-default IoT solution, so we need to define the applications and which addresses they need to access. Before we create the bindings, let's define the mapping of which addresses each component needs to access:

  • deviceX :
    • recv: control/deviceX
    • send: temperature
  • spark :
    • recv: temperature
    • send: max
  • thermostat :
    • recv: max
    • send: control*

We will create the bindings to each of the applications as we deploy them.

Deploying the "Temperature Analyzer" Spark application

The Spark application is composed of two parts:

  • Apache Spark cluster
  • Apache Spark driver

Deploying a Spark Cluster

  1. Login to the cluster using the command line:

    oc login https://localhost:8443 -u user1
    oc project user1
    
  2. Deploy the cluster using the template provided in this tutorial:

    oc process -f spark/cluster-template.yaml MASTER_NAME=spark-master | oc create -f -
    

    This will deploy the Spark cluster, which may take a while. In your project overview you should see the deployments:

    Spark1

Deploying the Spark driver

The iot/spark-driver directory provides the Spark Streaming driver application and a Docker image for running it inside the cluster. The spark-driver is built and deployed on the OpenShift cluster: it uses the fabric8-maven-plugin to create a Docker image and an OpenShift deployment config, and to deploy the spark-driver into OpenShift.

  1. Build the spark driver

    cd iot/spark-driver
    mvn clean package fabric8:resource fabric8:build fabric8:deploy -Dspark.master.host=spark-master.<user>.svc
    

    This command will package the application, build a Docker image and deploy it to OpenShift. You can watch the status by looking at the build:

    Spark2

    Once the driver has been deployed, you should see it in the project overview:

    Spark3

  2. We now need to create a binding with the permissions we defined above. Click on "Create binding" to open the dialog to create a binding:

    SparkBinding1

  3. Set sendAddresses to max and recvAddresses to temperature:

    SparkBinding2

  4. The binding should be created successfully; close the dialog:

    SparkBinding3

  5. Go to the secret that was created:

    SparkBinding4

  6. Click "Add to application":

    SparkBinding5

    This will allow you to modify your application deployment to mount the secret so that the example application can use it.

  7. Select the spark-driver as the application, choose the option to mount the secret, and enter /etc/app-credentials as the mount point:

    SparkBinding6

The spark-driver will now redeploy and read the credentials from the binding.

SparkBinding7
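
Applications with the binding secret mounted in this way (the spark-driver here, and the thermostat later on) read their credentials from that mount rather than hard-coding them. The snippet below is a minimal sketch of that pattern, assuming the secret's username and password keys are exposed as individual files under /etc/app-credentials; the actual application code in this repository may read them differently.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BindingCredentials {
    // The mount point chosen when adding the secret to the application.
    private static final Path CREDENTIALS_DIR = Paths.get("/etc/app-credentials");
    // Each key of the binding secret is mounted as a separate file (assumed key names).
    static String read(String key) throws java.io.IOException {
        return new String(Files.readAllBytes(CREDENTIALS_DIR.resolve(key))).trim();
    }
    public static void main(String[] args) throws Exception {
        String username = read("username");
        String password = read("password");
        System.out.println("Connecting to EnMasse as " + username);
        // Other keys from the binding (for example the messaging host) can be read the same way.
    }
}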

Deploying the "Thermostat" application

The thermostat application uses the fabric8-maven-plugin to create a Docker image and an OpenShift deployment config, and to deploy the thermostat into OpenShift.

  1. Build the application as a Docker image and deploy it to the OpenShift cluster:

    cd iot/thermostat
    mvn package fabric8:resource fabric8:build fabric8:deploy -Dfabric8.mode=openshift
    

    You can see the builds by going to the builds menu again:

    Thermostat1

    Eventually, the thermostat is deployed:

    Thermostat2

    Once the thermostat has been deployed, we need to create a binding with the permissions we defined above.

  2. Click on "Create binding" to open the dialog to create a binding:

    Thermostat3

  3. Set sendAddresses to control* and recvAddresses to max:

    Thermostat4

  4. Go to the secret that was created:

    Thermostat5

  5. Click "Add to application":

    Thermostat6

    This will allow you to modify your application deployment to mount the secret so that the example application can use it.

  6. Select the thermostat as the application, choose the option to mount the secret, and enter /etc/app-credentials as the mount point:

    Thermostat7

The thermostat will now redeploy and read the credentials from the binding:

Thermostat8

Running the IoT simulated devices

Heating simulated devices are provided for simulating data sent to the IoT system and for receiving messages. The devices support two protocols, AMQP and MQTT, which are configurable. The heating device application:

  • gets temperature values from a simulated DHT22 temperature sensor and periodically sends them to the temperature address
  • receives commands for opening/closing a simulated valve on the control/$device-id address

The console application can be configured using a device.properties file, which provides the following parameters (a sketch of the simulated sensor follows the list):

  • service.hostname : hostname of the EnMasse messaging/mqtt service to connect to (for AMQP or MQTT)
  • service.port : port of the EnMasse messaging service to connect to
  • service.temperature.address : address to which temperature values will be sent (should not be changed from the temperature value)
  • service.control.prefix : prefix for defining the control address for receiving commands (should not be changed from the control value)
  • device.id : device identifier
  • device.username : device username (from binding) for EnMasse authentication
  • device.password : device password (from binding) for EnMasse authentication
  • device.update.interval : periodic interval for sending temperature values
  • device.transport.class : transport class to use in terms of protocol. Possible values are io.enmasse.iot.transport.AmqpClient for AMQP and io.enmasse.iot.transport.MqttClient for MQTT
  • device.transport.ssl.servercert : server certificate file path for accessing EnMasse using a TLS connection
  • device.dht22.temperature.min : minimum temperature provided by the simulated DHT22 sensor
  • device.dht22.temperature.max : maximum temperature provided by the simulated DHT22 sensor
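
As a rough illustration of the simulated sensor, the sketch below emits a random temperature between the configured minimum and maximum on every update interval. It is a simplified stand-in, not the actual DHT22 simulation shipped in iot/clients.

import java.util.Random;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SimulatedDht22 {
    public static void main(String[] args) {
        // Normally taken from device.properties (device.dht22.temperature.min/max,
        // device.update.interval); hard-coded here for brevity.
        final int min = 20;
        final int max = 30;
        final long updateIntervalMs = 1000;
        final Random random = new Random();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            int temperature = min + random.nextInt(max - min + 1);
            // The real device sends this value to the 'temperature' address.
            System.out.println("Sampled temperature: " + temperature);
        }, 0, updateIntervalMs, TimeUnit.MILLISECONDS);
    }
}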

Configuring device

  1. Create another binding for the device:

    Device1

  2. This time, we want the device to read control messages for its address. We also want it to be able to send to the temperature address. Most importantly, we want to get hold of the external hostnames that the device can connect to.

    NOTE However, use '*' for both sendAddresses and recvAddresses, as there is a known limitation around authorization and MQTT in EnMasse.

    Make sure 'externalAccess' is set:

    Device2

  3. Once created, view the device secret:

    Device3

  4. Reveal the secret:

    Device4

  5. Copy the values for the externalMessagingHost and externalMessagingPort and configure the service.hostname and service.port fields in iot/clients/src/main/resources/device-amqp.properties. For MQTT, use the values externalMqttHost and externalMqttPort and write them to iot/clients/src/main/resources/device-mqtt.properties instead.

    Store the value of the messagingCert.pem field in a local file and update the device.transport.ssl.servercert field in iot/clients/src/main/resources/device-amqp.properties. The messagingCert and mqttCert fields contain the certificates needed by the AMQP and MQTT clients respectively.

    Device5

  6. Finally, copy the values for the username and password into device.username and device.password in the device properties file:

    Device6

An example final configuration:

# service related configuration
service.hostname=messaging-enmasse-user1.192.168.1.220.nip.io
service.port=443
service.temperature.address=temperature
service.control.prefix=control
# device specific configuration
device.id=device1
device.username=user-8fc43b14-98ab-4f70-940b-2fcbb681bdf7
device.password=qpNWm/zEc+H5V5oadG9jh7WwkySZXRTOEDy/MtgqrlQ=
device.update.interval=1000
device.transport.class=io.enmasse.iot.transport.AmqpClient
device.transport.ssl.servercert=messagingCert.pem
# device sensors specific configuration
device.dht22.temperature.min=20
device.dht22.temperature.max=30
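
The device.transport.class parameter suggests that the device picks its protocol client at runtime from the configuration. The sketch below shows how such a properties file could be loaded and the transport instantiated reflectively; the no-argument constructor and the surrounding wiring are assumptions for illustration, not the actual API of the classes in iot/clients.

import java.io.FileInputStream;
import java.util.Properties;

public class DeviceConfigLoader {
    public static void main(String[] args) throws Exception {
        // The single command line argument is the path to the device properties file.
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream(args[0])) {
            config.load(in);
        }
        String hostname = config.getProperty("service.hostname");
        int port = Integer.parseInt(config.getProperty("service.port"));
        String transportClass = config.getProperty("device.transport.class");
        // Instantiate the configured transport (AmqpClient or MqttClient) reflectively.
        // Assumption: a no-argument constructor; the real client classes may be wired differently.
        Object transport = Class.forName(transportClass).getDeclaredConstructor().newInstance();
        System.out.printf("Using %s to connect to %s:%d%n",
                transport.getClass().getSimpleName(), hostname, port);
    }
}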

Run device using pre-built JARs

The provided heating-device.jar can be used for starting a simulated heating device with the following command for AMQP (replace with device-mqtt.properties for MQTT):

java -jar iot/clients/jar/heating-device.jar iot/clients/src/main/resources/device-amqp.properties

The console application needs only one argument, which is the path to the device.properties file with the device configuration.

Example output:

Device7

If you go to your messaging console, you should see the different clients connected sending messages:

Device8

Run device using Maven

In order to run the HeatingDevice application, you can use the Maven exec plugin with the following commands from the clients directory:

cd iot/clients
mvn package
mvn exec:java -Dexec.mainClass=io.enmasse.iot.device.impl.HeatingDevice -Dexec.args=<path-to-device-properties-file>

You can run this command multiple times in order to start more than one device (using a different Keycloak user and device-id for each). The provided device-amqp.properties and device-mqtt.properties files can be used as a starting point for AMQP and MQTT device configuration.


enmasse-workshop's Issues

Adding registering receiver on address

Add to the base Client (and then to the AmqpClient and MqttClient implementations) a way of registering to receive messages on a specific address (using an AMQP receiver or the MQTT subscribe feature) in order to eventually handle command & control.

Tuning cores and memory needed by Spark executors

In order to share the Spark cluster between multiple spark-driver applications, we need to tune the spark-submit parameters related to cores per executor (--executor-cores, --total-executor-cores, --executor-memory, ...). The example starts a one-node Spark cluster with 8 cores. The same should be considered for memory.

In the current state, the first spark-driver gets all 8 available cores and another one cannot run.

EnMasse console asks for credentials

@lulf with the latest update to 0.19.0, trying to access the EnMasse console asks me for credentials (it didn't happen before; the authentication page with the OpenShift button was shown).
