Inspired by this awesome paper by Uthayanath Suthakar, Luca Magnoni, David Ryan Smith, and Akram Khan.
The abstract architecture, together with the technologies used, is shown below. The architecture includes:
- Kafka: the message broker that services and devices send their data to. This data is then pushed to HDFS for later batch processing and sent to the Streaming Layer for immediate processing and results.
- HDFS: a distributed, fault-tolerant file system. Raw Kafka messages (raw data) and batch processing results are stored here. This technology was chosen because it integrates well with Apache Spark.
- Batch Layer: Apache Spark SQL. This layer is scheduled to periodically load all raw data from HDFS, deduplicate it, and process it. The result is written to a known folder on HDFS, replacing all the old data in that folder. This data is then used to correct the results produced by the Streaming Layer.
- Streaming Layer: Apache Spark Streaming. Raw data from Kafka arrives at this layer as a continuous stream and is processed in mini-batches. After processing a mini-batch, the layer checks whether new data has appeared in the known folder on HDFS. If so, a merge takes place: the results from the Batch Layer and the Streaming Layer are combined, and the Serving Layer is updated. This ensures that the data in the Serving Layer is eventually consistent.
- Serving Layer: the result data is stored in this layer. The dashboard application reads data from TimescaleDB or Redis to visualize statistics. Admins can use a JDBC client (such as DBeaver) or a Redis client (such as redis-cli) to query the stats from the databases directly.
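To make the Batch Layer's role concrete, here is a minimal plain-Python sketch of its periodic job: deduplicate the raw records (Kafka may deliver a message more than once) and recompute the full result that overwrites the known HDFS folder. The real system does this with Spark SQL over HDFS data; the record fields (`event_id`, `service`, `value`) are illustrative assumptions, not the actual schema.

```python
# Sketch of the Batch Layer job: dedupe all raw records, then
# aggregate them into a complete, accurate result set.
# Field names (event_id, service, value) are hypothetical.

def dedupe(records):
    """Keep only the first occurrence of each event_id
    (Kafka delivery is typically at-least-once)."""
    seen = set()
    out = []
    for rec in records:
        if rec["event_id"] not in seen:
            seen.add(rec["event_id"])
            out.append(rec)
    return out

def batch_process(records):
    """Aggregate deduplicated records into per-service totals.
    The returned dict stands in for the result files that would
    replace the old data in the known HDFS folder."""
    totals = {}
    for rec in dedupe(records):
        totals[rec["service"]] = totals.get(rec["service"], 0) + rec["value"]
    return totals

raw = [
    {"event_id": 1, "service": "api", "value": 10},
    {"event_id": 1, "service": "api", "value": 10},  # duplicate delivery
    {"event_id": 2, "service": "api", "value": 5},
    {"event_id": 3, "service": "db", "value": 7},
]
print(batch_process(raw))  # {'api': 15, 'db': 7}
```

Because the batch job always recomputes from all raw data and replaces the previous output wholesale, its result is accurate regardless of what the faster, approximate streaming path produced in the meantime.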
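The correction step that keeps the Serving Layer eventually consistent can be sketched the same way (plain Python standing in for the Spark Streaming job; the dict-based views are an assumption for illustration): when fresh batch results appear in the known HDFS folder, they override the streaming results for the keys they cover, while streaming-only results the batch has not caught up to are kept.

```python
# Sketch of the merge performed after a mini-batch when new Batch
# Layer output is detected: batch values are authoritative for the
# keys they cover; streaming values fill in the rest.
# (Illustrative only; the real merge runs inside Spark Streaming.)

def merge_results(batch, streaming):
    """Return the corrected view to write to the Serving Layer."""
    merged = dict(streaming)   # start from the fast, approximate view
    merged.update(batch)       # batch values override where available
    return merged

streaming_view = {"api": 16, "db": 7, "cache": 3}   # may count duplicates
batch_view = {"api": 15, "db": 7}                   # deduped, accurate
print(merge_results(batch_view, streaming_view))
# {'api': 15, 'db': 7, 'cache': 3}
```

Each merge moves the Serving Layer closer to the accurate batch view, which is exactly the eventual-consistency property described above.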
The overall workflow of the system is described as follows: