azure-samples / streaming-at-scale

How to implement a streaming at scale solution in Azure

License: MIT License

Shell 25.07% Python 0.28% C# 57.92% Scala 6.19% Dockerfile 0.03% TSQL 1.06% Java 5.82% HCL 3.56% Mustache 0.06%
streaming cosmosdb eventhubs serverless kappa-architecture lambda-architecture azuresqldb azurestreamanalytics streamanalytics azure-stream-analytics

streaming-at-scale's Introduction

page_type: sample
languages:
  - azurecli
  - csharp
  - json
  - sql
  - scala
products:
  - azure
  - azure-container-instances
  - azure-cosmos-db
  - azure-databricks
  - azure-data-explorer
  - azure-event-hubs
  - azure-functions
  - azure-kubernetes-service
  - azure-sql-database
  - azure-stream-analytics
  - azure-storage
  - azure-time-series-insights
statusNotificationTargets:
description: How to set up an end-to-end solution to implement a streaming at scale scenario using a choice of different Azure technologies.

Streaming at Scale


The samples show how to set up an end-to-end solution to implement a streaming at scale scenario using a choice of different Azure technologies. There are many possible ways to implement such a solution in Azure, following the Kappa or Lambda architectures, a variation of them, or even a custom one. Each architectural solution can also be implemented with different technologies, each with its own pros and cons.

More info on streaming architectures can also be found here:

Here's also a list of scenarios where a streaming solution fits nicely:

A good document that describes the stream processing technologies available on Azure is:

Choosing a stream processing technology in Azure

The goal of this repository is to showcase a variety of common architectural solutions and implementations, describe the pros and cons, and provide you with a sample script to deploy the whole solution with 100% automation.

[Diagram: Ingestion -> Processing -> Serving]

Running the samples

Please note that the scripts have been tested on Ubuntu 18.04 LTS, so make sure to use that environment to run them. You can run it using Docker, WSL, or a VM:

Just do a git clone of the repo and you'll be good to go.
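For example (the repository URL is the one referenced elsewhere on this page):

git clone https://github.com/Azure-Samples/streaming-at-scale.git
cd streaming-at-scale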

Each sample may have additional requirements: they will be listed in the sample's README.

Streamed Data

Streamed data simulates an IoT device sending the following JSON data:

{
    "eventId": "b81d241f-5187-40b0-ab2a-940faf9757c0",
    "complexData": {
        "moreData0": 57.739726013343247,
        "moreData1": 52.230732688620829,
        "moreData2": 57.497518587807189,
        "moreData3": 81.32211656749469,
        "moreData4": 54.412361539409427,
        "moreData5": 75.36416309399911,
        "moreData6": 71.53407865773488,
        "moreData7": 45.34076957651598,
        "moreData8": 51.3068118685458,
        "moreData9": 44.44672606436184,
        [...]
    },
    "value": 49.02278128887753,
    "deviceId": "contoso-device-id-000154",
    "deviceSequenceNumber": 0,
    "type": "CO2",
    "createdAt": "2019-05-16T17:16:40.000003Z"
}

Duplicate event handling

Event delivery guarantees are a critical aspect of streaming solutions. Azure Event Hubs provides an at-least-once event delivery guarantee. In addition, the upstream components that compose a real-world deployment will typically send events to Event Hubs with at-least-once guarantees (i.e. for reliability purposes, they should be configured to retry if they do not get an acknowledgement of message reception from the Event Hubs endpoint, even though the message might actually have been ingested). Finally, the stream processing system typically only has at-least-once guarantees when delivering data into the serving layer. Duplicate messages are therefore unavoidable and are better dealt with explicitly.

Depending on the type of application, it might be acceptable to store and serve duplicate messages, or it might be desirable to deduplicate them. The serving layer might even enforce strong uniqueness guarantees (e.g. a unique key in Azure SQL Database). Where possible, the solution templates demonstrate effective duplicate handling strategies for the given combination of stream processing and serving technologies. In most solutions, the event simulator is configured to randomly duplicate a small fraction of the messages (0.1% on average).
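As a concrete illustration, here is a minimal sketch of stream-side deduplication in Spark Structured Streaming, as used in the Azure Databricks samples; the watermark duration is an assumption, and `events` stands for a streaming DataFrame already parsed from Event Hubs using the JSON schema above:

import org.apache.spark.sql.DataFrame

// Deduplicate on eventId; including the event-time column in the key list
// lets Spark evict old dedup state once the watermark moves past it.
def deduplicate(events: DataFrame): DataFrame =
  events
    .withWatermark("createdAt", "15 minutes")
    .dropDuplicates("eventId", "createdAt")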

Integration tests

End-to-end integration tests are configured to run. You can check the latest closed pull requests ("View Details") to navigate to the integration test run in Azure DevOps. The integration test suite deploys each solution and runs verification jobs in Azure Databricks that pull the data from the serving layer of the given solution and verify the solution's event processing rate and duplicate handling guarantees.

Available solutions

At the present time, the available solutions are:

Implement a stream processing architecture using:

  • Kafka on Azure Kubernetes Service (AKS) (Ingest / Immutable Log)
  • Azure Databricks (Stream Process)
  • Cosmos DB (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest)
  • Event Hubs Capture (Store)
  • Azure Storage (Azure Data Lake Storage Gen2)
  • Azure Databricks (Stream Process)
  • Delta Lake (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Azure Databricks (Stream Process)
  • Azure SQL (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Azure Databricks (Stream Process)
  • Cosmos DB (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log) with Kafka endpoint
  • Azure Databricks (Stream Process)
  • Cosmos DB (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Azure Databricks (Stream Process)
  • Delta Lake (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Azure Functions (Stream Process)
  • Azure SQL (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Azure Functions (Stream Process)
  • Cosmos DB (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Stream Analytics (Stream Process)
  • Cosmos DB (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Stream Analytics (Stream Process)
  • Azure SQL (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Stream Analytics (Stream Process)
  • Event Hubs (Serve)

Implement a stream processing architecture using:

  • HDInsight Kafka (Ingest / Immutable Log)
  • Flink on HDInsight or Azure Kubernetes Service (Stream Process)
  • HDInsight Kafka (Serve)

Implement a stream processing architecture using:

  • Event Hubs Kafka (Ingest / Immutable Log)
  • Flink on HDInsight or Azure Kubernetes Service (Stream Process)
  • Event Hubs Kafka (Serve)

Implement a stream processing architecture using:

  • Event Hubs Kafka (Ingest / Immutable Log)
  • Azure Functions (Stream Process)
  • Cosmos DB (Serve)

Implement a stream processing architecture using:

  • HDInsight Kafka (Ingest / Immutable Log)
  • Azure Databricks (Stream Process)
  • Azure SQL Data Warehouse (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Azure Data Explorer (Stream Process / Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Microsoft Data Accelerator on HDInsight and Service Fabric (Stream Process)
  • Cosmos DB (Serve)

Implement a stream processing architecture using:

  • Event Hubs (Ingest / Immutable Log)
  • Time Series Insights (Stream Process / Serve / Store to Parquet)
  • Azure Storage (Serve for data analytics)

Implement a stream processing architecture using:

  • IoT Hub (Ingest)
  • Azure Functions (Stream Process)
  • Azure SQL (Serve)

Implement a stream processing architecture using:

  • Azure Storage (Azure Data Lake Storage Gen2) (Ingest / Immutable Log)
  • Azure Databricks (Stream Process)
  • Delta Lake (Serve)

Implement a stream processing architecture using:

  • IoT Hub (Ingest)
  • Azure Digital Twins (Model Management / Stream Process / Routing)
  • Time Series Insights (Serve / Store to Parquet)
  • Azure Storage (Serve for data analytics)

Note

Performance and services change quickly in the cloud, so please keep in mind that all values used in the samples were tested at the moment of writing. If you find any discrepancies with what you observe when running the scripts, please create an issue and report it and/or create a PR to update the documentation and the sample. Thanks!

streaming-at-scale's People

Contributors

alfeuduran, algattik, azeltov, benjguin, carlbrochu, chetanmsft, dependabot[bot], dubansal, guilhermeslucas, jcocchi, kaizimmerm, kasun04, linkinchow, mamccrea, marcelaldecoa, mtrilbybassett, nzthiago, priyaananthasankar, quanuw, rasavant-ms, supernova-eng, wantedfast, yorek


streaming-at-scale's Issues

Deployment of eventhubs-streamanalytics-azuresql fails

Deployment of eventhubs-streamanalytics-azuresql fails in Azure Cloud Shell with:

Error:
***** [D] Setting up DATABASE
retrieving storage connection string
deploying azure sql
. server: myiottestsql
. database: streaming
creating logical server
enabling access from Azure
deploying database db
creating file share
uploading provisioning scripts
uploading provision.sh...
Finished[#############################################################] 100.0000%
uploading db/provision.sql...
Finished[#############################################################] 100.0000%
running provisioning scripts in container instance
../components/azure-sql/create-sql.sh: line 54: uuidgen: command not found
There was an error, execution halted
Error at line 54
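A likely workaround (my assumption, not a fix confirmed in the repo): make uuidgen available in the environment running the script, or fall back to the kernel's UUID source:

# On Debian/Ubuntu-based images, uuidgen ships in the uuid-runtime package
sudo apt-get update && sudo apt-get install -y uuid-runtime

# Portable fallback that needs no extra package:
uuid=$(cat /proc/sys/kernel/random/uuid)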

Databricks provisioning fails when creating the solution

az cli not set to return JSON results

A user could have configured the Azure CLI not to return JSON results by default. This conflicts with jq usage, as jq expects JSON input. Make sure to explicitly use the -o json option every time the Azure CLI is used and its output is piped into jq.
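For example (the resource group name is hypothetical):

# Without -o json, a user-level "output = table" default would break jq
az group show --name myrg -o json | jq -r '.location'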

Add deduplication in cosmosdb sink

Use Cosmos DB's merge functionality to perform an upsert in case of event duplication (see the sketch after this list), in:

eventhubs-databricks-cosmosdb
eventhubs-functions-cosmosdb
eventhubs-streamanalytics-cosmosdb
eventhubskafka-databricks-cosmosdb
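One way to make the Cosmos DB sink idempotent (a sketch, assuming the Spark-to-Cosmos connector is configured for upsert writes) is to map the event's unique eventId onto the Cosmos DB document id, so a duplicate event overwrites an identical document instead of creating a new one:

import org.apache.spark.sql.DataFrame

// Cosmos DB treats "id" as the document key; deriving it from eventId
// turns duplicate inserts into no-op upserts of identical documents.
def withCosmosId(events: DataFrame): DataFrame =
  events.withColumn("id", events("eventId"))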

[QUERY] Not able to read AVRO file (following the tutorial for event-hubs + apache drill)

Query/Question
It's regarding streaming-at-scale/eventhubs-capture/.

I tried it by sending basic data such as described in the MSDN doc (https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send). However, when I try to read the Avro files, it simply fails. I tried to download the Avro file locally, and I have the same issue.

Is there any particular configuration required on the event hub in order to make Apache Drill work with the content of Event Hubs?

apache drill> select * from dfs.`temp/16.avro`;
Error: UNSUPPORTED_OPERATION ERROR: 'complex union' type is not supported

Column: value
Fragment: 0:0

[Error Id: c0f67d7f-a43d-4efc-b643-b1ff48660a4b on host.docker.internal:31010] (state=,code=0)

Note: I also tried to convert to JSON, but the issue remains the same.

Environment:

  • Windows 10 - Local install, Docker and WSL produce the same issue.

ERROR: Service principal 'http://bie3i8adx-reader' doesn't exist

Streaming at Scale with Azure Data Explorer

Steps to be executed: CIDTM

Configuration:
. Resource Group => bie3i8
. Region => eastus
. EventHubs => TU: 2, Partitions: 2
. Data Explorer => SKU: Standard_D11_v2
. Simulators => 1

Deployment started...

***** [C] Setting up COMMON resources
creating resource group
. name: bie3i8
. location: eastus
creating storage account
. name: bie3i8storage

***** [I] Setting up INGESTION
creating eventhubs namespace
. name: bie3i8eventhubs
. capacity: 2
. capture: False
. auto-inflate: false
creating eventhub instance
. name: bie3i8in-2
. partitions: 2
creating consumer group
. name: dataexplorer

***** [D] Setting up DATABASE
checking Key Vault exists
creating KeyVault bie3i8spkv
checking service principal exists
creating service principal
WARNING: The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli
WARNING: 'name' property in the output is deprecated and will be removed in the future. Use 'appId' instead.
getting service principal
ERROR: Service principal 'http://bie3i8adx-reader' doesn't exist

What does "Service principal 'http://bie3i8adx-reader' doesn't exist" mean?

Use IoT Hub instead of Event Hub

It's not necessary to use Locust if that is a problem with IoT Hub. Just create an app that can be containerized, and we'll use ACI to create many instances and add load to the test (see the sketch below).
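A minimal sketch of that approach (resource group, names, and image are hypothetical):

# Launch N simulator instances on Azure Container Instances
for i in $(seq 1 10); do
  az container create \
    --resource-group myrg \
    --name simulator-$i \
    --image myregistry.azurecr.io/simulator:latest \
    --restart-policy Never
done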

Duplicate data is being generated

Duplicate data is being generated and, as a result, Azure SQL is (correctly :)) preventing it from being inserted:

System.Data.SqlClient.SqlException (0x80131904): Violation of PRIMARY KEY constraint 'PK__#BAEC272__7944C811B84F8B93'. Cannot insert duplicate key in object 'dbo.@payload'. The duplicate key value is (5afa9381-61c1-4d15-ab19-8b616acf65da).
The data for table-valued parameter "@payload" doesn't conform to the table type of the parameter. SQL Server error is: 3602, state: 30
The statement has been terminated.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.CompleteAsyncExecuteReader()
   at System.Data.SqlClient.SqlCommand.EndExecuteNonQueryInternal(IAsyncResult asyncResult)
   at System.Data.SqlClient.SqlCommand.EndExecuteNonQuery(IAsyncResult asyncResult)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
ClientConnectionId:43576d0a-f89c-43b9-b61f-4e82783d4c38
Error Number:2627,State:2,Class:14
ClientConnectionId before routing:8ed108b2-dcc0-4a47-bba7-ea13633358d6
Routing Destination:d7f1e4838529.tr2237.eastus1-a.worker.database.windows.net,11044

Fix processing speed in eventhubs-streamanalytics-cosmosdb

OutgoingMessages should keep up with IncomingMessages (600k per minute)

cd eventhubs-streamanalytics-cosmosdb
./create-solution.sh -d myrg -t 10 -l northeurope
[...]
starting to monitor locusts for 20 seconds... 
locust is sending 872 messages/sec
locust is sending 1621 messages/sec
locust is sending 2697.375 messages/sec
locust is sending 3455.9 messages/sec
locust is sending 5471.5 messages/sec
locust is sending 6810.2 messages/sec
locust is sending 7971.5 messages/sec
locust is sending 9053 messages/sec
locust is sending 9561.8 messages/sec
locust is sending 9753.8 messages/sec
monitoring done
locust monitor available at: http://52.155.179.29:8089

***** [M] Starting METRICS reporting
Event Hub capacity: 12 throughput units (this determines MAX VALUE below).
Reporting aggregate metrics per minute, offset by 1 minute, for 30 minutes.
                                IncomingMessages       IncomingBytes    OutgoingMessages       OutgoingBytes   ThrottledRequests
                                ----------------       -------------    ----------------       -------------  ------------------
                   MAX VALUE              720000           720000000             2949120          1440000000                   -
                                ----------------       -------------    ----------------       -------------  ------------------
    2019-07-08T08:41:51+0200                   0                   0                   0                   0                   0
    2019-07-08T08:42:00+0200                   0                   0                   0                   0                   0
    2019-07-08T08:43:00+0200               53900            53219651               49959            49327981                   0
    2019-07-08T08:44:00+0200              515056           508587368              496420           490177954                   0
    2019-07-08T08:45:00+0200              598890           591349724              578479           571203437                   0
    2019-07-08T08:46:00+0200              600572           593017681              561561           554499015                   0
    2019-07-08T08:47:00+0200              597500           589985683              563482           556393599                   0
    2019-07-08T08:48:00+0200              600960           593408050              513331           506880384                   0
    2019-07-08T08:49:00+0200              601521           593959016              593205           585749886                   0
    2019-07-08T08:50:00+0200              601282           593717804              500150           493861481                   0
    2019-07-08T08:51:00+0200              604751           597149192              514832           508366889                   0
    2019-07-08T08:52:00+0200              608397           600748827              472418           466475537                   0
    2019-07-08T08:53:00+0200              606160           598535776              474963           468996517                   0
    2019-07-08T08:54:00+0200              607886           600239871              477438           471441458                   0
    2019-07-08T08:55:00+0200              605485           597875502              471086           465163684                   0
    2019-07-08T08:56:00+0200              608527           600871623              524951           518354312                   0
    2019-07-08T08:57:00+0200              607039           599406517              399120           394100780                   0
    2019-07-08T08:58:00+0200              607360           599739819              471705           465777118                   0
    2019-07-08T08:59:00+0200              607220           599573247              450127           444471194                   0
    2019-07-08T09:00:00+0200              609149           601490846              452120           446435778                   0
    2019-07-08T09:01:00+0200              609889           602221035              480420           474384460                   0
    2019-07-08T09:02:00+0200              608722           601070463              416933           411690081                   0
    2019-07-08T09:03:00+0200              603686           596098118              473337           467389152                   0
    2019-07-08T09:04:00+0200              590519           583094755              444143           438559398                   0
    2019-07-08T09:05:00+0200              600124           592579242              426720           421356704                   0
    2019-07-08T09:06:00+0200              606179           598555994              461778           455984029                   0
    2019-07-08T09:07:00+0200              601860           594294746              428422           423034922                   0
    2019-07-08T09:08:00+0200              603490           595902290              474911           468950668                   0
    2019-07-08T09:09:00+0200              602521           594955778              481977           475913196                   0
    2019-07-08T09:10:00+0200              604696           597083899              385932           381093727                   0

***** Done

Evaluate use of blobs as storage

Hey! Also from a discussion that started during oneweek: maybe having blobs as part of the samples would be nice, since some people also use them for storing raw data such as logs. What do you think about that?

Adding scenarios that each sample fits best

Hey! As we were discussing during oneweek, it would be nice to document some scenarios that best fit each of the samples. What do you think about it? Thanks a lot!

Permission Denied on 05-report-throughput.sh

I am running through the eventhubs-streamanalytics-cosmosdb sample and get the error below when I run the create-solution.sh script. Any idea how I can fix it?

CC @yorek

thanks.

***** [M] Starting METRICS reporting
./create-solution.sh: line 208: ./05-report-throughput.sh: Permission denied
There was an error, execution halted
Error at line 208
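A likely fix (an assumption on my part, not a resolution confirmed in the repo): restore the execute bit on the script, which can be lost depending on how the repository was copied onto the machine:

chmod +x 05-report-throughput.sh
# or, for every script in the sample folder:
chmod +x ./*.sh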

Add checkpointing and deduplication in eventhubs-databricks-azuresql

Stopping and restarting the job results in:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 7 in stage 8.0 failed 4 times, most recent failure: Lost task 7.3 in stage 8.0 (TID 158, 10.139.64.9, executor 1): com.microsoft.sqlserver.jdbc.SQLServerException: Violation of PRIMARY KEY constraint 'pk__rawdata'. Cannot insert duplicate key in object 'dbo.rawdata'. The duplicate key value is (2284bd09-4619-49ea-93d9-0015d9372ab2, 2).
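A minimal sketch of the usual Structured Streaming remedy, assuming Spark on Azure Databricks: enable checkpointing so a restarted job resumes from its last committed offsets instead of replaying old events, and write idempotently against the primary key. The checkpoint path and the writeToAzureSql helper are hypothetical; `deduped` is an eventId-deduplicated streaming DataFrame as sketched earlier in this page:

import org.apache.spark.sql.DataFrame

// hypothetical helper: idempotent MERGE into dbo.rawdata keyed on eventId
def writeBatch(batch: DataFrame, batchId: Long): Unit =
  writeToAzureSql(batch)

val query = deduped.writeStream
  // Resume from committed offsets after a restart instead of replaying.
  .option("checkpointLocation", "dbfs:/checkpoints/eventhubs-databricks-azuresql")
  .foreachBatch(writeBatch _)
  .start()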

Curious why no Python samples here?

Hello,

I'm curious why there are no Python samples in this repo.

  • Speed?
  • Complexity?
  • Just haven't made any Python samples yet?
  • Other?

Thank you

akskafka-databricks-cosmosdb fails with helm3

configmap/container-azm-ms-agentconfig created
deploying Helm
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Error: unknown command "init" for "helm"

Did you mean this?
lint

Run 'helm --help' for usage.
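For context (the repo's eventual fix may differ): `helm init` and the server-side Tiller component it installed were removed in Helm 3, so Helm 2-era scripts fail with exactly this error. With Helm 3 the Tiller setup steps can be dropped and charts installed directly; names below are placeholders:

# Helm 3: no Tiller and no `helm init`; install the chart directly
helm upgrade --install kafka <repo-name>/<chart> --namespace kafka --create-namespace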

Fix rate of generated events

Rate of generated events sometimes varies from target. E.g. below, IncomingMessages should be 600k per minute but is actually around 400k

cd eventhubs-streamanalytics-eventhubs
./create-solution.sh -d algatkssc29 -t 10 -l westeurope
[...]
starting to monitor locusts for 20 seconds... 
locust is sending 956.6666666666666 messages/sec
locust is sending 1697.8 messages/sec
locust is sending 2426.714285714286 messages/sec
locust is sending 3016.8888888888887 messages/sec
locust is sending 4457.5 messages/sec
locust is sending 5731.3 messages/sec
locust is sending 6912.6 messages/sec
locust is sending 7900.7 messages/sec
locust is sending 8935.6 messages/sec
locust is sending 9352.6 messages/sec
monitoring done
locust monitor available at: http://51.105.174.161:8089

***** [M] Starting METRICS reporting
Event Hub capacity: 10 throughput units (this determines MAX VALUE below).
Reporting aggregate metrics per minute, offset by 1 minute, for 30 minutes.
                                IncomingMessages       IncomingBytes    OutgoingMessages       OutgoingBytes   ThrottledRequests
                                ----------------       -------------    ----------------       -------------  ------------------
                   MAX VALUE              600000           600000000             2457600          1200000000                   -
                                ----------------       -------------    ----------------       -------------  ------------------
    2019-07-08T07:34:27+0200                   0                   0                   0                   0                   0
    2019-07-08T07:35:00+0200                   0                   0                   0                   0                   0
    2019-07-08T07:36:00+0200              296922           596714414              291020           287368004                   0
    2019-07-08T07:37:00+0200              435748           881065804              415067           409844384                   0
    2019-07-08T07:38:00+0200              522704          1062592640              508476           502081594                   0
    2019-07-08T07:39:00+0200              496339          1007033882              481689           475630885                   0
    2019-07-08T07:40:00+0200              375336           744465210              347935           343561428                 487
    2019-07-08T07:41:00+0200              390234           776925168              365353           360760676                 115
    2019-07-08T07:42:00+0200              412583           831936433              396489           391500889                  25
    2019-07-08T07:43:00+0200              476078           955091213              451125           445464583                   0

Drill sample not working

Selecting data based on sample queries produces errors:

jdbc:drill:zk=local> select * from azure.`algatssp01ingest/algatssp01ingest-2/2019_07_07_21_11_49_0.avro` limit 10;
Error: INTERNAL_ERROR ERROR: Avro union type must be of the format : ["null", "some-type"]

More info on the sample

Hi,

We are using the Flink Table API; is that compatible with the Flink ApplicationInsightsReporter sample?

Or is this meant to be used with a job that uses the Streaming API and implements RichFunction? For example, creating an instance of ApplicationInsightsReporter and, inside (let's say) the map function, sending the metric data using ApplicationInsightsReporter?

Regards
Suleiman

Add "processedAt" field

In the Databricks samples, the "processedAt" field is missing. It should contain the UTC datetime of when the row was processed by the streaming engine.
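A minimal sketch of adding such a field in the Databricks samples (where exactly it belongs in each pipeline is an assumption):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.current_timestamp

// Stamp each row with the time at which the streaming engine processed it;
// Spark stores TimestampType values internally as UTC.
def withProcessedAt(events: DataFrame): DataFrame =
  events.withColumn("processedAt", current_timestamp())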

Eventhub-Function-SQL deployment issues in functions

Hello,

I tried to deploy, but keep getting the issue below. It appears to occur while deploying the function. Please let me know.
Tried with deployment command: ./create-solution.sh -d evfnsql -l westeurope

For incremental deployment: ./create-solution.sh -d evfnsql -l westeurope -s PTMV
Error is in the script: https://github.com/Azure-Samples/streaming-at-scale/blob/main/eventhubs-functions-azuresql/StreamingProcessor-AzureSQL-Test0/StreamingProcessor-AzureSQL/Test0.cs
Output:

Deployment started...

***** [C] Setting up COMMON resources

***** [I] Setting up INGESTION

***** [D] Setting up DATABASE

***** [P] Setting up PROCESSING
creating app service plan
. name: evfnsqlprocessplan
creating function app
. name: evfnsqlprocess
WARNING: No functions version specified so defaulting to 2. In the future, specifying a version will be required. To create a 2.x function you would pass in the flag --functions-version 2
WARNING: Application Insights "evfnsqlprocess" was created for this Function App. You can visit https://portal.azure.com/#resource/subscriptions/........ to view your Application Insights component
building function app
. path: ./StreamingProcessor-AzureSQL-Test0/StreamingProcessor-AzureSQL
There was an error, execution halted
Error at line 25

Add CosmosDB via MongoDB API sample

As per the Event Hubs Kafka samples, please add support for Cosmos DB via the MongoDB API:

  • eventhubs-functions-cosmosdbmongodb
  • eventhubskafka-functions-cosmosdbmongodb
