Operationalizing Machine Learning in Azure

This project uses the Bank Marketing dataset, which relates to the direct marketing campaigns of a Portuguese banking institution. The classification goal is to predict whether a client will subscribe to a term deposit. Microsoft Azure is used to configure a cloud-based machine learning production model, deploy the best model, and consume the model endpoints. In addition to Azure ML Studio, the Azure Python SDK is used to create, publish and consume a pipeline.

The project is structured around the following steps:

  1. Authentication
  2. Automated ML Experiment
  3. Deploy the best model
  4. Enable logging
  5. Swagger Documentation
  6. Consume model endpoints
  7. Create and publish a pipeline
  8. Documentation

Architectural Diagram


In this project, both the Python SDK and Azure Machine Learning Studio are used to perform the tasks. The architecture starts with authenticating to Azure Machine Learning Services, then creating an AutoML experiment and deploying the best model. Next, logging is enabled so that important log information can be reviewed. After that, the model endpoints are consumed. Then an Azure pipeline is created, published and consumed. The final stage is the documentation, which acts as a project report.

Key Steps

Step 1: Automated ML Experiment

At this point, security is enabled and authentication is completed. In this step, an experiment is created using Azure Automated ML. In order to run the experiment, a compute cluster must also be configured.
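
Although this step can be performed entirely in the Studio UI, the same experiment can be scripted with the Python SDK. The following is a minimal sketch; the cluster name, experiment name, and dataset name ("bankmarketing") are assumptions, and the VM size and timeout should be adjusted to your quota:

```python
from azureml.core import Dataset, Experiment, Workspace
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()  # reads config.json from the current directory

# Provision a compute cluster for the experiment (name and sizing are assumptions).
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2", max_nodes=4)
compute_target = ComputeTarget.create(ws, "automl-cluster", compute_config)
compute_target.wait_for_completion(show_output=True)

# The registered Bank Marketing dataset; "bankmarketing" is an assumed name.
dataset = Dataset.get_by_name(ws, name="bankmarketing")

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="accuracy",
    training_data=dataset,
    label_column_name="y",  # target column of the Bank Marketing dataset
    compute_target=compute_target,
    experiment_timeout_minutes=30,
    n_cross_validations=5)

run = Experiment(ws, "bank-marketing-automl").submit(automl_config)
run.wait_for_completion(show_output=True)
```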

The following screenshot shows the datasets registered for the experiment:


The following screenshot shows the experiment status completed:


The following screenshot shows the list of different models used throughout the experiment:


The following screenshot shows the best model after the experiment is completed:


Step 2: Deploy the Best Model

The best model is set for deployment from the best model's Details tab. Deploying the best model exposes an HTTP API service that allows interaction with the model by sending data over POST requests.
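
Deployment was done through the Studio UI here, but the equivalent SDK route looks roughly like the sketch below. It reuses ws and run from the sketch in Step 1; the model and endpoint names are assumptions, and score.py is the scoring script AutoML generates in the best run's outputs:

```python
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

# Retrieve and register the best model from the AutoML run above.
best_run, fitted_model = run.get_output()
model = best_run.register_model(model_name="bank-marketing-model")

# score.py is the scoring script generated by AutoML for the best run.
inference_config = InferenceConfig(
    entry_script="score.py",
    environment=best_run.get_environment())

# Deploy to Azure Container Instances with key-based authentication.
aci_config = AciWebservice.deploy_configuration(
    cpu_cores=1, memory_gb=1, auth_enabled=True)

service = Model.deploy(ws, "bank-marketing-endpoint", [model],
                       inference_config, aci_config)
service.wait_for_deployment(show_output=True)
```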

The following screenshot shows the real-time endpoint created after the best model is deployed:


Step 3: Enable Logging

After the Best Model is deployed, Application Insights must be enabled to retrieve logs.
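
The provided logs.py script is not reproduced here, but its core is a small SDK call; a minimal sketch, assuming the endpoint name from the deployment above:

```python
from azureml.core import Workspace
from azureml.core.webservice import Webservice

ws = Workspace.from_config()
# "bank-marketing-endpoint" is the assumed deployment name from Step 2.
service = Webservice(workspace=ws, name="bank-marketing-endpoint")

# Enable Application Insights on the deployed endpoint.
service.update(enable_app_insights=True)

# Retrieve and print the container logs for the service.
print(service.get_logs())
```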

Initially, Application Insights is disabled on the endpoint. It is enabled by executing the logs.py Python script. The following screenshot shows Application Insights enabled in the Details tab of the endpoint after running logs.py:


The following screenshot shows the logs produced by running the provided logs.py script:


Step 4: Swagger Documentation

In this step, the deployed model is consumed using Swagger. Azure provides a swagger.json file for deployed models in the Endpoints section.
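
The swagger.json file can be copied from the URI shown in the endpoint's Details tab, or fetched programmatically through the service's swagger_uri property; a minimal sketch, assuming the endpoint name used above:

```python
import requests
from azureml.core import Workspace
from azureml.core.webservice import Webservice

ws = Workspace.from_config()
service = Webservice(workspace=ws, name="bank-marketing-endpoint")

# Deployed models expose the location of their swagger.json file.
resp = requests.get(service.swagger_uri)
with open("swagger.json", "w") as f:
    f.write(resp.text)
```

The downloaded file is then served locally so that a local Swagger UI instance can render the API's methods and responses.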

The following screenshots show Swagger running on localhost with the HTTP API methods and responses for the model:



Step 5: Consume Model Endpoints

Once the model is deployed, the endpoint.py script is used to interact with the trained model by sending an appropriate JSON payload.
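
endpoint.py follows the usual scoring pattern; a minimal sketch, with placeholder URI and key (copy the real values from the endpoint's Consume tab) and a record truncated to a few features for brevity (the real payload must contain every feature column the model expects):

```python
import json
import requests

# Placeholders: copy the real scoring URI and primary key from the
# endpoint's Consume tab in Azure ML Studio.
scoring_uri = "http://<your-service>.azurecontainer.io/score"
key = "<your-primary-key>"

# One sample record, truncated to a few Bank Marketing features.
data = {"data": [{
    "age": 40,
    "job": "blue-collar",
    "marital": "married",
    "education": "university.degree",
    "default": "no"}]}

headers = {"Content-Type": "application/json",
           "Authorization": f"Bearer {key}"}

resp = requests.post(scoring_uri, data=json.dumps(data), headers=headers)
print(resp.json())  # prints the model's prediction
```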

The following screenshot shows the endpoint.py script running against the API and producing the JSON output from the model:


Step 6: Create, Publish and Consume Pipeline

For this step, the provided Jupyter Notebook is used with the appropriate keys, URI, dataset, cluster and model names created previously.

This step is performed entirely using the Python SDK, so config.json must be present in the current directory of the notebook in order to properly configure the workspace and account subscription. The following screenshot shows the pipeline created after defining the pipeline steps in the notebook:
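
The notebook's pipeline section boils down to wrapping the AutoML configuration in a pipeline step, submitting it, and publishing the result; a minimal sketch, reusing the automl_config from Step 1 (the step, experiment, and pipeline names are assumptions):

```python
from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import AutoMLStep

ws = Workspace.from_config()  # requires config.json in the current directory

# Wrap the AutoML configuration from Step 1 in a pipeline step.
automl_step = AutoMLStep(name="automl_module",
                         automl_config=automl_config,
                         allow_reuse=True)

pipeline = Pipeline(workspace=ws, steps=[automl_step])
pipeline_run = Experiment(ws, "bank-marketing-pipeline").submit(pipeline)
pipeline_run.wait_for_completion(show_output=True)

# Publish the pipeline so it can be triggered through a REST endpoint.
published = pipeline_run.publish_pipeline(
    name="bank-marketing-pipeline",
    description="AutoML pipeline for the Bank Marketing dataset",
    version="1.0")
print(published.endpoint)  # the REST endpoint shown as ACTIVE in Studio
```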


The following screenshot shows the Pipeline Endpoint after successful deployment:


The following screenshot shows the Published Pipeline Overview with a REST Endpoint and status of ACTIVE:


The following screenshot shows the Run Details of the published pipeline:



Screen Recording

The screencast of the project demo can be viewed here.

Standout Suggestions

  1. Perform hyperparameter tuning by training a model with HyperDrive (see the sketch after this list).
  2. Enable Deep Learning in Classification while creating the AutoML experiment.
  3. Perform data preprocessing such as feature selection by observing the influence of features on different models.
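
As an illustration of the first suggestion, a HyperDrive run could be configured roughly as follows. This is a sketch only: train.py is a hypothetical training script that accepts --C and --max_iter arguments and logs a metric named "Accuracy", and the compute target is the cluster created earlier:

```python
from azureml.core import Experiment, ScriptRunConfig, Workspace
from azureml.train.hyperdrive import (BanditPolicy, HyperDriveConfig,
                                      PrimaryMetricGoal,
                                      RandomParameterSampling, choice, uniform)

ws = Workspace.from_config()

# train.py is a hypothetical script that accepts --C and --max_iter
# and logs a metric named "Accuracy".
src = ScriptRunConfig(source_directory=".",
                      script="train.py",
                      compute_target="automl-cluster")

# Randomly sample the regularization strength and iteration budget.
param_sampling = RandomParameterSampling({
    "--C": uniform(0.01, 10.0),
    "--max_iter": choice(50, 100, 200)})

hyperdrive_config = HyperDriveConfig(
    run_config=src,
    hyperparameter_sampling=param_sampling,
    policy=BanditPolicy(slack_factor=0.1, evaluation_interval=2),
    primary_metric_name="Accuracy",
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=20)

run = Experiment(ws, "bank-marketing-hyperdrive").submit(hyperdrive_config)
run.wait_for_completion(show_output=True)
```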
