ibm / watson-and-salesforce

In this code pattern we provide a full roadmap for how to sign up for the Salesforce platform and access Watson APIs via the Watson Salesforce SDK.

Home Page: https://developer.ibm.com/patterns/integrate-watson-ai-into-salesforce-apps/

License: Apache License 2.0

salesforce watson-api ibmcode watson-tone-analyzer watson-visual-recognition watson-discovery watson-salesforce-sdk watson-sdk


WARNING: This repository is no longer maintained ⚠️

This repository will not be updated. The repository will be kept available in read-only mode.

Using the Watson Salesforce SDK to leverage Watson APIs in your Salesforce app

In this code pattern we will use the new Watson Salesforce SDK to interact with various Watson APIs from Apex, a Salesforce-specific programming language. The Watson Salesforce SDK currently supports:

  • IBM Conversation V1 (now known as Assistant)
  • IBM Discovery V1
  • IBM Language Translator V2
  • IBM Natural Language Classifier V1
  • IBM Natural Language Understanding V1
  • IBM Personality Insights V3
  • IBM Speech To Text V1
  • IBM Text To Speech V1
  • IBM Tone Analyzer V3
  • IBM Visual Recognition V3

When the reader has completed this code pattern, they will understand how to:

  • Create a Salesforce account.
  • Install the Watson Salesforce SDK.
  • Use the Watson Salesforce SDK to make calls to Watson Visual Recognition, Watson Discovery, and Watson Tone Analyzer APIs.
  • Use the Lightning UI debugger to view the results of the APIs.

Flow

  1. User logs into the Salesforce platform and brings up the Developer Console.
  2. User writes Apex code in the Developer Console using the Watson Salesforce SDK.
  3. User executes the Apex code, which calls the Watson APIs.
  4. The Watson API results are returned to the Salesforce Developer Console debugger.

Included components

  • Watson Visual Recognition: Understands the contents of images: it tags visual concepts, finds human faces, estimates age and gender, and finds similar images in a collection.
  • IBM Watson Discovery: A cognitive search and content analytics engine for applications to identify patterns, trends, and actionable insights.
  • IBM Watson Tone Analyzer: Uses linguistic analysis to detect communication tones in written text.

Featured Technologies

  • Salesforce DX: An integrated, end-to-end platform designed for high-performance agile development.
  • Apex: A strongly typed, object-oriented programming language that allows developers to execute flow and transaction control statements on the Lightning Platform server, in conjunction with calls to the API.

Watch the Video

Steps

Prerequisites

Configure the SFDX CLI

NOTE: This step can be run locally.

To use the Watson Salesforce SDK we have to upload its contents to the Salesforce DX platform. To accomplish this, we first clone the repository, then convert the contents to the Metadata API format that the Salesforce DX platform uses, and finally push the converted contents to our authenticated user session. Note that ideally a specific release should be used when cloning the repository, but for this code pattern we will use the master branch.

First we clone the repo and change into its directory.

git clone https://github.com/watson-developer-cloud/salesforce-sdk
cd salesforce-sdk

Use the sfdx force:auth:web:login command from a shell to authenticate as a specific user; the shell session then holds a form of delegated authorization for that user. The command launches a browser pointed at the Salesforce DX platform and prompts the user to sign in. Upon signing in successfully, a confirmation message is printed to the shell.

$ sfdx force:auth:web:login -s
Successfully authorized [email protected] with org ID 00D6A000001izOFUAY
You may now close the browser

Now that we have the SDK cloned locally and have authenticated as a user, we can convert the SDK content to the Metadata API format expected by the Salesforce DX platform using the sfdx force:source:convert command.

$ sfdx force:source:convert -d mdapioutput
Source was successfully converted to Metadata API format and written to the location: /Users/stevemar/workspace/salesforce-sdk/mdapioutput

Lastly, we use the sfdx force:mdapi:deploy command to deploy the converted contents to our authenticated session.

➜  salesforce-sdk git:(master) sfdx force:mdapi:deploy -d mdapioutput/ -w 100
Job ID | 0Af4R00000J7YZPSA3
MDAPI PROGRESS | ████████████████████████████████████████ | 95/95 Components

Though not strictly necessary, it is handy, especially when debugging, to view the organizations tied to a specific account using the sfdx force:org:list command.

$ sfdx force:org:list
     ALIAS   USERNAME           ORG ID              CONNECTED STATUS
───  ──────  ─────────────────  ──────────────────  ────────────────
(D)  DevHub  [email protected]  00D0Y000001LvYSUA0  Connected

Now that our SFDX CLI is set up we can proceed to the next section where we allow our Salesforce account to call out to Watson APIs.

Allow remote calls from Salesforce DX

Before we start making API calls to Watson we need to configure one final thing in the Salesforce DX platform: we need to allow remote calls to the Watson API URLs. To do this, visit the Remote Site Settings section of the Salesforce DX platform. Click the gear icon in the top-right corner of the screen, then click Setup. In the left sidebar, expand the Security menu and click Remote Site Settings.

There should be a single entry for making calls to the ApexDevNet URL http://www.apexdevnet.com. We will add to this list the Watson API URLs that the SDK needs to reach; for completeness, the details for each entry are shown in the screenshots.

Add a new entry for each URL using the New Remote Site button. The Remote Site Name field for each entry is not important, but the Remote Site URL field must match one of the following:

  • https://gateway.watsonplatform.net
  • https://api.us-south.discovery.watson.cloud.ibm.com
  • https://api.us-south.tone-analyzer.watson.cloud.ibm.com
  • https://iam.cloud.ibm.com

Here is an example of adding a remote site.

Once all remote sites are added we are ready to start making API calls!

Using the Watson Salesforce SDK

REMINDER: Ensure the Watson services you intend to use are provisioned in IBM Cloud before proceeding!

Go to the Developer Console

Log into Salesforce DX by going to https://login.salesforce.com, then click the gear icon at the top right and select the Developer Console option. This brings up a new window that, among many other things, allows a user to run snippets of Apex code in a debugger.

Now we're ready to run some code! From the Developer Console, click the Debug menu and then Open Execute Anonymous Window to bring up a new window where a user can execute Apex code. From here, follow along in the sections below and click the Execute button.

A few tips:

  • Be sure to select the Open Log checkbox.
  • Be sure to remove any old code from the editor before pasting in the next example.
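Before moving on, it can help to run a quick sanity check that the SDK classes were deployed to your org. The snippet below is a minimal sketch that only instantiates an authenticator and a Discovery client; the API key is a placeholder you would replace with your own credentials.

// Sanity check: confirm the Watson Salesforce SDK classes are available in this org.
// The API key below is a placeholder, not a real credential.
IBMWatsonAuthenticator authenticator = new IBMWatsonIAMAuthenticator('<your-iam-apikey>');
IBMDiscoveryV1 discovery = new IBMDiscoveryV1('2019-04-30', authenticator);
System.debug('Watson Salesforce SDK classes resolved: ' + discovery);

If these classes cannot be resolved, revisit the sfdx force:mdapi:deploy step above.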

Watson Discovery News

We'll start by making a few Watson Discovery calls to the Discovery News service. Copy the following code block into the Apex editor, update the API key passed to IBMWatsonIAMAuthenticator with the credentials from your own provisioned Watson Discovery service, and execute the code.

The code block below does the following: 1) it creates an IAM authenticator from your API key, 2) it creates a Watson Discovery client with that authenticator, 3) it lists the available Discovery environments, 4) it queries the Discovery News collection for articles about IBM, and 5) it logs each response as debug content.

IBMWatsonAuthenticator authenticator = new IBMWatsonIAMAuthenticator('msOrPGdhQxxxxxxxxxx4W2q74');
IBMDiscoveryV1 discovery = new IBMDiscoveryV1('2019-04-30', authenticator);

// configuring options for listing environments
IBMDiscoveryV1Models.ListEnvironmentsOptions options =
  new IBMDiscoveryV1Models.ListEnvironmentsOptionsBuilder()
    .build();

// list discovery environments
IBMDiscoveryV1Models.ListEnvironmentsResponse environmentList = discovery.listEnvironments(options);
System.debug(environmentList);

// query about IBM in the discovery news data set
IBMDiscoveryV1Models.QueryOptions query_options 
  = new IBMDiscoveryV1Models.QueryOptionsBuilder()
    .environmentId('system')
    .collectionId('news')
    .naturalLanguageQuery('IBM')
    .count(5)
    .build();
IBMDiscoveryV1Models.QueryResponse response = discovery.query(query_options);
System.debug(response);

The results should look like the screenshot below. We can see that the first result is an article about an Offering Management Lead for Automotive and Connected Vehicles in IBM's Watson IoT Business Unit, along with other fields returned by the server, such as publication_date and sentiment.

To keep ourselves honest we made the same call with the tooling available on IBM Cloud. Note the same results are returned in the screenshot below.
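In Execute Anonymous, a failed callout (for example, a missing Remote Site Settings entry or an invalid API key) surfaces as a raw exception in the debug log. The sketch below is one way you might wrap the same Discovery News query with basic error handling and serialize the response for easier reading; the generic Exception catch and JSON.serializePretty call are standard Apex, but whether pretty-printing is the most useful way to inspect every SDK model is an assumption, not part of the code pattern.

// A minimal sketch: the same Discovery News query with basic error handling.
// The API key is a placeholder; replace it with your own credentials.
IBMWatsonAuthenticator authenticator = new IBMWatsonIAMAuthenticator('<your-iam-apikey>');
IBMDiscoveryV1 discovery = new IBMDiscoveryV1('2019-04-30', authenticator);
try {
  IBMDiscoveryV1Models.QueryOptions queryOptions =
    new IBMDiscoveryV1Models.QueryOptionsBuilder()
      .environmentId('system')
      .collectionId('news')
      .naturalLanguageQuery('IBM')
      .count(5)
      .build();
  IBMDiscoveryV1Models.QueryResponse response = discovery.query(queryOptions);
  // Pretty-printing the model makes fields such as publication_date and
  // sentiment easier to pick out in the debug log.
  System.debug(JSON.serializePretty(response));
} catch (Exception e) {
  // Typical causes: the Watson URL is missing from Remote Site Settings, or the API key is wrong.
  System.debug('Watson Discovery call failed: ' + e.getMessage());
}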

Watson Visual Recognition (Deprecated)

Note: Watson Visual Recognition is discontinued. Existing instances are supported until 1 December 2021, but as of 7 January 2021, you can't create instances. Any instance that is provisioned on 1 December 2021 will be deleted. Please view the Maximo Visual Inspection trial as a way to get started with image classification.

Now we'll make a simple Watson Visual Recognition call using an image available online. Copy the following code block into the Apex editor, update the API key passed to IBMWatsonIAMAuthenticator with the value from your own provisioned Watson Visual Recognition service, and execute the code.

The code block below is similar to our Watson Discovery example with a few minor changes. It does the following: 1) it creates an IAM authenticator from your API key, 2) it creates a Watson Visual Recognition client with that authenticator, 3) it calls the Watson Visual Recognition service with the URL of an image of an apple, and 4) it logs the response as debug content.

IBMWatsonAuthenticator authenticator = new IBMWatsonIAMAuthenticator('pS0nVDU8Nxxxxxxxxx35L4WEuhO');
IBMVisualRecognitionV3 visualRecognition = new IBMVisualRecognitionV3('2018-03-19', authenticator);

// classify an image, an apple
IBMVisualRecognitionV3Models.ClassifyOptions options = new IBMVisualRecognitionV3Models.ClassifyOptionsBuilder()
  .url('https://upload.wikimedia.org/wikipedia/commons/f/f4/Honeycrisp.jpg')
  .build();
IBMVisualRecognitionV3Models.ClassifiedImages resp = visualRecognition.classify(options);
System.debug('IBMVisualRecognitionV3FTest.testClassify(): ' + resp);

The results should look like the screenshot below. We can see that the Watson Visual Recognition service believes the object in the image to be an apple with a confidence of 0.957 (out of 1). We also see a few other results returned from the server with varying levels of confidence.

To keep ourselves honest we can try the same image with a free online demo of Watson Visual Recognition available at https://visual-recognition-demo.ng.bluemix.net. Note that the same results are returned as with the Watson Salesforce SDK.
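If you want to classify more than one image, the same calls can be made in a loop over a list of image URLs. The sketch below is only an illustration; the API key is a placeholder and the second URL is a hypothetical example you would replace with a real image.

// Sketch: classify several images by URL, reusing the calls shown above.
// The API key is a placeholder and the second image URL is hypothetical.
IBMWatsonAuthenticator authenticator = new IBMWatsonIAMAuthenticator('<your-iam-apikey>');
IBMVisualRecognitionV3 visualRecognition = new IBMVisualRecognitionV3('2018-03-19', authenticator);

List<String> imageUrls = new List<String>{
  'https://upload.wikimedia.org/wikipedia/commons/f/f4/Honeycrisp.jpg',
  'https://example.com/another-image.jpg' // hypothetical placeholder URL
};

for (String imageUrl : imageUrls) {
  IBMVisualRecognitionV3Models.ClassifyOptions options =
    new IBMVisualRecognitionV3Models.ClassifyOptionsBuilder()
      .url(imageUrl)
      .build();
  IBMVisualRecognitionV3Models.ClassifiedImages resp = visualRecognition.classify(options);
  System.debug(imageUrl + ' => ' + resp);
}

Keep in mind that each iteration makes a separate HTTP callout, and a single Apex transaction is limited to 100 callouts.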

Watson Tone Analyzer

Alright, one last example, this time using Watson Tone Analyzer. Copy the code block below into the Apex editor, update the API key passed to IBMWatsonIAMAuthenticator with the value from your own provisioned Watson Tone Analyzer service, and execute the code.

The code block below is similar to our other examples with a few minor changes. It does the following: 1) it creates an IAM authenticator from your API key, 2) it creates a Watson Tone Analyzer client with that authenticator, 3) it calls the Watson Tone Analyzer service with the phrase "We have a better product. We need to do better selling", and 4) it logs the response as debug content.

IBMWatsonAuthenticator authenticator = new IBMWatsonIAMAuthenticator('twCyzte3u4xxxxxxxxxxx8mPu72FjW4');
IBMToneAnalyzerV3 toneAnalyzer = new IBMToneAnalyzerV3('2017-09-21', authenticator);

IBMToneAnalyzerV3Models.ToneOptions options = new IBMToneAnalyzerV3Models.ToneOptionsBuilder()
    .text('We have a better product. We need to do better selling')
    .addTones('social')
    .sentences(false)
    .contentLanguage('en')
    .acceptLanguage('en')
    .build();

IBMToneAnalyzerV3Models.ToneAnalysis resp = toneAnalyzer.tone(options);
System.debug('IBMToneAnalyzerV3FTest.testTone(): ' + resp);

The results should look like the screenshot below. We can see that the Watson Tone Analyzer service finds the phrase to have an analytical tone, with a score of 0.8298 (out of 1).

To keep ourselves honest we can try the same phrase with a free online demo of Watson Tone Analyzer available at https://tone-analyzer-demo.ng.bluemix.net. Note that the same results are returned as with the Watson Salesforce SDK.
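Because the point of the SDK is to use Watson from inside Salesforce apps, a natural next step is to analyze text that already lives in your org, such as a Case description. The sketch below is an assumption about how you might combine a simple SOQL query with the Tone Analyzer call shown above; it is not part of the code pattern itself, and it requires at least one Case with a non-empty Description in the org.

// Sketch: analyze the tone of an existing Case description.
// The API key is a placeholder; replace it with your own credentials.
IBMWatsonAuthenticator authenticator = new IBMWatsonIAMAuthenticator('<your-iam-apikey>');
IBMToneAnalyzerV3 toneAnalyzer = new IBMToneAnalyzerV3('2017-09-21', authenticator);

// Fetch one Case that has some text to analyze.
List<Case> cases = [SELECT Id, Description FROM Case WHERE Description != null LIMIT 1];
if (cases.isEmpty()) {
  System.debug('No Case with a Description found; nothing to analyze.');
} else {
  IBMToneAnalyzerV3Models.ToneOptions options =
    new IBMToneAnalyzerV3Models.ToneOptionsBuilder()
      .text(cases[0].Description)
      .sentences(false)
      .build();
  IBMToneAnalyzerV3Models.ToneAnalysis resp = toneAnalyzer.tone(options);
  System.debug('Tone for Case ' + cases[0].Id + ': ' + resp);
}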

Summary

Congratulations! Now go on and make awesome Salesforce apps that use IBM Watson APIs! We hope that the new SDK will make it easy to integrate Watson into your Salesforce apps by offering a simple, consistent interface.

If you're interested in exploring further or would like some resources to reference in the future see the References section below.

References

Labs

Blogs

Troubleshooting Salesforce

Learn more

  • Artificial Intelligence Code Patterns: Enjoyed this Code Pattern? Check out our other AI Code Patterns.
  • AI and Data Code Pattern Playlist: Bookmark our playlist with all of our Code Pattern videos
  • With Watson: Want to take your Watson app to the next level? Looking to utilize Watson Brand assets? Join the With Watson program to leverage exclusive brand, marketing, and tech resources to amplify and accelerate your Watson embedded commercial solution.

License

This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.

Apache Software License (ASL) FAQ

Contributors

dolph, horeaporutiu, imgbot[bot], jamaya2001, ljbennett62, stevemar, stevemart, tqtran7

