dialogflow / dialogflow-apple-client

iOS SDK for Dialogflow

Home Page: http://api.ai

License: Apache License 2.0

Objective-C 89.18% C 0.91% Swift 4.82% Ruby 0.54% Objective-C++ 3.26% C++ 1.29%

dialogflow-apple-client's Introduction

DEPRECATED Objective-C (Cocoa) SDK for api.ai

Deprecated
This Dialogflow client library and Dialogflow API V1 have been deprecated and will be shut down on October 23rd, 2019. Please migrate to Dialogflow API V2.



Overview

The API.AI Objective-C (Cocoa) SDK makes it easy to integrate speech recognition with the API.AI natural language processing API on Apple devices. API.AI lets you use voice commands and integrate with the dialog scenarios defined for a particular agent in API.AI.

Prerequisites

Running the Demo app

  • Run pod update in the ApiAIDemo project folder.

  • Open ApiAIDemo.xcworkspace in Xcode.

  • In ViewController's -viewDidLoad, insert your API key:

    configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN";
    

    Note: an agent must already exist in api.ai. Keys can be obtained on the agent's settings page.

  • Define sample intents in the agent.

  • Run the app in Xcode. Inputs are possible with text and voice (experimental).

Integrating into your app

1. Initialize CocoaPods

  • Update your Podfile to include:

    pod 'ApiAI'
    
  • Run pod install in your project folder.
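For reference, a minimal Podfile for this setup might look like the following (the platform version and target name here are assumptions; adjust them to your project):

```ruby
platform :ios, '8.0'

target 'YourApp' do
  # use_frameworks! is required if your target contains Swift code
  use_frameworks!

  pod 'ApiAI'
end
```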

2. Init the SDK.

In AppDelegate.h, add the ApiAI.h import and a property:

#import <ApiAI/ApiAI.h>

@property(nonatomic, strong) ApiAI *apiAI;

In AppDelegate.m, add:

  self.apiAI = [[ApiAI alloc] init];

  // Define API.AI configuration here.
  id <AIConfiguration> configuration = [[AIDefaultConfiguration alloc] init];
  configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN_HERE";

  self.apiAI.configuration = configuration;
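If your app delegate is written in Swift instead, the equivalent initialization can be sketched as follows (a sketch assuming the pod is imported as a module; the names mirror the shared-instance API used in the Swift demo):

```swift
import ApiAI

// In application(_:didFinishLaunchingWithOptions:):
let configuration = AIDefaultConfiguration()
configuration.clientAccessToken = "YOUR_CLIENT_ACCESS_TOKEN_HERE"
ApiAI.shared().configuration = configuration
```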

3. Perform a request.

...
// Request using text (assumes that speech recognition / ASR is done using a third-party library, e.g. AT&T)
AITextRequest *request = [_apiAI textRequest];
request.query = @[@"hello"];
[request setCompletionBlockSuccess:^(AIRequest *request, id response) {
    // Handle success ...
} failure:^(AIRequest *request, NSError *error) {
    // Handle error ...
}];

[_apiAI enqueue:request];
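For Swift projects, the same text request can be sketched as follows, modeled on the shared-instance, mapped-response usage that appears elsewhere in this SDK's demo code (a sketch, not verified against every SDK version):

```swift
let request = ApiAI.shared().textRequest()
request?.query = "hello"

request?.setMappedCompletionBlockSuccess({ (request, response) in
    let response = response as! AIResponse
    // Handle success; e.g. read the agent's reply text:
    print(response.result.fulfillment.speech)
}, failure: { (request, error) in
    // Handle error
    print(error ?? "unknown error")
})

ApiAI.shared().enqueue(request)
```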

How to make contributions?

Please read and follow the steps in the CONTRIBUTING.md.

License

See LICENSE.

Terms

Your use of this sample is subject to, and by using or downloading the sample files you agree to comply with, the Google APIs Terms of Service.

This is not an official Google product.

dialogflow-apple-client's People

Contributors

artemgoncharuk


dialogflow-apple-client's Issues

Demo Watch OS Swift not compiling

I get this error when I try to compile in Xcode 7.3.1:

/api-ai-ios-sdk-master/ApiAIDemoWatchOSSwift/Pods/Headers/Private/ApiAI/AIVoiceRequestButton.h:32:35: Cannot find interface declaration for 'UIControl', superclass of 'AIVoiceRequestButton'

Does this have anything to do with the bridging header? I noticed there was none and tried to fix it, but without success. :(

Can't build ApiAIDemo

Hello.

I have followed the instructions in the README and run "pod update" in the folder, but Xcode fails to compile with the error
"unknown type name 'AIResponse'".
I see that the reason is that the AIResponse.h file is simply not included in ApiAI.h; namely, its import is wrapped in `#if __has_include("AIResponse.h")`, which evaluates to false for some reason.
Could you please clarify this situation: why is the import of this header wrapped in this conditional, and what is its purpose?

How to do account linking in the API.AI framework and get the user's email ID using the iOS framework?

Hi, I am currently working on Google AI for a smart home application, where I need user details in order to persist the user's latest request in the cloud.

It works pretty well in the Google Assistant app, but while integrating API.AI I am not able to find any way to link a Google account to the skill I am accessing.

Can anyone help me find a way to meet this requirement?

Thanks in advance,

App Store rejects application over IPv6

Hi,
I uploaded an iOS application after making some changes and got the rejection error below.
"Please run your app on a device while connected to an IPv6 network (all apps must support IPv6) to identify any issues, then revise and resubmit your app for review."

Any solution?

iOS demo application gives the error "Could not recognize the text from the voice data" every time

Hi.

I have two questions about the API.AI iOS demo application.

  1. Regarding the compilation issue:
    I downloaded the iOS API.AI SDK from GitHub and updated to 0.4.6 using pod update. I faced a compilation error, "- (AIVoiceRequest *)voiceRequest" not declared,
    because the value of the flag "AI_SUPPORT_VOICE_REQUEST" is 0. I fixed it by changing the flag to 1 and compiling.

    Does this mean the API.AI iOS SDK does not support voice requests in this release?

  2. Regarding recognition results:

    After running the demo, "Simple Voice Request" returns "Could not recognise the text from the voice data" every time.

    Please let me know what the quality of iOS ASR for API.AI is.

Response as below:

{
    id = "9bffc17a-b717-4d49-8afa-003bce8d3054";
    status = {
        code = 400;
        errorDetails = "Could not recognize the text from the voice data.";
        errorType = "bad_request";
    };
    timestamp = "2015-12-09T06:37:22.128Z";
}

How to integrate this API manually in the latest Xcode?

Can you please suggest how to integrate this framework into the latest Xcode? My issue is that I want to add the Objective-C and Swift APIs to the same project. The project is in Objective-C, and if I add both of them via pods, I get conflicts. Can you please give steps for integrating manually with the latest Xcode?
Thanks!
Thanks!

Swift Demo Project

When attempting to use the "Voice Button Request", nothing happens and there is an error logged to the console:

2017-01-02 01:45:02.102002 ApiAIDemoSwift[917:151587] [] nw_endpoint_handler_add_write_request [1.1 52.222.241.69:443 failed socket-flow (satisfied)] cannot accept write requests
2017-01-02 01:45:02.102678 ApiAIDemoSwift[917:151588] [] __tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
2017-01-02 01:45:06.285527 ApiAIDemoSwift[917:151589] [] nw_endpoint_handler_add_write_request [2.1 52.222.241.69:443 failed socket-flow (satisfied)] cannot accept write requests
2017-01-02 01:45:06.286643 ApiAIDemoSwift[917:151589] [] nw_endpoint_handler_add_write_request [2.1 52.222.241.69:443 failed socket-flow (satisfied)] cannot accept write requests
2017-01-02 01:45:06.288061 ApiAIDemoSwift[917:151586] [] __tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument

Assuming voice has been deprecated, it might be useful to remove `Voice Button Request` from the demo.

How to pass parameters like location latitude, longitude, and the original request

@matthewayne @sstepashka @artemgoncharuk I recently updated this framework, and it now has the AIOriginalrequest class. I want to send the location latitude, longitude, and AIOriginalrequest along with the API.AI conversation request. I am already sending conversation text to the API.AI server via the request; now I want to pass latitude, longitude, and AIOriginalrequest (what kind of data type is it?) in the request. Can anyone guide me on how to do that?

Builds but won't run in simulator

I can get this project to build successfully, but the simulator doesn't seem to want to open, and it's not recognizing my iPhone 5s as a device. Are there special requirements or limitations I should be aware of in general?

Voice-to-text conversion not working

Hi,

I ran the sample code with my client token. Text requests work fine, but speech-to-text gives:
"Error Domain=NSURLErrorDomain Code=-1017 "cannot parse response" UserInfo=0x1700f3700 {NSUnderlyingError=0x17004efd0 "The operation couldn’t be completed. (kCFErrorDomainCFNetwork error -1017.)", NSErrorFailingURLStringKey=https://api.api.ai/v1/query%3Fv=20150910, NSErrorFailingURLKey=https://api.api.ai/v1/query%3Fv=20150910, _kCFStreamErrorDomainKey=4, _kCFStreamErrorCodeKey=-1, NSLocalizedDescription=cannot parse response}"

Sometimes: error code 403, "Api.Ai speech recognition is going to be deprecated soon."

Note: Xcode 7.1, tested on an iPad Air running iOS 8.0.

Please help me. Thanks in advance.

Updated instructions for Swift 2.0 and iOS 9.0?

Hello,

Great product! It would be fantastic if you could update the installation instructions and the demo files (inside ApiAIDemoSwift) for Swift 2.0 and iOS 9.0. Namely, here are a few things which I noticed:

  • It's never mentioned that we should import AVFoundation in order for these two lines to work:
    AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)
    AVAudioSession.sharedInstance().setActive(true, error: nil)

  • Also, in the current version of Swift, the two methods above give errors because the parameters are no longer valid.
  • In the other instructions, it is only specified to put the code in AppDelegate.swift, but where in AppDelegate.swift?

let request = self.apiai.requestWithType(AIRequestType.Text) as AITextRequest

  • In the instruction above, requestWithType() is deprecated. What is the new method?
  • The Podfile uses old syntax and assumes iOS 7.0.
  • Perhaps you should mention that we have to rename the Bundle Identifier in order to avoid code-signing errors.
  • Finally, an import ApiAI statement is required in many of the classes, as well as an import statement for MBProgressHUD.
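For reference, the Swift 2 equivalents of the two AVAudioSession calls above use do/try/catch, since the `error:` parameter was replaced by throwing methods (a sketch against the Swift 2-era API):

```swift
import AVFoundation

do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}
```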

Thanks! :)

Pulse animation issue while recording voice

In the demo app (iOS), I can see the pulse animation effect around the microphone image on the Voice Button Request screen, but when I implement the same thing in my own app I am unable to get the same effect. Can you please help me sort this out? I also occasionally get a request timeout issue.

Client access Token

Can you add instructions for getting the client access token? I have looked all over the UI for the account and it's not anywhere obvious.

Unauthorized

I'm trying out the demo app and voice requests return unauthorized. I inserted the subscription key and client token into ViewController.m.

Speech Recognition Error

I downloaded the SDK for iOS and imported the ApiAiDemoSwift workspace into Xcode. I receive the error "Error Domain=voice.detection.error Code=1 "(null)"" when I run Voice Button Request.
The other two modules, "Simple Voice Request" and "Text Request", are working fine.

Getting Empty string in fulfillment response speech

Hi,

I am getting "" (an empty string) in the speech field of the fulfillment response.

In Dialogflow Console:
Here is the dialogflow request & response: http://www.mocky.io/v2/5ce6c3f73300001693731829

Screen Shot 2019-05-23 at 10 55 14 AM

In iOS Application:
Screen Shot 2019-05-23 at 11 07 44 AM

Just to compare the request and response, we observed the following:

User says "add milk bread sugar"

On Mobile Request and Response:
http://www.mocky.io/v2/5ce6d2f2330000f28d731880

On Google dialogflow console:
http://www.mocky.io/v2/5ce6d3053300007282731881

APIAIDemoSwift is not compiling

Xcode: 8.2.1
OS X: 10.11.6

Here is what I receive after pod install:

```

Undefined symbols for architecture x86_64:
  "_OBJC_CLASS_$_AIResponseParameter", referenced from:
      objc-class-ref in TextRequestViewController.o
  "_OBJC_CLASS_$_AIResponse", referenced from:
      objc-class-ref in TextRequestViewController.o
  "_OBJC_CLASS_$_ApiAI", referenced from:
      objc-class-ref in AppDelegate.o
      objc-class-ref in TextRequestViewController.o
  "_OBJC_CLASS_$_MBProgressHUD", referenced from:
      objc-class-ref in TextRequestViewController.o
  "_OBJC_CLASS_$_AIDefaultConfiguration", referenced from:
      objc-class-ref in AppDelegate.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```

Missing 'data' from webhook fulfillment

I cannot access the data property from the fulfillment in the webhook response; it only contains 'speech' and 'messages'. I forked the repo and added a 'data' parameter to AIFulfillmentResponse.h and AIFulfillmentResponse.m accordingly. If you agree, I will submit a pull request.

@property(nonatomic, copy, readonly) NSDictionary *data;

ApiAI/ApiAI.h not found

I tried to run ApiAIDemo; however, the build fails with an error on ApiAI/ApiAI.h.

The header is in ../ApiAI/Classes. I simply copied these files to ApiAIDemo/ApiAI, but it still does not find them.

ApiAIDemoSwift build error

screen shot 2017-04-05 at 6 24 18 pm
I was trying to run ApiAIDemoSwift and got the attached error. The README says to run pod update, but when I run this command in the terminal it never ends; I waited an hour and then closed it. pod install works, but when I try to run the project it gives me the error. Please help me get the Swift demo running.
Thanks

Using API.AI with Kony

Hi, I am looking to integrate this with a Kony application. Is there a framework version of the SDK, instead of using pods?

ApiAi import in custom Framework: No such module 'ApiAI'

Hi @artemgoncharuk,
ApiAiDemoSwift works fine, but I modified the project because I am trying to create my own custom framework (I would like to use a Siri intent rather than speech recognition).

this is my project:
screen shot 2017-08-22 at 10 15 01

In SharedFramework I have this Swift file:
screen shot 2017-08-22 at 10 16 54

and this is my problem in the ShareCode.swift file:
screen shot 2017-08-22 at 10 17 58

In the project's Build Settings I have this path:
screen shot 2017-08-22 at 10 20 36

and in SharedFramework I have this linked library:
screen shot 2017-08-22 at 10 22 23

Does anyone know where the problem is?

Recommended approach for always listening?

Could you document the recommended approach for using this SDK in an always-listening scenario, where requests are periodically sent silently in the background?

Add Note in Readme about RecognitionEngine

The Android SDK says:

Currently, speech recognition is performed using Google's Android SDK, either on the client device or in the cloud. Recognized text is passed to the API.AI through HTTP requests. Also you can try Speaktoit recognition engine (Use AIConfiguration.RecognitionEngine.Speaktoit).

However, there is no similar note for the iOS SDK. What engine is used on iOS? Is it Speaktoit?

Voice recognition in iOS

Hi all,

There is no tutorial or guidance on how to do voice-to-text conversion using api.ai in iOS. The sample app shows only text input, not voice. Please let me know whether this SDK supports voice-to-text and how to enable it.

Xcode build failed

When I run the demo app, the Xcode build fails.

Here is the error:

PhaseScriptExecution 📦\ Check\ Pods\ Manifest.lock /Users/akhil/Library/Developer/Xcode/DerivedData/ApiAIDemo-drfpihcqbhrzajezfuvzlakufbuo/Build/Intermediates/ApiAIDemo.build/Debug-iphonesimulator/ApiAIDemo.build/Script-C56E6EE44C68444733D450D2.sh
    cd /Users/akhil/Documents/api-ai-ios-sdk-master/ApiAIDemo
    /bin/sh -c /Users/akhil/Library/Developer/Xcode/DerivedData/ApiAIDemo-drfpihcqbhrzajezfuvzlakufbuo/Build/Intermediates/ApiAIDemo.build/Debug-iphonesimulator/ApiAIDemo.build/Script-C56E6EE44C68444733D450D2.sh

diff: /../Podfile.lock: No such file or directory
diff: /Manifest.lock: No such file or directory
error: The sandbox is not in sync with the Podfile.lock. Run 'pod install' or update your CocoaPods installation.

Dialogflow hotel booking iOS app

I tried this:
let request = ApiAI.shared().textRequest()

    if let text = self.messageField.text, text != "" {
        request?.query = text
    } else {
        return
    }
    
    request?.setMappedCompletionBlockSuccess({ (request, response) in
        let response = response as! AIResponse
        if let textResponse = response.result.fulfillment.messages {
            let textRespoArray = textResponse[0] as! NSDictionary
            print(textResponse)
            self.speechAndText(text:textRespoArray.value(forKey: "speech") as! String)
        }
    }, failure: { (request, error) in
        print(error!)
    })
    
    ApiAI.shared().enqueue(request)
    messageField.text = ""

and it worked perfectly until a couple of days ago; now, any time I make the request, it returns this error:

finished with error - code: -1001

2019-01-18 17:16:28.189377+0000 Chip- Hotel Booking Chatbot[628:7511] Task .<1> HTTP load failed (error code: -999 [1:89])

Error Domain=NSURLErrorDomain Code=-1001 "The request timed out." UserInfo={NSUnderlyingError=0x604000245070 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 "(null)" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, NSErrorFailingURLStringKey=https://api.api.ai/v1/query..., NSErrorFailingURLKey=https://api.api.ai/v1/query..., _kCFStreamErrorDomainKey=4, _kCFStreamErrorCodeKey=-2102, NSLocalizedDescription=The request timed out.}

Any help?
