alan-ai / alan-sdk-ionic
Conversational AI SDK for Ionic to enable text and voice conversations with actions (React, Angular, Vue)
Home Page: https://alan.app
Example: Turkish.
Is it possible to increase the time that Alan listens, to allow for longer input?
For example, we have an intent to capture a larger chunk of free text. The intent is defined like this:
intent(vSupportScreen, 'My message is $(TICKET* (.+))', async p => {
    p.play(`You've said: ${p.TICKET.value}`);
    ...
});
During testing we have observed that users will often pause briefly while talking, for example when ending one sentence and starting another. In that pause Alan stops listening and starts processing the input. So we were wondering if it is possible to increase the amount of time Alan listens before it decides the input has ended and starts processing.
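We are not aware of a public setting for the end-of-speech timeout, so one workaround (our own sketch, not an Alan SDK feature) is to accumulate the message across several utterances inside a follow context until the user says a closing phrase such as "that's all". The accumulation step is plain state handling and could look like this; the helper names are ours, only `intent`/`context` from the script above would be Alan APIs:

```javascript
// Sketch of accumulating a long free-text message across several
// utterances. In an Alan script you would call appendChunk() from an
// intent inside a context, and finish when the user says a closing
// phrase. These helpers are illustrative only, not part of the SDK.
function newTicketState() {
  return { chunks: [], done: false };
}

function appendChunk(state, utterance) {
  const text = utterance.trim();
  if (/^(that's all|done|send it)$/i.test(text)) {
    state.done = true;           // closing phrase ends the dictation
  } else if (text.length > 0) {
    state.chunks.push(text);     // keep collecting partial sentences
  }
  return state;
}

function ticketText(state) {
  return state.chunks.join(' ');
}
```

The user can then pause as long as they like between sentences, because each utterance is matched and appended separately instead of relying on one long listening window.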
Hello.
I have a question regarding handling production releases. What is your "best practice" or advice when releasing new production versions?
For example, we release a mobile app with the Alan SDK key xxxx/prod and select the v1 version of the Alan scripts in the production environment. Users can now use the mobile app with the v1 scripts.
Then we continue development and create a v2 of the Alan scripts that is compatible with the next release of our mobile app. When we release the new app version, we have both users on the old app and users who have upgraded. If we switch the Alan production environment to the v2 scripts, the new app works fine, but the old app version now also uses the v2 scripts, which can be incompatible. Until all users have updated the app, we have to support two versions of the scripts in parallel (probably more if we don't force users to update).
How do you recommend tackling this? Should it be handled in the scripts themselves or through some project setting?
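One approach (an assumption on our part, not official Alan guidance) is to have the app report its version to the scripts, for example via the visual state sent from the client, and branch on it so a single script version can serve both old and new apps. The version comparison itself is plain JS; `p.visual.appVersion` in the comment below is a hypothetical field the app would have to send:

```javascript
// Sketch: gate new behaviour on the app version reported by the client.
// Assumes the app sends something like { appVersion: '2.1.0' } in its
// visual state; the gating helper itself is plain JS.
function isAtLeast(version, min) {
  const a = String(version || '0').split('.').map(Number);
  const b = String(min).split('.').map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0, y = b[i] || 0;
    if (x !== y) return x > y;
  }
  return true; // equal versions
}

// Inside an intent handler you could then branch, e.g.:
// if (isAtLeast(p.visual.appVersion, '2.0.0')) { /* v2 behaviour */ }
```

This keeps one deployed script version but makes incompatible changes opt-in per app version, at the cost of carrying both code paths until the old app dies out.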
When we call callProjectApi() from the Ionic application to execute a command in the Alan scripts, the callback doesn't get executed when running on an Android or iOS device (or simulator). It does work when running in the browser (ionic serve).
For example, we call Alan's greetUser function from Ionic:
this.alanButtonRef.componentOnReady().then(async () => {
    try {
        if (!await this.alanButtonRef.isActive()) {
            console.log('Calling: ' + 'await this.alanButtonRef.activate();');
            await this.alanButtonRef.activate();
        }
        console.log('Calling: ' + 'this.alanButtonRef.callProjectApi(\'greetUser\', {}, ...');
        this.alanButtonRef.callProjectApi('greetUser', {}, (error, result) => {
            console.log(error, 'error');
            console.log(result, 'result');
        });
    } catch (e) {
        console.log('EXCEPTION in sendGreeting()');
        console.log(e);
    }
});
The callback function never gets called, those console logs are never printed, and no exception is caught. When testing on an Android device we didn't see anything in the logs that would indicate the problem. When running on iOS, however, there is a log entry that could point to it:
To Native Cordova -> alanVoice isActive alanVoice1282791712 ["options": []]
⚡️ [log] - Calling: await this.alanButtonRef.activate();
To Native Cordova -> alanVoice activate alanVoice1282791713 ["options": []]
⚡️ [log] - Calling: this.alanButtonRef.callProjectApi('greetUser', {}, ...
To Native Cordova -> alanVoice callProjectApi alanVoice1282791714 ["options": [greetUser, {
}]]
⚡️ [log] - Error in Success callbackId: alanVoice1282791714 : TypeError: second argument to Function.prototype.apply must be an Array-like object (evaluating 'callback.apply(this, ...arguments)')
⚡️ [error] - {}
⚡️ ------ STARTUP JS ERROR ------
⚡️ TypeError: second argument to Function.prototype.apply must be an Array-like object (evaluating 'callback.apply(this, ...arguments)')
⚡️ URL: capacitor://localhost/polyfills-es2015.js
⚡️ polyfills-es2015.js:3737:36
⚡️ See above for help with debugging blank-screen issues
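Reading the trace, `callback.apply(this, ...arguments)` spreads the arguments into `apply`, so its second parameter becomes the first callback argument rather than an array-like, which is exactly the TypeError reported. That points at a bug in the bridge/polyfill layer rather than in the app code, so it likely needs a plugin-side fix. In the meantime, a Promise wrapper around the callback-style call at least centralizes error handling in the app (a sketch, not a fix for the native bridge):

```javascript
// Promise wrapper around the callback-style callProjectApi().
// This does not by itself fix the bridge-side apply() bug seen in the
// log, but it keeps app code tidy and surfaces errors in one place.
function callProjectApiAsync(alanButton, method, params) {
  return new Promise(function (resolve, reject) {
    alanButton.callProjectApi(method, params || {}, function (error, result) {
      if (error) {
        reject(error);
      } else {
        resolve(result);
      }
    });
  });
}

// Usage (inside an async method):
// const result = await callProjectApiAsync(this.alanButtonRef, 'greetUser');
```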
We have noticed that when our app is in landscape mode, a strange panel shows up on the left side of the screen. The panel only appears when the Alan button is enabled. I've attached screenshots. The panel can be scrolled a bit and shows an animation of a dog in its top part. This only happens on iOS, both in the simulator and on devices. It's not part of the webview; it overlays it.
Could this be some part of your SDK or its libraries? I can't find the origin of the panel, but it consistently appears only when the Alan button is visible.
We have noticed that as of today (21 April), clearing the context programmatically no longer works, and an error message is displayed in Alan Studio: resolve() - globalContext can't be resolved.
The code that we use is:
projectAPI.greetUser = async function(p, param, callback) {
    p.play('Would you like me to give you a short overview of this month\'s statistics for your site?');
    const answer = await p.then(getYesNo);
    if (answer === 'yes') {
        playStatistics(p);
    } else if (answer === 'no') {
        p.play('OK, never mind then.');
    } else {
        // Cancelled context.
    }
    callback(null, 'ok');
};
// Context that locks user into yes/no answer.
const getYesNo = context(() => {
    title('Statistics overview');
    intent('(yes|sure|please|yes please|ok)', p => p.resolve('yes'));
    intent('(no|nope|never|cancel)', p => p.resolve('no'));
    fallback('Please say yes or no if you would like to hear a short overview of statistics.');
});
projectAPI.clearContextGlobal = function(p, param, callback) {
    try {
        p.resolve('cancel'); // Cancel yes/no context if it was active.
    } catch (e) {}
    callback(null, 'ok');
};
From the mobile app we call the greetUser function, which locks the user into a yes/no context. But if the user performs some action in the app, we must start another Alan dialog, so we call the clearContextGlobal function to close the yes/no context.
This used to work, but no longer does: the context now stays active. Has there been any change in the way contexts work? I've read through the documentation on contexts, but I can't find anything that must be done differently.
The error message in Alan Studio (resolve() - globalContext can't be resolved) appears at the same time as we call the clearContextGlobal function from the mobile app, so I presume the two are connected.
Sometimes when running the project on an Android device, the app crashes as soon as the Alan button is initialized. It doesn't happen consistently: on the same device it will sometimes work and sometimes crash. We observed this behaviour when running an Ionic debug build from Android Studio on a physical device (Nokia 8 TA-1004, running Android 9).
When the crash happens, we have observed two different errors in Android Studio's run log.
If there's any other information we can provide to help you locate the problem, please tell us.
We have encountered an issue where voice input is not recognized on specific Android devices. When Alan is listening, nothing happens, as if the voice is not being recorded. No number of app re-installs or different builds has helped; the issue is always present.
Affected devices are:
On a Samsung S20 FE we tested your reference app "Alan Playground" (installed from Google Play), and voice input doesn't work in your app either. We will also test your app on a Note 10 device at a later time and update the issue once we have results.
I am attaching a logcat capture from running our Ionic app on the Samsung S20 FE. In the log I've marked the approximate point where the user started speaking (line 416). A bit above that, on line 398, is the app's log entry from when the button state callback fired with state LISTEN, so Alan started listening somewhere around there.
samsung_s20_fe_dg1_app_logcat-edit.log
Ionic info:
Ionic:
Ionic CLI : 5.4.16
Ionic Framework : @ionic/angular 5.3.2
@angular-devkit/build-angular : 0.803.29
@angular-devkit/schematics : 8.3.29
@angular/cli : 8.3.29
@ionic/angular-toolkit : 2.3.3
Capacitor:
Capacitor CLI : 2.4.0
@capacitor/core : 2.4.0
Utility:
cordova-res : not installed
native-run (update available: 1.5.0) : 1.3.0
System:
NodeJS : v10.18.1
npm : 6.13.4
OS : macOS Big Sur
Is it possible to adjust the score at which an intent is matched?
We have observed cases where multiple intents are available, the user says something completely different, and it still gets matched to some intent with a low score (e.g. 0.6).
As another example, our scripts lock the user into a yes/no context (with a fallback). At that point we expect a clear yes or no from the user, and anything else should trigger the fallback message. During testing we observed that random words or sentences will sometimes get matched to yes/no with a low score. Could we set a score threshold for specific intents?
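We are not aware of a per-intent threshold setting. If the platform exposes the match score on the handler parameter (the field name `p.score` below is an assumption on our part; please verify against the current Alan docs), you could guard handlers yourself with a wrapper like this:

```javascript
// Hypothetical guard: reject matches below a confidence threshold.
// Assumes the handler parameter carries a numeric match score in
// p.score -- verify the field name against the current Alan docs.
function withMinScore(threshold, handler) {
  return function (p) {
    if (typeof p.score === 'number' && p.score < threshold) {
      p.play('Sorry, I did not catch that. Please say yes or no.');
      return;
    }
    return handler(p);
  };
}

// Usage inside a context:
// intent('(yes|sure|ok)', withMinScore(0.8, p => p.resolve('yes')));
```

If the score is not exposed, the wrapper degrades gracefully and simply always runs the handler.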
Can I hide the button on particular screens in an Android app built with ionic-angular?
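Since the alan-button is a regular web component in the page, one option is simply to toggle its CSS display per screen, for example from a page's ionViewWillEnter / ionViewWillLeave hooks. A minimal sketch, written framework-agnostically by passing the element in (in the app you would obtain it with document.querySelector('alan-button') or a ViewChild reference):

```javascript
// Show or hide the <alan-button> element for a particular screen.
// Call from a page's lifecycle hooks, e.g.
//   ionViewWillEnter() { setAlanButtonVisible(this.btnEl, false); }
function setAlanButtonVisible(buttonEl, visible) {
  if (!buttonEl) {
    return false;               // button not rendered (yet)
  }
  buttonEl.style.display = visible ? '' : 'none';
  return true;
}
```

Hiding via CSS keeps the Alan session alive; if you also want the voice session stopped on those screens, you would additionally call the button's deactivate() method shown elsewhere in this thread.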