endoplasmic / google-assistant
A node.js implementation of the Google Assistant SDK
License: MIT License
When setting encodingOut to 'MP3', the responses are still in raw format.
The problem appears to be at lines 32 and 34 of conversation.js.
When validating the parameter, the function iterates over embeddedAssistant.AudioInConfig.Encoding; I believe this should be embeddedAssistant.AudioOutConfig.Encoding.
Would you like a pull request?
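A minimal sketch of the suggested change, under the assumption that the validation helper looks roughly like this (the function name and the mock shapes below are illustrative, not the actual conversation.js code):

```javascript
// Hypothetical validator: check encodingOut against the *output* encodings.
// Iterating AudioInConfig.Encoding instead means an output-only value such
// as 'MP3' fails validation and the response stays in the raw default format.
function isValidEncodingOut(embeddedAssistant, encoding) {
  return Object.keys(embeddedAssistant.AudioOutConfig.Encoding).includes(encoding);
}
```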
Is setting the device location broken? Notice that the code is commented out:
// dialogStateIn.setDeviceLocation
https://github.com/endoplasmic/google-assistant/blob/master/components/conversation.js
When trying to install google assistant I have the following errors:
➜ assistant-relay git:(master) npm install google-assistant
grpc@1.8.0 install /home/cben0ist/assistant-relay/node_modules/grpc
node-pre-gyp install --fallback-to-build --library=static_library
node-pre-gyp ERR! Tried to download(403): https://storage.googleapis.com/grpc-precompiled-binaries/node/grpc/v1.8.0/node-v64-linux-x64-glibc.tar.gz
node-pre-gyp ERR! Pre-built binaries not found for grpc@1.8.0 and node@10.2.1 (node-v64 ABI, glibc) (falling back to source compile with node-gyp)
node-pre-gyp ERR! Tried to download(undefined): https://storage.googleapis.com/grpc-precompiled-binaries/node/grpc/v1.8.0/node-v64-linux-x64-glibc.tar.gz
node-pre-gyp ERR! Pre-built binaries not found for grpc@1.8.0 and node@10.2.1 (node-v64 ABI, glibc) (falling back to source compile with node-gyp)
make: Entering directory '/home/cben0ist/assistant-relay/node_modules/grpc/build'
make: Entering directory '/home/cben0ist/assistant-relay/node_modules/grpc/build'
CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o
CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o
CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/backoff/backoff.o
sed: can't read ./Release/.deps/Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o.d.raw: No such file or directory
rm: cannot remove './Release/.deps/Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o.d.raw': No such file or directory
grpc.target.mk:386: recipe for target 'Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o' failed
make: *** [Release/obj.target/grpc/deps/grpc/src/core/lib/surface/init.o] Error 1
make: Leaving directory '/home/cben0ist/assistant-relay/node_modules/grpc/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:258:23)
gyp ERR! stack at ChildProcess.emit (events.js:182:13)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:237:12)
gyp ERR! System Linux 4.4.0-128-generic
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "build" "--fallback-to-build" "--library=static_library" "--module=/home/cben0ist/assistant-relay/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-glibc/grpc_node.node" "--module_name=grpc_node" "--module_path=/home/cben0ist/assistant-relay/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-glibc"
gyp ERR! cwd /home/cben0ist/assistant-relay/node_modules/grpc
gyp ERR! node -v v10.2.1
gyp ERR! node-gyp -v v3.6.2
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/usr/local/bin/node /usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --library=static_library --module=/home/cben0ist/assistant-relay/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-glibc/grpc_node.node --module_name=grpc_node --module_path=/home/cben0ist/assistant-relay/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-glibc' (1)
node-pre-gyp ERR! stack at ChildProcess. (/home/cben0ist/assistant-relay/node_modules/grpc/node_modules/node-pre-gyp/lib/util/compile.js:83:29)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:182:13)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:961:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:248:5)
node-pre-gyp ERR! System Linux 4.4.0-128-generic
node-pre-gyp ERR! command "/usr/local/bin/node" "/home/cben0ist/assistant-relay/node_modules/grpc/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build" "--library=static_library"
node-pre-gyp ERR! cwd /home/cben0ist/assistant-relay/node_modules/grpc
node-pre-gyp ERR! node -v v10.2.1
node-pre-gyp ERR! node-pre-gyp -v v0.6.39
node-pre-gyp ERR! not ok
Failed to execute '/usr/local/bin/node /usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --library=static_library --module=/home/cben0ist/assistant-relay/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-glibc/grpc_node.node --module_name=grpc_node --module_path=/home/cben0ist/assistant-relay/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-glibc' (1)
CXX(target) Release/obj.target/grpc/deps/grpc/src/core/lib/channel/channel_args.o
make: *** No rule to make target '../deps/grpc/src/core/lib/channel/channel_stack.cc', needed by 'Release/obj.target/grpc/deps/grpc/src/core/lib/channel/channel_stack.o'. Stop.
make: Leaving directory '/home/cben0ist/assistant-relay/node_modules/grpc/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:258:23)
gyp ERR! stack at ChildProcess.emit (events.js:182:13)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:237:12)
gyp ERR! System Linux 4.4.0-128-generic
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "build" "--fallback-to-build" "--library=static_library" "--module=/home/cben0ist/assistant-relay/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-glibc/grpc_node.node" "--module_name=grpc_node" "--module_path=/home/cben0ist/assistant-relay/node_modules/grpc/src/node/extension_binary/node-v64-linux-x64-glibc"
gyp ERR! cwd /home/cben0ist/assistant-relay/node_modules/grpc
gyp ERR! node -v v10.2.1
gyp ERR! node-gyp -v v3.6.2
gyp ERR! not ok
npm WARN [email protected] No repository field.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! grpc@1.8.0 install: `node-pre-gyp install --fallback-to-build --library=static_library`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the grpc@1.8.0 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
Any help is appreciated.
Thank you
Hi, I am using the console-input example
I am getting the error
TypeError: Cannot read property 'client_id' of undefined
Here is my auth file
{
  "key": {
    "client_id": "client_id",
    "project_id": "project_id",
    "auth_uri": "auth_uri",
    "token_uri": "token_uri",
    "auth_provider_x509_cert_url": "auth_provider_x509_cert_url",
    "client_secret": "client_secret"
  }
}
and here is my config:
const config = {
  auth: {
    keyFilePath: `${__dirname}/creds.json`,
    savedTokensPath: `${__dirname}/tokens/tokens.js`,
  },
  audio: {
    encodingIn: 'LINEAR16', // supported are LINEAR16 / FLAC (defaults to LINEAR16)
    sampleRateOut: 24000, // supported are 16000 / 24000 (defaults to 24000)
  },
};
Any help would be great
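For what it's worth, the wrapper key may be the problem: client-secret files downloaded from the Google Cloud console wrap the fields in an "installed" (or "web") object, and a custom wrapper such as "key" could leave the library reading client_id off undefined. A sketch of the usual shape, worth verifying against components/auth.js (values elided):

```javascript
// Typical shape of a downloaded OAuth client-secret file for an
// "other"/desktop client (values elided). Note the top-level "installed".
const credentials = {
  installed: {
    client_id: '<client_id>',
    project_id: '<project_id>',
    auth_uri: 'https://accounts.google.com/o/oauth2/auth',
    token_uri: 'https://accounts.google.com/o/oauth2/token',
    auth_provider_x509_cert_url: 'https://www.googleapis.com/oauth2/v1/certs',
    client_secret: '<client_secret>',
    redirect_uris: ['urn:ietf:wg:oauth:2.0:oob', 'http://localhost'],
  },
};
```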
I recently updated to v0.4.1 and I'm getting an error upon loading:
Error: ENOENT: no such file or directory, open '[path-to-my-projects-folder]\lib\google\assistant\embedded\v1alpha2\embedded_assistant.proto'
It appears that protobuf and/or proto-loader aren't resolving the path correctly. I don't know anything about protobuf, but I'm just making an observation.
The actual path should be [path-to-my-projects-folder]\[project-folder]\node_modules\google-assistant\lib\google\assistant\embedded\v1alpha2\embedded_assistant.proto
Anyone with any hint as to how to fix this? Is it just me?
P.S. It was working before I updated.
Hi, thanks for building this great module as I use it for home automation. I was wondering if there is any plan to update this package to v1alpha2 in order to send commands to google devices such as Google Home. "Hey Google, play Jazz on [Device Name]". Thank you!
Hi, great job guys on the implementation of the Google Assistant service! Loving it :)
I'm interested in both text and audio responses. I've realized that certain types of questions populate the text in the response event either partially or not at all.
Examples I found so far: for jokes, the response event only reports the last message. I understand that the supplemental_display_text of the DialogStateOut is not always meant to be the full transcript of the audio response, but I was wondering if there is something we can do to get the full jokes as text.
For responses that come back totally empty (like traffic-related questions) I could use the Google Cloud Speech API to do STT of the audio_data, at the expense of some extra round-trip time.
Any ideas guys around these two use cases? Thanks!!
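For the STT fallback, one approach is to buffer the audio-data chunks and hand the final buffer to a one-shot speech-to-text request once the conversation ends. A sketch, assuming `conversation` is the event emitter this package passes to assistant.start()'s callback (the STT call itself is left as a hypothetical hook):

```javascript
// Accumulate the Assistant's LINEAR16 audio-data chunks into a single
// Buffer, suitable for a one-shot STT request (e.g. the Cloud Speech API).
function collectAudio(conversation, onDone) {
  const chunks = [];
  conversation
    .on('audio-data', (data) => chunks.push(Buffer.from(data)))
    .on('ended', () => onDone(Buffer.concat(chunks)));
}
```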
Hello, I'm using this beautiful module for a personal project.
I want to take the audio bytes from the audio-data event and stream them as a ReadableStream.
I'm very new to Node.js streams, so any help is appreciated.
And if someone can tell me how to convert a PCM stream to a proper Google format, that would be great, because I'm getting "Service Unavailable" errors (seen here).
Thank you very much for reading!
Does it support local queries? Say I ask "any flower shops nearby".
I have set the device location, but the response was an empty result.
Code setting details:
conversation: {
  lang: 'en-US',
  deviceLocation: {
    coordinates: { // set the latitude and longitude of the device
      latitude: 3.1390,
      longitude: 101.6869,
    },
  },
  textQuery: 'any flower shops nearby',
},
We're running the following code to get the tokens. It used to work...
const config = {
  auth: {
    keyFilePath: path.resolve(__dirname, '../assets/google-client-secret.json'), // secret.json
    savedTokensPath: path.resolve(__dirname, '../assets/google-access-tokens.json'), // resources/tokens.js
  },
}

//const assistant = new GoogleAssistant(config)
// Not sure if this additional parameter is needed...
const assistant = new GoogleAssistant(config.auth, "MagicMirror")

assistant
  .on('ready', () => {
    assistant.start();
  })
  .on('started', function() {
    console.log("Google Authentication OK! Press CTRL+C to Quit.")
  })
  .on('error', (error) => {
    console.log('Google Authentication Error:', error)
  })
But after pasting the token, we are now getting the following error:
Google Authentication OK! Press CTRL+C to Quit.
/home/pi/MagicMirror/modules/MMM-Assistant/node_modules/google-assistant/index.js:26
if (callback) callback(assistant);
^
TypeError: callback is not a function
at Auth.GoogleAssistant.auth.on (/home/pi/MagicMirror/modules/MMM-Assistant/node_modules/google-assistant/index.js:26:19)
at Auth.emit (events.js:180:13)
at saveTokens (/home/pi/MagicMirror/modules/MMM-Assistant/node_modules/google-assistant/components/auth.js:32:10)
at oauthClient.getToken (/home/pi/MagicMirror/modules/MMM-Assistant/node_modules/google-assistant/components/auth.js:76:7)
at /home/pi/MagicMirror/modules/MMM-Assistant/node_modules/google-assistant/node_modules/google-auth-library/lib/auth/oauth2client.js:154:5
at Request._callback (/home/pi/MagicMirror/modules/MMM-Assistant/node_modules/google-assistant/node_modules/google-auth-library/lib/transporters.js:106:7)
at Request.self.callback (/home/pi/MagicMirror/modules/MMM-Assistant/node_modules/request/request.js:186:22)
at Request.emit (events.js:180:13)
at Request.<anonymous> (/home/pi/MagicMirror/modules/MMM-Assistant/node_modules/request/request.js:1163:10)
at Request.emit (events.js:180:13)
Any ideas how to fix this?
When the README Code is used, it throws the following error:
module.js:557
throw err;
^
Error: Cannot find module 'google-assistant'
at Function.Module._resolveFilename (module.js:555:15)
at Function.Module._load (module.js:482:25)
at Module.require (module.js:604:17)
at require (internal/module.js:11:18)
at Object.<anonymous> (/private/tmp/google-assistant/main.js:2:25)
at Module._compile (module.js:660:30)
at Object.Module._extensions..js (module.js:671:10)
at Module.load (module.js:573:32)
at tryModuleLoad (module.js:513:12)
at Function.Module._load (module.js:505:3)
In the conversation component, continueConversation is set to true when DIALOG_FOLLOW_ON is received, but it isn't set back to false when CLOSE_MICROPHONE is received, which I think it should be.
I'm having problems with the assistant sometimes continuing when it shouldn't, which I think could be related to this.
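A sketch of the proposed fix; the enum values below are placeholders, and the real handler in conversation.js will look different:

```javascript
// Reset the follow-on flag whenever the Assistant closes the microphone, so
// a stale `true` from an earlier DIALOG_FOLLOW_ON can't keep the mic open.
const MicrophoneMode = { DIALOG_FOLLOW_ON: 1, CLOSE_MICROPHONE: 2 }; // placeholder values

let continueConversation = false;
function handleMicrophoneMode(mode) {
  if (mode === MicrophoneMode.DIALOG_FOLLOW_ON) continueConversation = true;
  else if (mode === MicrophoneMode.CLOSE_MICROPHONE) continueConversation = false;
  return continueConversation;
}
```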
Hi.
I'm testing your code on my OSX laptop.
After finishing the install (including node-record-lpcm16 and node-speaker), I tried to test your mic-speaker.js.
I created and downloaded client_secret_xxx.json for auth.
[mic-speaker.js]
const config = {
  auth: {
    keyFilePath: './client_secret_xxxxxxx....apps.googleusercontent.com.json',
    savedTokensPath: './tokens.js', // where you want the tokens to be saved
  },
Is this right?
But when I try to run it (node mic-speaker.js), I get this error:
module.js:471
throw err;
^
Error: Cannot find module './client_secret_xxxxxxx.....apps.googleusercontent.com.json'
at Function.Module._resolveFilename (module.js:469:15)
at Function.Module._load (module.js:417:25)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
at new Auth (..../google-assistant/components/auth.js:27:15)
at new GoogleAssistant (..../google-assistant/index.js:16:16)
at Object.<anonymous> (..../google-assistant/examples/mic-speaker.js:76:19)
...
I guess this error indicates something wrong I've done. Does keyFilePath expect some kind of JS file, not JSON? I cannot find any instructions for this kind of file. Can you tell me more details?
The project I'm working on only requires the console input function, not the mic/speaker setup.
As it is now, I don't believe this can run on Windows due to the mic/speaker config. Is it possible to strip out all the mic/speaker stuff and have only the console commands, so it can run on Windows?
Hey guys, how can I implement an initial trigger in the mic-speaker.js example? I can run it without ending the conversation by using pm2, but I would like to implement a hotword to start the conversation. Any snippet or example would help a lot. Thanks!
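The glue between a wake-word engine and this package tends to be small: any detector that emits an event can kick off assistant.start(). A sketch; the 'hotword' event name and the startConversation callback are illustrative, not a specific library's API:

```javascript
// Wire a hotword detector (any EventEmitter that fires 'hotword') to a
// conversation starter; startConversation would wrap assistant.start().
function attachHotwordTrigger(detector, startConversation) {
  detector.on('hotword', () => startConversation());
  return detector;
}
```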
Is it possible to use multiple Google accounts by supporting multiple client files and tokens?
It would be nice if it were possible to cancel an ongoing request.
For example, if I say "hey google, ask me something..." and then want to cancel the trivia game, I should be able to say "hey google" again (which would cancel the ongoing request before creating a new one). Right now it just creates a new request, and then I get both audio outputs on my speakers.
Dear sir/madam,
I'm trying to build your library into an Electron Google Assistant desktop app. I finally got the authentication right thanks to Electron Google OAuth, which I made work with your package (forked here).
The problem is that I keep getting "Service unavailable." errors (see screenshot). I do finally have valid keys now thanks to Electron Google OAuth; here's an example of my filled-in tokens.js file:
{"access_token":"ya29.GlsTBccFq1uY1lvvptZT9SfYfRf4Wf51NjCcJJzTZ3WBIoxrAfA4PcBHdE34JQKHxxTAb56l4Yw234bqjJkdnFV0sX-ZF237VHGRa-4fetbtiNKqmWsSc2Az_PM","expires_in":3600,"refresh_token":"1/DRyw8o8OsQ-oGLp3o37BMsZSPZCTmn5egxADA2czfRUcrGz9_34_pgOwRTSWucDT","token_type":"Bearer"}
(I replaced some numbers so the token is invalid for the public.)
So it seems that it all works until I start a conversation; that's when I get the "Service unavailable" error.
Any tips on how I could debug this?
Fork of your Google Assistant library: https://github.com/WMisiedjan/google-assistant
( ^ included support for Electron Google OAuth )
Fork of Electron Google OAuth (minor bug fix for Electron):
https://github.com/WMisiedjan/electron-google-oauth
Hope someone could point me in the right direction to fix or debug this!
Would this library work with indefinite streamed audio such as music, or the radio?
Thanks!
I'm writing an Electron app and I'd like to let my users authorize Google Assistant from within my app, right now this package will automatically open a browser window, but I'd like instead to show the auth window in an in-app page.
Also, I'd then need a way for the user to paste the token in some UI of my app and forward it to this package.
How can I do this?
Is it possible to get a string of what the assistant responded with, besides just the user's transcription?
Any help guys I would appreciate it! Thanks!
if (error) throw new Error('Error getting tokens:', error);
^
Error: Error getting tokens:
at oauthClient.getToken (/home/pi/google-assistant-nodejs/node_modules/google-assistant/components/auth.js:73:24)
at /home/pi/google-assistant-nodejs/node_modules/google-auth-library/lib/auth/oauth2client.js:154:5
at Request._callback (/home/pi/google-assistant-nodejs/node_modules/google-auth-library/lib/transporters.js:106:7)
at Request.self.callback (/home/pi/google-assistant-nodejs/node_modules/request/request.js:186:22)
at Request.emit (events.js:180:13)
at Request.<anonymous> (/home/pi/google-assistant-nodejs/node_modules/request/request.js:1163:10)
at Request.emit (events.js:180:13)
at IncomingMessage.<anonymous> (/home/pi/google-assistant-nodejs/node_modules/request/request.js:1085:12)
at Object.onceWrapper (events.js:272:13)
at IncomingMessage.emit (events.js:185:15)
const record = require('node-record-lpcm16');
const Speaker = require('speaker');
const path = require('path');
const GoogleAssistant = require('google-assistant');
const speakerHelper = require('./speaker-helper');
const config = {
  auth: {
    keyFilePath: path.resolve(__dirname, '/home/pi/credentials.json'),
    savedTokensPath: path.resolve(__dirname, 'tokens.json'), // where you want the tokens to be saved
  },
  conversation: {
    audio: {
      sampleRateOut: 24000, // defaults to 24000
    },
    lang: 'en-US', // defaults to en-US, but try other ones, it's fun!
  },
};
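As an aside on why the thrown error above has no detail after "Error getting tokens:": the Error constructor ignores its second argument, so the underlying OAuth failure is silently dropped. A sketch of a more informative rethrow (illustrative, not the package's actual auth.js code):

```javascript
// Fold the underlying failure into the message instead of passing it as a
// second (ignored) argument to the Error constructor.
function rethrowTokenError(error) {
  if (error) throw new Error(`Error getting tokens: ${error.message || error}`);
}
```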
Does anyone have an alternative way to get the client json file from Google?
It seems they've changed their sign up process and things are a little bit different now..
[root@apu google-assistant]# node examples/mic-speaker.js
Say something!
node: symbol lookup error: /root/google-assistant/node_modules/grpc/src/node/extension_binary/grpc_node.node: undefined symbol: SSL_CTX_set_alpn_protos
Any ideas? Running on CentOS 7; it looks like the issue may be with the stupid old OpenSSL it uses.
Has anyone gotten this error when sending audio data?
Say something!
Conversation Error: { [AssertionError: Failure: Type not convertible to Uint8Array.]
message: 'Failure: Type not convertible to Uint8Array.',
reportErrorToServer: true,
messagePattern: 'Failure: Type not convertible to Uint8Array.' }
Conversation Error: { [Error: Serialization failure] code: 13, metadata: Metadata { _internal_repr: {} } }
Thanks so much for the great library. I'm running into a problem getting it working on a Raspberry Pi Zero W.
When running the mic-speaker example I'm seeing this error after it runs for a short moment. Nothing else is logged.
(With verbose:true passed into the record call)
Say something!
Recording with sample rate 16000...
Recording 65536 bytes
Recording 16384 bytes
Recording 16384 bytes
Conversation Error: { Error: 3 INVALID_ARGUMENT: Invalid 'audio_in': audio frame length is too long.
at Object.exports.createStatusError (/home/pi/madison/gassist/node_modules/grpc/src/common.js:87:15)
at ClientDuplexStream._emitStatusIfDone (/home/pi/madison/gassist/node_modules/grpc/src/client.js:235:26)
at ClientDuplexStream._receiveStatus (/home/pi/madison/gassist/node_modules/grpc/src/client.js:213:8)
at Object.onReceiveStatus (/home/pi/madison/gassist/node_modules/grpc/src/client_interceptors.js:1316:15)
at InterceptingListener._callNext (/home/pi/madison/gassist/node_modules/grpc/src/client_interceptors.js:590:42)
at InterceptingListener.onReceiveStatus (/home/pi/madison/gassist/node_modules/grpc/src/client_interceptors.js:640:8)
at /home/pi/madison/gassist/node_modules/grpc/src/client_interceptors.js:1136:18
code: 3,
metadata: Metadata { _internal_repr: {} },
details: 'Invalid \'audio_in\': audio frame length is too long.' }
Recording 16384 bytes
Recording 16384 bytes
Recording 4096 bytes
Recording 12288 bytes
var record = require('node-record-lpcm16')
var fs = require('fs')
var file = fs.createWriteStream('test.wav', { encoding: 'binary' })

const mic = record.start();
mic.on('data', data => {
  file.write(data);
})

// Stop recording after three seconds
setTimeout(function () {
  record.stop()
}, 3000)
The error implies that the recording rate isn't correct, but aside from threshold: 0, my test app records exactly the same way and the file is reported with a correct frame length. I'm assuming Google's "audio frame length" refers to the same value as "Rate" within aplay.
I'm on 0.5.1
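If the microphone hands over buffers larger than the service accepts for a single audio_in frame, splitting them before writing to the conversation may help. A sketch; the 1024-byte frame size is an assumption for illustration, not a documented limit:

```javascript
// Split a large recorded buffer into smaller audio_in frames before sending.
function* chunkAudio(buffer, frameSize = 1024) {
  for (let i = 0; i < buffer.length; i += frameSize) {
    yield buffer.slice(i, i + frameSize);
  }
}
```

Usage would be along the lines of `for (const frame of chunkAudio(data)) conversation.write(frame);` inside the mic's data handler.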
Hi,
First, thanks for this brilliant module. I am using it to create a plugin for https://github.com/matueranet/genie-router. I successfully authenticated and created a token. After that, in my plugin code the Assistant response only returned an empty string. So I attempted to run your cli-input.js example to see if that worked.
There the returned Assistant response was also empty; see output:
Type your request: hello
Assistant Response:
Conversation Complete
Any idea what's going on here?
Is it possible to write mp3 data to the assistant?
Is it possible to set the temperature unit (F/C) of the weather ?
When asking a question like "What is the weather now", the result is given in degrees Fahrenheit by default.
After I authenticate, it says:
Say something!
events.js:141
throw er; // Unhandled 'error' event
^
Error: spawn rec ENOENT
at exports._errnoException (util.js:837:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:178:32)
at onErrorNT (internal/child_process.js:344:16)
at doNTCallback2 (node.js:429:9)
at process._tickCallback (node.js:343:17)
What causes this?
(I'm running the mic and speaker example.)
The setup went quite smoothly, minus a nit, but when I want to start a conversation the assistant replies with something like "Actually, I need basic permissions. Just go to your Google Home app...". Could it be that the OAuth client tokens must be set up for web apps instead of "other"?
I'm trying to get up and running based on the instructions in the README, but I'm unable to. When running the first sample, it waits for 5 seconds or so and then shows this:
Service unavailable.
{ Error: Unknown Error.
at ClientDuplexStream._emitStatusIfDone (/Users/USER/Desktop/PROJECT/node_modules/grpc/src/node/src/client.js:270:19)
at ClientDuplexStream._receiveStatus (/Users/USER/Desktop/PROJECT/node_modules/grpc/src/node/src/client.js:248:8)
at /Users/USER/Desktop/PROJECT/node_modules/grpc/src/node/src/client.js:772:12
code: 2,
metadata: Metadata { _internal_repr: { 'content-disposition': [Object] } } }
Everything is entered exactly as is shown in the first example with the exception of the config which is as shown below:
const config = {
  auth: {
    keyFilePath: `${__dirname}/secret.json`,
    // where you want the tokens to be saved
    // will create the directory if not already there
    savedTokensPath: `${__dirname}/tokens/tokens.js`,
  }
};
I've tried the other demos as well, but I get the same error for both (console and mic):
Conversation Error: { AssertionError: Failure: Type not convertible to Uint8Array.
at new goog.asserts.AssertionError (/Users/USER/Desktop/PROJECT/node_modules/google-protobuf/google-protobuf.js:98:603)
Thoughts?
Nathans-Mac:google-assistant nathan$ node examples/mic-speaker.js
Say something!
events.js:182
throw er; // Unhandled 'error' event
^
Error: spawn rec ENOENT
at _errnoException (util.js:1041:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:192:19)
at onErrorNT (internal/child_process.js:374:16)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
at Function.Module.runMain (module.js:611:11)
at startup (bootstrap_node.js:158:16)
at bootstrap_node.js:598:3
Nathans-Mac:google-assistant nathan$ node examples/console-input.js
Type your request: Hello
Fetching TTS file...
Conversation Error: { AssertionError
at new goog.asserts.AssertionError (/Users/nathan/google-assistant/node_modules/google-protobuf/google-protobuf.js:98:603)
at Object.goog.asserts.fail (/Users/nathan/google-assistant/node_modules/google-protobuf/google-protobuf.js:100:89)
at Object.jspb.utils.byteSourceToUint8Array (/Users/nathan/google-assistant/node_modules/google-protobuf/google-protobuf.js:247:257)
at jspb.BinaryWriter.writeBytes (/Users/nathan/google-assistant/node_modules/google-protobuf/google-protobuf.js:285:83)
at Function.proto.google.assistant.embedded.v1alpha1.ConverseRequest.serializeBinaryToWriter (/Users/nathan/google-assistant/lib/google/assistant/embedded/v1alpha1/embedded_assistant_pb.js:1417:12)
at proto.google.assistant.embedded.v1alpha1.ConverseRequest.serializeBinary (/Users/nathan/google-assistant/lib/google/assistant/embedded/v1alpha1/embedded_assistant_pb.js:1394:60)
at requestSerialize (/Users/nathan/google-assistant/components/embedded-assistant.js:10:27)
at ClientDuplexStream.serialize (/Users/nathan/google-assistant/node_modules/grpc/src/node/src/common.js:53:12)
at ClientDuplexStream._write (/Users/nathan/google-assistant/node_modules/grpc/src/node/src/client.js:158:20)
at doWrite (_stream_writable.js:385:12)
message: 'Failure: Type not convertible to Uint8Array.',
reportErrorToServer: true,
messagePattern: 'Failure: Type not convertible to Uint8Array.' }
Conversation Error: { Error: Serialization failure
at ClientDuplexStream._emitStatusIfDone (/Users/nathan/google-assistant/node_modules/grpc/src/node/src/client.js:270:19)
at ClientDuplexStream._receiveStatus (/Users/nathan/google-assistant/node_modules/grpc/src/node/src/client.js:248:8)
at /Users/nathan/google-assistant/node_modules/grpc/src/node/src/client.js:772:12 code: 13, metadata: Metadata { _internal_repr: {} } }
Hi,
Sending commands to Home Control devices is working fine, I can send a query "turn X on" and the device will turn on.
But when sending the query "is X on?", I'm getting a blank response from the assistant.
The same query sent via google home or the google assistant on the phone is working properly.
What can be the reason for this?
Thanks,
I'm trying to run the console-input example since I'm only interested in text-based communications. I set everything up as instructed, downloaded the OAuth JSON file, and ran it. I get this:
Type your request: Who won the Superbowl last year?
Fetching TTS file...
Transcription: who won the Superbowl last year
Assistant Speaking
Conversation Complete
Assistant Finished Speaking
So I hear a lady's voice saying "Give me permission to help you". What am I missing?
Thanks,
Alvaro
I have a working text only interface:
'use strict';
var express = require('express');
var router = express.Router();
/* GET home page. */
router.get('/', function(req, res, next) {
  res.render('index', { title: '_gideon+hosted' });
});
const https = require('https');
const lame = require('lame');
const googleTTS = require('google-tts-api');
// const Speaker = require('speaker');
const GoogleAssistant = require('google-assistant');
const config = {
  auth: {
    keyFilePath: 'client_secret.json',
    savedTokensPath: 'tokens.js', // where you want the tokens to be saved
  },
  audio: {
    encodingIn: 'LINEAR16', // supported are LINEAR16 / FLAC (defaults to LINEAR16)
    sampleRateOut: 24000, // supported are 16000 / 24000 (defaults to 24000)
  },
};
const startConversation = (conversation, ttsResponse, type, res, encoded) => {
  var encoder = new lame.Encoder({
    // INPUT
    channels: 2,
    bitDepth: 16,
    sampleRate: 24000,
    // OUTPUT
    bitRate: 128,
    outSampleRate: 24000,
    mode: lame.STEREO
  });

  // setup the conversation
  conversation
    // send the audio buffer to the speaker
    .on('audio-data', (data) => {
      const now = new Date().getTime();
      encoder.write(data);
      /*
      speaker.write(data);
      // kill the speaker after enough data has been sent to it and then let it flush out
      spokenResponseLength += data.length;
      const audioTime = spokenResponseLength / (config.audio.sampleRateOut * 16 / 8) * 1000;
      clearTimeout(speakerTimer);
      speakerTimer = setTimeout(() => {
        speaker.end();
      }, audioTime - Math.max(0, now - speakerOpenTime));
      */
      console.log('Data: ' + data);
    })
    // done speaking (since we aren't speaking, let's just console log)
    .on('end-of-utterance', () => {
      console.log('TTS playback complete');
    })
    // just to spit out to the console what was said
    .on('transcription', text => {
      console.log('Transcription:', text);
      if (type == 1) {
        // Send the transcription:
        console.log("Got Transcription");
        res.send(text);
      }
    })
    // once the conversation is ended, see if we need to follow up
    .on('ended', (error, continueConversation) => {
      if (error) {
        console.log('Conversation Ended Error:', error);
      } else {
        console.log('Conversation Complete');
        conversation.end();
        if (type == 0) {
          // Transcribe the message
          console.log("Starting transcription.");
          assistant.start((converse) => {
            startConversation(converse, null, 1, res, encoder);
            console.log("Transcription started");
            console.log("Using data: " + encoder);
          });
        } else {
          // We got the transcription!
          console.log("DONE");
        }
      }
    })
    // catch any errors
    .on('error', (error) => {
      console.log('Conversation Error:', error);
    });

  // decode the mp3 and send it off
  if (type == 0) {
    // TTS the text request
    const decoder = new lame.Decoder();
    ttsResponse.pipe(decoder).on('format', (format) => {
      decoder.pipe(conversation);
    });
  } else if (type == 1) {
    // We're sending the response from the Google Assistant
    console.log("Sending data to Google Assistant");
    const decoder = new lame.Decoder();
    encoded.pipe(decoder).on('format', (format) => {
      decoder.pipe(conversation);
    })
  }

  /* setup the speaker
  const speaker = new Speaker({
    channels: 1,
    sampleRate: config.audio.sampleRateOut,
  });
  speaker
    .on('open', () => {
      console.log('Assistant Speaking');
      speakerOpenTime = new Date().getTime();
    })
    .on('close', () => {
      console.log('Assistant Finished Speaking');
      conversation.end();
    });
  */

  // Turn the audio into text
};
const assistant = new GoogleAssistant(config);
assistant
.on('ready', () => {
// Ready for requests
})
// .on('started', promptForInput)
.on('error', (error) => {
console.log('Assistant Error:', error);
});
router.get('/ask', function(req, res, next) {
var request = req.query.query;
googleTTS(request)
.then((url) => {
console.log('Grabbing TTS file...');
// go snag the file
https.get(url, (response) => {
if (!response || response.statusCode !== 200) {
console.error('Failed to download TTS file from', url, response.statusMessage);
return;
}
// start the conversation
assistant.start((conversation) => {
startConversation(conversation, response, 0, res);
});
});
})
.catch(error => console.error);
});
module.exports = router;
It's not perfect, but it works if you want to try and adapt it into an example.
Error getting tokens: { Error: getaddrinfo ENOTFOUND accounts.google.com accounts.google.com:443
at errnoException (dns.js:28:10)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:73:26)
code: 'ENOTFOUND',
errno: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'accounts.google.com',
host: 'accounts.google.com',
port: 443 }
There was a recent update to grpc to address a lodash vulnerability, as seen in this commit:
grpc/grpc-node@e97264c
I don't quite understand how the version numbers work when there are what appear to be multiple packages inside one package, as in grpc. I'm not sure what to do or recommend in order to update google-assistant to use the new version of grpc. Any help would be appreciated.
In some cases Google will not respond to your command at all
(for example, when you say 'hey google, volume 5').
In this scenario no voice data is received, the speaker never gets opened, and nothing fires conversation.end().
For such scenarios the 'ended' event should probably fire automatically. ('No voice response' should be detected by the lib, as there seems to be no way of detecting it on my end.)
Environment:
Node 8.8.0
Hey, I have the following config:
const config = {
  auth: {
    keyFilePath: 'keys.json',
    savedTokensPath: 'tokens.js', // where you want the tokens to be saved
  },
  audio: {
    encodingIn: 'LINEAR16', // supported are LINEAR16 / FLAC (defaults to LINEAR16)
    sampleRateOut: 24000, // supported are 16000 / 24000 (defaults to 24000)
  },
};
But when spinning up the GoogleAssistant it spits out:
Error: Cannot find module 'keys.json'
It's as if the file is being loaded like a module. In case it was being require()'d, I also tried ./keys.json and the same thing happened. Either I'm being stupid or something has changed in Node 8.
[running mic-speaker.js]
Say something!
Conversation Error: { Error: Invalid 'audio_in': too long.
at ClientDuplexStream._emitStatusIfDone (/.../node_modules/grpc/src/node/src/client.js:270:19)
at ClientDuplexStream._receiveStatus (/.../node_modules/grpc/src/node/src/client.js:248:8)
at /.../node_modules/grpc/src/node/src/client.js:772:12 code: 3, metadata: Metadata { _internal_repr: {} } }
^C
After auth I get this error. What's the problem?
.on('volume-percent', percent => console.log('New Volume Percent:', percent))
// the device needs to complete an action
.on('device-action', (action) => {
console.log('Device Action:', action)
})
After volume-percent or device-action, the Assistant is halted. I think I have to complete the conversation, but I don't know how. What should I do?
Now that v1alpha1 is deprecated. Will support for v1alpha2 be added?
https://developers.google.com/assistant/sdk/reference/rpc/google.assistant.embedded.v1alpha2
I did have a look at it, but I have no idea how it's implemented.
Paste your code: (node:7847) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Exited with code 3
(node:7847) DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
I can't figure out if this is an issue related to this lib or if it's on Google's side.
I've been using this to send text commands to the service by sending 'broadcast hello world'
This recently stopped working and it seems to coincide with the SDK update.
Is anyone else having this issue?
Hi,
Everything is working fine. :) but I want to stop audio stream from google sometimes.
For example:
1- I speak to assistant: Tell me something about guns.
2- Assistant sends me back the transcription "Tell me something about guns." along with an audio response (I don't want the audio response)
3- I want my program to say: Are you sure you want to know about guns? (I can do it by using transcription, no problem)
4- Then I will perform next task I want to do.
On the 2nd step, I just don't want my assistant to continue; I want it to stop forcefully!
I can remove the audio-data event, but the conversation doesn't end properly there. How can I achieve that?
Thanks in advance!
I'm able to ask what the weather is as well as broadcast messages on my LAN, but custom commands will not work when requesting to play music (i.e. nothing happens). Please see below. When will this be fixed?
Thanks
James
( More on weather.com )
Received command broadcast Looks kind of cloudy
No user specified, using assistantgateway-assistantgateway-s0vh5u
Received command play jazz on office home
No user specified, using assistantgateway-assistantgateway-s0vh5u
Is there any way to get the response of the assistant as text / string?
As a workaround I could use a speech-to-text API, but I'd rather not.
The 0.0.2 release of the SDK does support alarms and timers: https://developers.google.com/assistant/sdk/release-notes
Can this library support them? With the mic-and-speaker example I can set them, but they obviously never go off because the program ends.