ritwik12 / virtual-assistant
A Linux-based virtual assistant using artificial intelligence, written in C
License: GNU General Public License v3.0
A reminder feature which can help us schedule our work, remember special days, and stay on time.
It would be a great feature if the virtual assistant could open Gmail or read Gmail notifications.
There are certain cases where we can work on Reducing the time complexity of code.
One such instance is:
Line 39 in 2712a89
I wanted to look into this project and see if I could use it as a base to build on, but I have been lost trying to compile the code. I have attached a doc of my step-by-step run through the start-up guide, with screenshots of the output I got. Any help would be appreciated.
starting up virtual assistant errors and problems.pdf
Your idea seems interesting, but after a quick look at the repository I noticed some alarming problems with the style and overall structure of your code. To begin with, your project consists of just ONE file and, even worse, only ONE function that is somewhere around 400 lines long. There is a serious need to split sections of your main function into multiple functions to make maintenance and reading easier. I would also suggest adopting a folder structure and splitting your code into multiple files.
I also couldn't shake the feeling that some parts of your code are duplicated and, moreover, probably unnecessary, as functions performing those tasks might already be provided by the standard library.
If you find my statements useful or need some explanations, feel free to email me.
After I invoke make, this is printed out:
gcc -c -o build/init_config.o src/init_config.c -std=gnu11 -Isrc -Iutils `pkg-config --cflags libcurl`
gcc -o build/virtual_assistant build/main.o build/init_config.o -std=gnu11 -Isrc -Iutils `pkg-config --cflags libcurl` -lssl -lcrypto -ljson-c `pkg-config --libs libcurl`
/usr/bin/ld: build/init_config.o:(.bss+0x0): multiple definition of `str'; build/main.o:(.bss+0x11bc0): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0x3e8): multiple definition of `start'; build/main.o:(.bss+0x11fa8): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0x3f0): multiple definition of `pv'; build/main.o:(.bss+0x11fb0): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0x400): multiple definition of `location'; build/main.o:(.bss+0x11fc0): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0x800): multiple definition of `youtube'; build/main.o:(.bss+0x123c0): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0xc00): multiple definition of `songs'; build/main.o:(.bss+0x127c0): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0x1000): multiple definition of `cal'; build/main.o:(.bss+0x12bc0): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0x1080): multiple definition of `search'; build/main.o:(.bss+0x12c40): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0x1100): multiple definition of `HOMEDIR'; build/main.o:(.bss+0x12cc0): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0x1500): multiple definition of `WebBrowser'; build/main.o:(.bss+0x130c0): first defined here
/usr/bin/ld: build/init_config.o:(.bss+0x1900): multiple definition of `MediaPlayer'; build/main.o:(.bss+0x134c0): first defined here
collect2: error: ld returned 1 exit status
make: *** [Makefile:18: build/virtual_assistant] Error 1
Get to know the weather forecast for any location.
I would like to contribute basic math capabilities to the assistant (and update the README, of course). What do you think about that?
Hello, I am Santhosh and I am new to open source. This virtual assistant seems interesting to me. It would be highly appreciated if you could give me some ideas to start and work with, so that I may be able to contribute :) Thanks
I keep getting this error when I try to run the command say "hello"
ALSA lib pcm_dmix.c:1108:(snd_pcm_open) unable to open slave
audio_open_alsa: failed to open audio device default. Device or resource busy.
I have no other application using an audio device. Ubuntu 19.04.
Hello all,
Is there any special reason why the code is organised by including the .c files? To my knowledge, putting a function's implementation in a .c file and its prototype in a .h file creates loose coupling among modules. I guess there is a reason why the code in this project is structured this way; I just want to know your views.
Regards,
PP
Right now we only have direct comparisons for the calendar feature, like "open calendar" and "calendar".
An enum could be created of all the categories, and the arrays and other functionality code could be rewritten around it. In future, if more categories are added, the functionality would be handled automatically.
I got started with open source through this project in the spirit of taking part in Hacktoberfest. It seems my PR didn't get counted, as it shows this repo as ineligible :\ Any idea why that happened?
Thanks!
The strlen(example) function is called in each iteration of the loop to determine the length of the string example. However, recalculating the length on every iteration is inefficient, especially since the length remains constant throughout the loop.
for (int iter_char = 0; iter_char < strlen(example); iter_char++) {

can be rewritten as:

int len = strlen(example);
for (int iter_char = 0; iter_char < len; iter_char++) {
    example[iter_char] = tolower(example[iter_char]);
    if (example[iter_char] == ' ') {
        if (example[iter_char + 1] != ' ') {
            split[word][character] = '\0';
            character = 0;
            word++;
        }
        continue;
    } else {
        split[word][character++] = example[iter_char];
    }
}
By calculating the length of the string outside the loop and storing it in the variable len, you avoid the overhead of recomputing the length in each iteration. This can lead to a significant improvement in performance, especially for long strings.
In requests.c the class arrays are initialized with a maximum word length of 10, but for restaurant_class the word "restaurants" has length 11, or 12 if the '\0' is included (code below):
char *restaurant_class[10][10] = { {"Please", "find", "some", "restaurants"}, {"Find", " ", "some", "restaurants"}, {"Show", " ", " ", "restaurants"}, {"Find", "places", "to", "eat"} };
Every feature of the Virtual Assistant has its own unique implementation: Restaurant, Weather, Google search, Email, and Media are each implemented differently.
To understand them better, it would be good to have docs on how each of these works.
Attempting to install the transitional packages libjson0/libjson0-dev ($ sudo apt-get install libjson0 libjson0-dev) during setup on Ubuntu 17.10 results in the following issue:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package libjson0
E: Unable to locate package libjson0-dev
Replacing libjson0-dev with libjson-c-dev (see https://packages.debian.org/source/jessie/json-c) in the following manner seems to have solved this problem for me while installing Virtual-Assistant:
sudo apt-get install libjson-c-dev
Please note that this also requires updating the compile statement to:
`gcc main.c $(pkg-config --libs --cflags libcurl) -ljson-c -std=gnu11`
And in main.c, #include <json/json.h>
must be replaced with:
#include <json-c/json_object.h>
#include <json-c/json_tokener.h>
I would be happy to commit the patch if the developers consider it useful.
On my laptop the "say" command works as intended even though I have two sound outputs (normal and HDMI audio). Because I also record and produce music, on my desktop PC I have a normal sound card installed in the tower (which I don't use) and an external one connected over USB. I can switch which device I want to use in Ubuntu via the PulseAudio selection, or even ALSA, but it doesn't change which sound card the "say" command uses for output. Is there a way to define that somewhere? If yes, maybe we could even add that as a feature to Virtual-Assistant. Help, anyone?
What is the author's stance on using RAG to make the virtual assistant "smarter" by adding an option to query LLMs, either through APIs or through local models via tools like Ollama?
If yes, I'd like to work on this issue.
Right now this virtual assistant compares the strings entered by the user directly, but that is not a good approach: there are thousands of possible phrasings and we can't compare against all of them.
So we need to make our virtual assistant intelligent enough to understand what a user wants it to do.
NLP (Natural Language Processing) is one solution.
To make the NLP better, we would need to update the corpus from time to time with new data.
make install will install the binary and other dependencies into /usr/bin, making virtual_assistant runnable from anywhere; make uninstall will clean up the changes.
I find it a bit hard to know how to compile and run this program, as some things have to be added to the command while compiling, like -std=c99 -lcurl -ljson. I am compiling with "gcc main.c -o a -std=c99 -lcurl -ljson". Is this the correct way, or is there an easier way I am missing?
After #53, when trying to say "find places to eat", it executes the email functionality, which it shouldn't.
Thanks for your contribution to open source! Some code-safety improvements are possible, for example:
- scanf return value not taken into account. Possibly here (for example) and in other places. The valgrind tool can help mitigate this kind of safety flaw.
- system calls without user-input validation. Possibly here (for example) and in other places. Please see how it may be abused here.

Google Maps integration to find a place or even a restaurant and show the result.
The virtual assistant is only text-to-speech at the moment, as speech-to-text is hard to implement in C.
But if we had speech-to-text too, we would be able to use it with voice commands as well, and that would be a huge improvement.
Don't we want to save the arrays for the AI into files or a database?
There is one minor typo in the README that I will fix.
If the virtual assistant have no idea about the user’s sentence, it will simply search that sentence on Google.
The sentence is awkward and could be adjusted
Just like we are getting the weather forecast, we can even get the list of restaurants near us.
When nothing is provided as input, Firefox is still invoked and opens Google.
In this feature, the virtual assistant should be able to answer almost all the questions available on the internet. The basic approach is to fetch the answer (knowledge) from the internet (such as Google or Wikipedia) and pass it back as the answer.
The approach I know is to get the data from Wikipedia using the Wikipedia API, receive it in JSON format, and convert it to a string for printing on the terminal.
For further reading, please see this link:
https://stackoverflow.com/questions/8555320/is-there-a-clean-wikipedia-api-just-for-retrieve-content-summary
The JSON data will look like these links -
https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=google
https://en.wikipedia.org/w/api.php?action=opensearch&search=facebook
The data can be fetched using the above links; for converting it from JSON to a string, please refer to our Restaurant feature.
I suggest moving the #include directives out of function bodies, given that they are text replacements performed by the preprocessor, not something handled at runtime. In the process, init_config.c could be reorganized with appropriate sanity checks and functions, as using an include directive in place of a function is unusual code style.
Adding section on the working and also output screenshots for #53.
@GaelleMarais can you please help here?
Right now we are using the GNUstep speech engine for text to speech.
But it is not that good, and we could use something better like eSpeak or Festival.
Suggestions are welcome.