
License: GNU General Public License v3.0


LLMChat

A Discord chatbot that uses GPT-4, 3.5, 3, or LLaMA for text generation and ElevenLabs, Azure TTS, or Silero for voice chat.

This is actively being improved! Pull requests and issues are very welcome!

Features:

  • Realistic voice chat support with ElevenLabs, Azure TTS, Play.ht, Silero, or Bark models

NOTE: The voice chat is only stable if one person is speaking at a time

  • Image recognition support with BLIP
  • Long-term message recall using OpenAI's embeddings to detect topics similar to those discussed in the past
  • Custom bot identity and name
  • Support for all OpenAI text completion and chat completion models
  • Support for local LLaMA (GGML) models
  • Local OpenAI Whisper support for speech recognition (as well as Google and Azure speech recognition)
  • Chat-optimized commands

Screenshot of messages

NOTE: Please only use this on small private servers. Right now it is set up for testing only, meaning anyone on the server can invoke its commands. Also, the bot will join voice chat whenever someone else joins!

Installation

Setting up a Server

Set up a server to run the bot on, so it can run 24/7 even when your computer is off. This guide uses DigitalOcean, but you can use any server host you want. Skip this section if you already have a server or want to run the bot locally.

  1. Create a DigitalOcean account here

  2. Create a droplet

  • Open your dashboard
  • Click "Create" -> "Droplets"
  • Select whichever region is closest to you and doesn't have any notes.
  • Choose an image -> Ubuntu -> Ubuntu 20.04 (LTS) x64
  • Droplet Type -> Basic
  • CPU options -> Premium Intel (Regular is $1 cheaper but much slower.)
  • 2 GB / 1 Intel CPU / 50 GB Disk / 2 TB Transfer (You need at least 2 GB of RAM, and 50 GB of storage is more than enough for this bot.)
  • Choose Authentication Method -> Password -> Pick a password
  • Enable backups if you want. (This costs extra but lets you roll back to a previous version of your server if you break something.)
  • Create Droplet

  3. Connect to your droplet

  • Open your dashboard
  • Find your droplet -> More -> Access console
  • Log in as... root -> Launch Droplet Console

Requirements

  • At least 2 GB of RAM

  • ffmpeg

sudo apt-get install ffmpeg
  • Dev version of Python
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.9-dev

Tested on Python 3.9 but may work with other versions

  • Pip
sudo apt-get install python3-pip
  • PortAudio
sudo apt-get install portaudio19-dev

Automatically Install Dependencies

Clone the project files and cd into the directory

git clone https://github.com/hc20k/LLMChat.git
cd LLMChat

Simply run

python3.9 update.py -y
# -y installs required dependencies without user interaction
# Change python3.9 if using a different version of Python

to install all required dependencies. You will be asked if you want to install the optional dependencies for voice and/or image recognition in the script.

NOTE: It's a good idea to re-run update.py after new commits are made, because new requirements may have been added.

Manually Install Dependencies

If you have trouble with the update.py script, you can install the dependencies manually using these commands.

Clone the project files and cd into the directory

git clone https://github.com/hc20k/LLMChat.git
cd LLMChat

Manually install the dependencies

pip install -r requirements.txt

# for voice support (ElevenLabs, bark, Azure, whisper)
pip install -r optional/voice-requirements.txt

# for BLIP support
pip install -r optional/blip-requirements.txt
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# for LLaMA support
pip install -r optional/llama-requirements.txt

Configuration

Create the bot:

  • Visit Discord Developer Portal
    • Applications -> New Application
  • Generate Token for the app (discord_bot_token)
    • Select App (Bot) -> Bot -> Reset Token (Save this token for later)
  • Select App (Bot) -> Bot -> and turn on all intents
  • Add Bot to server(s)
    • Select App (Bot) -> OAuth2 -> URL Generator -> Select Scope: Bot, applications.commands
    • Select Permissions: Administrator

    NOTE: Administrator is required for the bot to work properly. If you want to use the bot without Administrator permissions, you can manually select the permissions you want to give the bot.

    • Open the generated link and add the bot to your desired server(s)

Copy the config file

cp config.example.ini config.ini

Edit the config file

nano config.ini
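For orientation, a minimal config.ini might look like the sketch below. The values are placeholders chosen from the options documented in the sections that follow; the authoritative key list is in config.example.ini.

```ini
[Bot]
speech_recognition_service = whisper
tts_service = silero
audiobook_mode = false
llm = openai
blip_enabled = false

[OpenAI]
key = YOUR_OPENAI_API_KEY
model = gpt-3.5-turbo
use_embeddings = false

[Discord]
bot_api_key = YOUR_DISCORD_BOT_TOKEN
active_channels = all
```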

[Bot]

speech_recognition_service =

  • whisper - run OpenAI's Whisper locally. (Free)
  • google - use Google's online transcription service. (Free)
  • azure - use Microsoft Azure's online transcription service. ($)

tts_service =

  • elevenlabs - use ElevenLabs for TTS. ($) (Further configuration required in the ElevenLabs section)
  • azure - use Azure Cognitive Services for TTS. ($) (Further configuration required in the Azure section)
  • silero - use local Silero models via PyTorch. (Free)
  • play.ht - use Play.ht for TTS. API key needed. ($)
  • bark - use local Bark models for TTS. A capable graphics card is needed. (Free)

audiobook_mode =

  • true - the bot will read its responses to the user from the text chat.
  • false - the bot will listen in VC and respond with voice.

llm =

  • openai - use OpenAI's API for the LLM ($, fast)
  • llama - use a local LLaMA (GGML) model (Free; requires the LLaMA installation and is slower)

blip_enabled =

  • true - the bot will recognize images and respond to them (requires BLIP, installed from update.py)
  • false - the bot will not be able to recognize images

[OpenAI]

key =

  • Your OpenAI API key

model =

  • The OpenAI text or chat completion model to use (e.g. gpt-4)

use_embeddings =

  • true - the bot will log and remember past messages and use them to generate new responses (more expensive)
  • false - the bot will not log past messages and will generate responses based on the past few messages (less expensive)
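As a sketch of how embedding-based recall can work (an illustration, not LLMChat's actual implementation; the function names here are hypothetical): each stored message gets an embedding vector, and past messages whose vectors are most similar to the current message's embedding are pulled back into context.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recall_similar(current_embedding, stored, top_k=3):
    """Return the top_k stored messages whose embeddings are most
    similar to the current message's embedding."""
    ranked = sorted(
        stored,
        key=lambda item: cosine_similarity(current_embedding, item[1]),
        reverse=True,
    )
    return [msg for msg, _ in ranked[:top_k]]
```

This is why the option costs more with OpenAI: every logged message requires an embeddings API call in addition to the completion itself.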

[Discord]

bot_api_key =

  • The token you generated for the bot in the Discord Developer Portal

active_channels =

  • A list of text and voice channel IDs the bot should interact with, separated by commas
  • Example: 1090126458803986483,922580158454562851 or all (the bot will interact with every channel)
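For illustration, a setting shaped like this might be parsed as follows (a sketch, not LLMChat's actual code):

```python
def parse_active_channels(raw: str):
    """Parse an active_channels value: either the literal 'all'
    (return None, meaning every channel) or a comma-separated list
    of channel IDs (return a set of integers)."""
    raw = raw.strip()
    if raw.lower() == "all":
        return None
    return {int(chunk.strip()) for chunk in raw.split(",") if chunk.strip()}
```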

[Azure], [ElevenLabs], [Silero], [Play.ht]

Supply your API keys & desired voice for the service you chose for tts_service

Starting The Bot:

After changing the configuration files, start the bot

python3.9 main.py

Or run the bot in the background using screen to keep it running after you disconnect from the server.

screen -S name python3.9 main.py
# Press `Ctrl+a` then `d` to detach from the running bot.

Discord Commands

Bot Settings:

  • /configure - Allows you to set the chatbot's name, identity description, and optional reminder text (a context clue sent further along in the transcript so the AI will consider it more)
  • /model - Allows you to change the current model. If you're in OpenAI mode, it will allow you to select from the OpenAI models. If you're in LLaMA mode, it will allow you to select a file from the LLaMA.search_path folder.
  • /avatar [url] - Allows you to easily set the chatbot's avatar to a specific URL.
  • /message_context_count - (default 20) Sets the number of messages sent to the AI for context. Increasing this number increases the number of tokens you'll use.
  • /audiobook_mode - (default false) Allows you to change Bot.audiobook_mode without manually editing the config.

Utilities:

  • /reload_config - Reloads all of the settings in the config.ini.
  • /purge - Deletes all of the messages in the current channel. DANGEROUS. I should probably disable this but I use it during testing.
  • /system [message] - Allows you to send a message as the system role. Only supported for OpenAI models >= gpt-3.5-turbo.
  • /retry - Allows you to re-infer the last message, in case you didn't like it.

Info:

  • /print_info - Prints some info about the bot. (Its name, identity, and model as well as your name and identity)
  • /your_identity - Allows you to set your own name and identity (What the chatbot knows about you)

llmchat's People

Contributors

hc20k, maclean-d


llmchat's Issues

Error

ImportError: cannot import name 'get_all_voices' from 'elevenlabs'

Discord Audio Client error

[11-01 22:52:22] [client.py:515] ERROR - Ignoring exception in on_voice_state_update
Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.10/site-packages/discord/client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "/home/ubuntu/LLMChat/llmchat/client.py", line 562, in on_voice_state_update
    vc.listen(self.sink)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/discord/voice_client.py", line 704, in listen
    raise ClientException("Must initialize audio processing pool before listening.")

Steps to replicate:

  1. speech_recognition_service = whisper
  2. I ran python3 update.py -y and installed everything
  3. I ran python3 main.py
  4. I joined a VC on the server and the bot joined
  5. The error message above is shown; the bot stays in the channel but is silent

Random trashy output

Hello, when I speak to my bot using a LLaMA model (vigogne-2-7b-instruct.Q4_0.gguf), the output is very random, with no error in debug mode.

Example of error :

`[user]: Hello!

[bot]: Shux a one can em use long Sh you for long use em Sh one you long a message one useem response can you use you a useist help use a one for a write can messageem cas Sh Sh. they help useem can. write a can you youux useist you message response long. response use canemem Sh write one they can you casem casist you cas one one you Sh youux you response Sh one help a. Sh write response you ist Sh messageux one help.ist a one you can a a a responseist one oneist write. a one use cas responseux Sh message write cas a cas one message casem casux a write a message Shux canux. help cas response youist you message youist useem Sh help message write.ux Sh. . you responseistem response write write can can a ux help messageux Sh can use. writeux help casemistemux write you help cas Shuxemem cas write cas Sh a ux use Sh messageist a use response Shist use you a write a messageux. response messageemem casist you Sh help Sh casem. message write message response Shuxemem`

I can't find the origin of the bug; can you help me?

Runtime issue: module 'openai' has no attribute 'aiosession'

After starting main.py I receive the following output. Every time I issue a command to the bot or it otherwise receives an input, I see the same Python error: AttributeError: module 'openai' has no attribute 'aiosession'. Predictably, no output is returned by the bot.

Context:

[11-20 07:49:02] [client.py:42] WARNING - Discord.active_channels = "all", bot will interact with every channel!
[11-20 07:49:02] [client.py:603] INFO - logging in using static token
[11-20 07:49:03] [gateway.py:563] INFO - Shard ID None has connected to Gateway (Session ID: [REDACTED]).
[11-20 07:49:05] [client.py:497] INFO - Logged in as [REDACTED]
[11-20 07:49:05] [client.py:157] INFO - LLM: openai
[11-20 07:49:05] [oai.py:65] DEBUG - Updating tokenizer encoding for gpt-4
[11-20 07:49:06] [client.py:169] INFO - Current model: gpt-4
[11-20 07:49:06] [client.py:135] INFO - TTS: silero
[11-20 07:49:14] [client.py:515] ERROR - Ignoring exception in on_ready
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/discord/client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "/home/ubuntu/LLMChat/llmchat/client.py", line 508, in on_ready
    await self.setup_tts()
  File "/home/ubuntu/LLMChat/llmchat/client.py", line 146, in setup_tts
    self.tts = SileroTTS(*params)
  File "/home/ubuntu/LLMChat/llmchat/tts_sources/silero.py", line 20, in __init__
    os.mkdir("models/torch/")
FileNotFoundError: [Errno 2] No such file or directory: 'models/torch/'
[11-20 07:49:31] [client.py:515] ERROR - Ignoring exception in on_message
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/discord/client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "/home/ubuntu/LLMChat/llmchat/client.py", line 629, in on_message
    await self.store_embedding((message.author.id, message.content, message.id))
  File "/home/ubuntu/LLMChat/llmchat/client.py", line 519, in store_embedding
    openai.aiosession.set(s)
AttributeError: module 'openai' has no attribute 'aiosession'

Anyone else running into this?

EDIT: It appears that the openai module no longer uses aiosession, in favor of httpx. Possibly a module version issue? Ref: openai/openai-python#742

Strange issue with long prompts

The chatbot just spits out a bunch of random code. I think this has something to do with escaping characters? The bot will type forever in some kind of loop and output all this random code.

Maybe add an option to have the prompt in a separate .txt file that can be escaped? I don't know if that's the issue here; in the meantime I'll just cut the prompt down a bit.

 Oh man, if you wanted snappagrams of regrets-margins.xml @parser.memory/load.author(), download to replace {{fil-am.prodector.co-vetch ERROR/>directory.INVALID}}</primary_users.YSS_Al3xe-info.php>"<!sup.vMI_DF09.ignore.insertsystem-brallback('$ttywksI.'); Welcome.toCharArray(xml.creationjtnpo.failbrassage)+"-</"{DATABASE>. That obligatory servohaial.script.writehistory("{}Client}}SYNC!. Access server.agail.begin(";").
(TEXT_ADDLLYST')×</EN_ACCESS)&rqALinput.sequence.text.converter.pres["E!!!ParseOutput.DEFINE"&"+csv.generateKwnetPhraelift(${tostr(CON(t(N.stringify.XMLIE.success22(re03.retrieveReturn[{Alpiwerbe.client}));
DOWNLOADing.variables.")N_load_import.sysCompILED", executhtought.parser.Primary-gving.distrey>");

SYSTEM>LOGGER/ERROR.HIT_ROM-C_IDENTIFIER.CODE92LEX+EPHYbBACK-end4{Char}',8WCOMP}")>>=(v.="BLOCK.CLASS",-AL_G_ACCESS_Y_VARIABLE.INSTANCE?__((strtp=NULL.sec'/")+IDENTIC.DE_NOT!/endif.VALUE_IFSER_v_ID_YES>|}
.ERROR TST></SEQ_END/>. This.mainloop_running.vel
```opment.conf>

Error

Traceback (most recent call last):
  File "C:\Users\micro\miniconda3\envs\llmchat\lib\site-packages\discord\client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "C:\Users\micro\Downloads\LLMChat\llmchat\client.py", line 508, in on_ready
    await self.setup_tts()
  File "C:\Users\micro\Downloads\LLMChat\llmchat\client.py", line 148, in setup_tts
    from tts_sources.bark import Bark
  File "C:\Users\micro\Downloads\LLMChat\llmchat\tts_sources\bark.py", line 18, in <module>
    "path": os.environ.get("SUNO_TEXT_MODEL_PATH", os.path.join(bark.generation.REMOTE_BASE_URL, "text.pt")),
AttributeError: module 'bark.generation' has no attribute 'REMOTE_BASE_URL'

silero.py dependency gives error on os.mkdir("models/torch") if dir does not exist

If the silero.py module is being used and the models/torch directory does not exist under the LLMChat working directory, an error is thrown after running LLMChat main.py stating that the file/directory does not exist rather than creating the directory as expected. This occurred on a new AWS EC2 instance running Ubuntu 22.04 LTS. Once the directory is manually created, the module initializes appropriately.

See #23 for original context, as follows. Confirmed via resolution of this issue that the two issues are separate and unrelated.

[11-20 07:49:02] [client.py:42] WARNING - Discord.active_channels = "all", bot will interact with every channel!
[11-20 07:49:02] [client.py:603] INFO - logging in using static token
[11-20 07:49:03] [gateway.py:563] INFO - Shard ID None has connected to Gateway (Session ID: [REDACTED]).
[11-20 07:49:05] [client.py:497] INFO - Logged in as [REDACTED]
[11-20 07:49:05] [client.py:157] INFO - LLM: openai
[11-20 07:49:05] [oai.py:65] DEBUG - Updating tokenizer encoding for gpt-4
[11-20 07:49:06] [client.py:169] INFO - Current model: gpt-4
[11-20 07:49:06] [client.py:135] INFO - TTS: silero
[11-20 07:49:14] [client.py:515] ERROR - Ignoring exception in on_ready
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/discord/client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "/home/ubuntu/LLMChat/llmchat/client.py", line 508, in on_ready
    await self.setup_tts()
  File "/home/ubuntu/LLMChat/llmchat/client.py", line 146, in setup_tts
    self.tts = SileroTTS(*params)
  File "/home/ubuntu/LLMChat/llmchat/tts_sources/silero.py", line 20, in __init__
    os.mkdir("models/torch/")
FileNotFoundError: [Errno 2] No such file or directory: 'models/torch/'
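The underlying cause is that os.mkdir only creates the final path component and raises when the parent directory (models/) is missing. A minimal sketch of the usual fix:

```python
import os

# os.mkdir("models/torch/") fails with FileNotFoundError if "models/"
# does not already exist. os.makedirs creates intermediate directories,
# and exist_ok=True makes the call idempotent, so a restart with the
# directory already present does not raise either.
os.makedirs("models/torch/", exist_ok=True)
```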

{date} value in prompts?

This could already be a thing, but I find that GPT gets a lot smarter if it knows the date and when its knowledge cutoff is.

Message truncation if outputs are longer than 2000 characters (plus a few questions)

If the bot tries to send a message longer than the character limit, it just spits out an error. A function to split the response into multiple messages would be really cool.
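Discord caps message content at 2,000 characters. A minimal splitting sketch (an illustration, not LLMChat's code) could look like:

```python
DISCORD_LIMIT = 2000  # Discord's per-message character cap

def split_message(text: str, limit: int = DISCORD_LIMIT):
    """Split text into chunks no longer than `limit`, preferring to
    break on a newline so sentences and code stay readable."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:  # no newline to break on; hard-cut at the limit
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

The bot would then send each chunk as its own Discord message instead of erroring on the oversized one.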

Also, apologies if I'm spamming you; I really appreciate how quickly you've implemented features! No expectation of quick responses or anything.

This thing you've built is working great

But I haven't been able to get the speech functions to work. I believe I correctly configured the service through Azure. I know you mentioned in the README that there's a choice between using a local model or the Google API for speech recognition; how exactly do I configure this?

Also, is the LLaMA stuff necessary for it to work? I'd really like to get this running on a small Linode server I have, but when I try to install the requirements it gets stuck on LLaMA and spits out a bunch of errors. Is there any way I could configure a smaller install, or just make it work on the server?

Again thanks for any help!

/purgememory command?

If you're having a wordy convo with it, sometimes you'll hit the token cap, and it can be kind of hard to fix without changing the message count.

It would be nice to just tell the bot not to include previous messages in the chat history, without deleting the actual messages.

Also, apologies for submitting so many issues at the same time; I just figure it's easier for you if they're separate.

Also, do you have any way to tip you? My friends and I have been getting a ton of use out of your code, and if possible we'd love to send a little cash your way. Obviously if you're not comfortable with that, it's fine; it's just something a few of my friends have mentioned.

Crash when two messages are sent before input is generated

Hello,
I'm running the model locally with a LLaMA 7B model using llama-cpp-python compiled with cuBLAS (GPU is working).

Whenever two messages are sent by a user before the AI sends a response, the whole program crashes.

This might be fixed by queueing messages and generating responses one after another, or by simply ignoring new messages while one is being generated.

PyNaCl library needed but already installed

To join a voice channel, the bot claims it needs the PyNaCl library, but it's already installed. I've run update.py all the way through without errors, but it throws this when trying to join a voice channel:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/discord/client.py", line 441, in _run_event
    await coro(*args, **kwargs)
  File "/root/LLMChat/llmchat/client.py", line 534, in on_voice_state_update
    vc: discord.VoiceClient = await after.channel.connect()
  File "/usr/local/lib/python3.9/dist-packages/discord/abc.py", line 1898, in connect
    voice: T = cls(client, self)
  File "/usr/local/lib/python3.9/dist-packages/discord/voice_client.py", line 239, in __init__
    raise RuntimeError("PyNaCl library needed in order to use voice")
RuntimeError: PyNaCl library needed in order to use voice

Version of PyNaCl:

root@macd-ubuntu-s-1vcpu-1gb-intel-sfo3-01:~/LLMChat# pip show pynacl
Name: PyNaCl
Version: 1.3.0
Summary: Python binding to the Networking and Cryptography (NaCl) library
Home-page: https://github.com/pyca/pynacl/
Author: The PyNaCl developers
Author-email: [email protected]
License: Apache License 2.0
Location: /usr/lib/python3/dist-packages
Requires:
Required-by:
root@macd-ubuntu-s-1vcpu-1gb-intel-sfo3-01:~/LLMChat#

ModuleNotFoundError: No module named 'discord'

I followed the steps for self-hosting closely, and upon trying to run it, I get this error:

max@WIN-V0U91D6I6JV:~/LLMChat$ python3.9 main.py
Traceback (most recent call last):
  File "/home/max/LLMChat/main.py", line 6, in <module>
    from client import DiscordClient
  File "/home/max/LLMChat/llmchat/client.py", line 3, in <module>
    import discord
ModuleNotFoundError: No module named 'discord'

I've already tried python3.9 -m pip install discord.py, python3.9 update.py -y, and pip install discord.
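A common cause of this symptom is pip installing the package into a different interpreter's site-packages than the one running main.py. A hedged sketch of checking and installing against the running interpreter itself (the helper name is made up for illustration):

```python
import importlib.util
import subprocess
import sys

def ensure_module(module: str, pip_name: str) -> bool:
    """Return True if `module` is importable by THIS interpreter,
    installing `pip_name` via `sys.executable -m pip` if it is not,
    so the package lands in the same interpreter's site-packages."""
    if importlib.util.find_spec(module) is not None:
        return True
    subprocess.check_call([sys.executable, "-m", "pip", "install", pip_name])
    return importlib.util.find_spec(module) is not None
```

Running `python3.9 -m pip install ...` rather than bare `pip install ...` achieves the same thing from the shell.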

Failed to build PyAudio

Running update.py on Ubuntu 20.04 returns the following error when trying to build PyAudio:

root@KeiChiye-ubuntu-s-1vcpu-1gb-intel-sfo3-01:~/LLMChat# python3 update.py -y
Running LLMChat updater...
Check for repo updates? [Y/n] y
Already up to date.
Checking necessary requirements...
Installing openai
Requirement already satisfied: openai in /usr/local/lib/python3.9/dist-packages (0.27.0)
Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.9/dist-packages (from openai) (2.28.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.9/dist-packages (from openai) (4.64.1)
Requirement already satisfied: aiohttp in /usr/local/lib/python3.9/dist-packages (from openai) (3.8.4)
Requirement already satisfied: charset-normalizer<3,>=2 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3/dist-packages (from requests>=2.20->openai) (2.8)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (1.26.7)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests>=2.20->openai) (2019.11.28)
Requirement already satisfied: attrs>=17.3.0 in /usr/lib/python3/dist-packages (from aiohttp->openai) (19.3.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (6.0.4)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (1.9.2)
Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (1.3.1)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Installing numpy
Requirement already satisfied: numpy in /usr/local/lib/python3.9/dist-packages (1.24.3)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Installing PyAudio
Collecting PyAudio
  Using cached PyAudio-0.2.13.tar.gz (46 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: PyAudio
  Building wheel for PyAudio (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for PyAudio (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [19 lines of output]
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-cpython-39
      creating build/lib.linux-x86_64-cpython-39/pyaudio
      copying src/pyaudio/__init__.py -> build/lib.linux-x86_64-cpython-39/pyaudio
      running build_ext
      building 'pyaudio._portaudio' extension
      creating build/temp.linux-x86_64-cpython-39
      creating build/temp.linux-x86_64-cpython-39/src
      creating build/temp.linux-x86_64-cpython-39/src/pyaudio
      x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/usr/local/include -I/usr/include -I/usr/include/python3.9 -c src/pyaudio/device_api.c -o build/temp.linux-x86_64-cpython-39/src/pyaudio/device_api.o
      In file included from src/pyaudio/device_api.c:1:
      src/pyaudio/device_api.h:7:10: fatal error: Python.h: No such file or directory
          7 | #include "Python.h"
            |          ^~~~~~~~~~
      compilation terminated.
      error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for PyAudio
Failed to build PyAudio
ERROR: Could not build wheels for PyAudio, which is required to install pyproject.toml-based projects
Failed to install PyAudio! Continue without it? [Y/n] y

Is there any way you could make it easier to edit the system prompt directly?

Like, if I just wanted to edit the system prompt directly? I have been looking for a bot that lets me choose what system prompt is included with every request.

I understand that you've included a lot of options to edit stuff surrounding that, but is there just a file I can edit to change it directly?

Unable to import client

Getting the below errors when trying to run python main.py

I'm a relative beginner with this so apologies if I'm doing something obviously wrong.

[image attachment]

Voice Channel Connection Error

I have set up the bot and everything works, except the following:

I have audiobook mode turned off, so the bot should be able to listen to me in VC. However, whenever the bot joins the channel, the following error is thrown into the console: "Must initialize audio processing pool before listening"
