seratch / chatgpt-in-slack
Swift demonstration of how to build a Slack app that enables end-users to interact with a ChatGPT bot
License: MIT License
Now that GPT-4o is out, it would be amazing if the bot could understand not only text but also images, PDFs, audio, and video.
Hi @seratch
Facing a context window limitation error.
Would recommend wrapping the OpenAI base with something like reliableGPT to handle retries, model switching, etc.:
import openai
from reliablegpt import reliableGPT

openai.ChatCompletion.create = reliableGPT(openai.ChatCompletion.create, user_email=...)
Hello, I'm a user of your ChatGPT-in-Slack app, and I really appreciate your work. It's amazing to chat with ChatGPT in Slack.🥰
I have a suggestion for a new feature that I think would make the app more fun and flexible. I wonder if you could support customizing ChatGPT's personality with a JSON file, like this:
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the world series in 2020?"},
{"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
{"role": "user", "content": "Where was it played?"}
]
This file contains some predefined messages for the system, user, and assistant roles. The idea is to let the user choose a personality for ChatGPT, and let ChatGPT respond according to the messages in the file.
As a user, I want to chat with ChatGPT with different personalities, so that I can have more fun and variety in the conversation.
For example, I can create a JSON file with a humorous personality, like this:
messages=[
{"role": "system", "content": "You are a hilarious assistant."},
{"role": "user", "content": "Tell me a joke."},
{"role": "assistant", "content": "What do you call a fish wearing a bowtie? Sofishticated."},
{"role": "user", "content": "That's funny."}
]
Then, when I chat with ChatGPT, it will use this personality and make jokes.
I think this feature would make the app more customizable and entertaining. It would also allow users to create their own scenarios and stories with ChatGPT.
What do you think about this idea? Is it possible to implement it in the future? Thank you for your time and attention.🙏
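A minimal sketch of how such a feature might work. The `personality.json` file layout and both helper functions are hypothetical, not existing app code; the sketch only shows the idea of prepending predefined messages before the live user message:

```python
import json

def load_personality_messages(path: str) -> list:
    # Load predefined system/user/assistant messages from a JSON file.
    # Assumed shape: {"messages": [{"role": "...", "content": "..."}, ...]}
    with open(path, encoding="utf-8") as f:
        return json.load(f)["messages"]

def build_request_messages(personality_path: str, user_message: str) -> list:
    # Prepend the personality messages, then append the live user message,
    # producing the messages list to pass to the ChatCompletion API.
    messages = load_personality_messages(personality_path)
    messages.append({"role": "user", "content": user_message})
    return messages
```

The same helper could be pointed at different files to switch personalities per channel or per user.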
I met an issue using:
from openai import OpenAI
The version is the same; how can I change it?
The bot will not react/reply inside a thread where it is @-mentioned (scenario: users are discussing in a thread and then ping the bot). I can see in the log that it reacts to the call, but it is not responding... I've added the rights in the .yml file.
It seems that openai package 1.x introduced a bunch of breaking changes. Need to migrate to the latest version at some point: openai/openai-python#742
Hey,
first of all, thank you for your work! By far the best solution out there right now :)
Would it be possible to add a few logging options, such as logging the token cost? Since OpenAI itself does not allow accounting for individual API keys, it would be nice to have an overview of how many tokens the individual bot instance has used/requested. This might be a very niche request, but I'd appreciate it.
Thanks!
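For what it's worth, each ChatCompletion response already carries a "usage" object, so a per-instance counter could be sketched roughly like this (the accumulator class is hypothetical, not part of the app):

```python
class TokenUsageLogger:
    # Accumulate token usage reported by the OpenAI API for one bot instance.
    def __init__(self) -> None:
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, usage: dict) -> None:
        # `usage` is the "usage" object from a ChatCompletion response,
        # e.g. {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21}
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens
```

Multiplying the two counters by the model's per-token prices would then give a running cost estimate.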
When mentioning the bot in a channel, the app_mention event is correctly triggered and I get a response from the bot, but my subsequent messages in the resulting thread are ignored by the bot.
I'm running SLACK_APP_LOG_LEVEL=DEBUG, but sending messages in a thread does not even log anything.
My Slack APP allowed scopes are:
app_mentions:read
channels:history
channels:read
chat:write
chat:write.public
groups:history
groups:read
im:history
im:read
im:write
mpim:history
mpim:read
users:read
Socket Mode is activated
The live demo by you works fine; just my local instance seems to have this problem. Any clue what could be happening?
Sorry, it's probably not a bug report, just a question.
I use export OPENAI_MODEL=gpt-4
but the bot keeps saying:
I'm based on the GPT-3 model developed by OpenAI.
Is that ok?
UPD: I run it using Docker, with OPENAI_MODEL set in the .env file:
docker run --env-file ./.env -e OPENAI_MODEL="gpt-4" -it myfork/chatgpt-in-slack
I duplicated OPENAI_MODEL even in the command, to ensure it's not a .env file issue: the response is still the same.
I forked the repo about a week ago, so the code is fresh
Is it possible to have the bot reply privately anywhere in my Slack workspace when a user sends /gpt [question]?
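Bolt for Python supports ephemeral replies to slash commands, so a handler could take roughly this shape. The /gpt command and the `call_chatgpt` helper are assumptions, not existing app code; the handler is written as a plain function (it would be registered with `@app.command("/gpt")` in a real Bolt app) so the shape is easy to see:

```python
def handle_gpt_command(ack, respond, command, call_chatgpt):
    # Bolt slash-command handler shape: acknowledge within 3 seconds,
    # then reply only to the invoking user via an ephemeral response.
    ack()
    question = command.get("text", "")
    # call_chatgpt() is a hypothetical helper standing in for the OpenAI call.
    answer = call_chatgpt(question)
    respond(text=answer, response_type="ephemeral")
```

With `response_type="ephemeral"`, only the user who typed /gpt sees the answer, in any channel the app is installed in.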
Is the use of "gpt-3.5-turbo-0301" hardcoded in the script? I see in the response/usage stats that it's using that -0301 beta model. Will I need to change the code once that specific version is no longer supported? (Thanks for the script; it works great in my Slack env, btw.)
ChatCompletion in ChatGPT has three roles: "system", "user", and "assistant".
It appears that after a reply by the assistant in a thread, that reply is treated as the "user" role in subsequent conversations.
I have grep'd for "assistant" in this repository and have not been able to find any place in the code where the role is set.
It may be more accurate to include the assistant's reply in the thread as role: "assistant" in the ChatCompletion API messages.
Is this intentional or not?
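The mapping being asked about can be sketched as a tiny helper (a simplified stand-in for the listener code, not the app's actual implementation): a thread reply should be sent back to the ChatCompletion API as "assistant" only when its author is the bot itself.

```python
def classify_reply_role(reply: dict, bot_user_id: str) -> str:
    # Replies authored by the bot itself go back to the ChatCompletion
    # API as "assistant"; everything else (including bot-less payloads)
    # is treated as "user".
    if reply.get("user") == bot_user_id:
        return "assistant"
    return "user"
```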
I've noticed that running in dev never has this problem, but main_prod.py on Lambda might.
The way it's designed, the reply listener triggers for first time messages as well but should early terminate — https://github.com/seratch/ChatGPT-in-Slack/blob/main/app/bolt_listeners.py#L311
But something about the way it's deployed to prod seems to be causing multiple replies sometimes
Hello, I couldn't find an issue section for the deno version of this and have decided to drop the message here.
First off, thank you for making this starter code; it's very helpful. One issue I have been encountering is the token getting revoked when the OpenAI call or any other awaited function takes too long (I haven't checked which API call is taking too long, but I've done enough debugging to know that the OpenAI call is most likely the culprit).
More specifically:
export default SlackFunction(def, async ({ inputs, env, token }) => {
const client = new SlackAPIClient(token);
if (!inputs.thread_ts) {
return { outputs: {} };
}
The token that is passed in eventually expires by the time it gets to the postMessage function:
const replyResponse = await client.chat.postMessage({
channel: inputs.channel_id,
thread_ts: inputs.thread_ts,
text: answer,
});
I've looked around the documentation and couldn't find a way to lengthen the life of the token that is getting passed to the client. If there's documentation that I can follow that would be great!
The error is below:
2024-03-03 23:49:38 [error] [Wf06NCEUSHRN] (Trace=Tr06M9FPL39V) Trigger for workflow 'Post a ChatGPT reply within a discussion' failed: parameter_validation_failed
2024-03-03 23:49:38 [error] [Wf06NCEUSHRN] (Trace=Tr06M9FPL39V) - Null value for non-nullable parameter `thread_ts`
2024-03-03 23:49:40 [error] [Fn06MS50HHHA] (Trace=Tr06M9FPGCDV) Function 'Discuss a topic in a Slack thread' (app function) failed
event_dispatch_failed
2024-03-03 23:49:40 [error] [Wf06NCEUSHRN] (Trace=Tr06M9FPGCDV) Workflow step 'Discuss a topic in a Slack thread' failed
2024-03-03 23:49:40 [error] [Wf06NCEUSHRN] (Trace=Tr06M9FPGCDV) Workflow 'Post a ChatGPT reply within a discussion' failed
Function failed to execute
error: Uncaught (in promise) SlackAPIError: Failed to call chat.postMessage due to token_revoked: {"ok":false,"error":"token_revoked","headers":{}}
throw new SlackAPIError(name, result.error, result);
^
at SlackAPIClient.call (https://deno.land/x/[email protected]/client/api-client.ts:530:13)
at eventLoopTick (ext:core/01_core.js:166:7)
at async AsyncFunction.<anonymous> (file://GitHub/chatgpt-on-deno/functions/discuss.ts:115:25)
at async Object.RunFunction [as function_executed] (https://deno.land/x/[email protected]/run-function.ts:28:53)
at async DispatchPayload (https://deno.land/x/[email protected]/dispatch-payload.ts:79:12)
at async runLocally (https://deno.land/x/[email protected]/local-run-function.ts:36:16)
at async https://deno.land/x/[email protected]/local-run-function.ts:55:3
Running ./validate.sh produces the following errors.
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 476, in show_summarize_option_modal: No attribute 'get' on None [attribute-error]
In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 598, in ack_summarize_options_modal_submission: No attribute 'get' on None [attribute-error]
In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 651, in prepare_and_share_thread_summary: No attribute 'get' on None [attribute-error]
In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 786, in ack_proofreading_modal_submission: No attribute 'split' on None [attribute-error]
In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 817, in display_proofreading_result: No attribute 'split' on None [attribute-error]
In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 860, in display_proofreading_result: Name 'text' is not defined [name-error]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 882, in display_proofreading_result: Name 'text' is not defined [name-error]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 934, in ack_chat_from_scratch_modal_submission: No attribute 'split' on None [attribute-error]
In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 964, in display_chat_from_scratch_result: No attribute 'split' on None [attribute-error]
In Optional[Any]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 1003, in display_chat_from_scratch_result: Name 'text' is not defined [name-error]
File "/path/to/ChatGPT-in-Slack/app/bolt_listeners.py", line 1023, in display_chat_from_scratch_result: Name 'text' is not defined [name-error]
@seratch I may be hallucinating like an LLM, but previously, if the configuration was set, I believe the "Configure" button wasn't displayed on the home page. Now it seems to be. I really don't want people to overwrite the config once I set it up.
Does it make sense to pass a configured flag to build_home_tab() and not show the "Configure" button if it is already configured?
ChatGPT-in-Slack/app/slack_ops.py
Line 121 in 7fbc3e4
Part of the reason I ask is that I really don't understand single_workspace_mode and why it is passed as True in main.py but not in main_prod.py. Note: I'm using serverless to deploy to AWS, so main_prod.py applies to me, but we only have a single workspace.
Also, if it is configured, would you be OK with displaying the model being used on the home page? People keep saying it is 3.5 but I have gpt-4 configured.
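The change being proposed could be sketched like this. The `configured` parameter and the block layout here are hypothetical; build_home_tab in app/slack_ops.py exists but takes different arguments:

```python
def build_home_tab(message: str, configured: bool = False) -> dict:
    # Hypothetical variant: only render the "Configure" button
    # while no OpenAI API key has been saved yet.
    blocks = [
        {"type": "section", "text": {"type": "mrkdwn", "text": message}},
    ]
    if not configured:
        blocks.append(
            {
                "type": "actions",
                "elements": [
                    {
                        "type": "button",
                        "text": {"type": "plain_text", "text": "Configure"},
                        "action_id": "configuration",
                    }
                ],
            }
        )
    return {"type": "home", "blocks": blocks}
```

Displaying the configured model name in the message section would address the second request in the same place.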
It seems that the translation results of the UI elements sometimes contain extra line breaks, as shown below, which might be the cause.
To enable this app in this Slack workspace, you need to save an OpenAI API key. To grab a key, visit <https://platform.openai.com/account/api-keys|your developer page>!
Settings
Could you proofread this sentence without changing its meaning?
(Start a chat from scratch)
Start
Chat Templates
Settings
Could you generate an image as instructed?
Can you generate variations of my image?
Might be a cost problem if you create a bot in a large Slack workspace without any limit 🤣
Just leaving this as an idea: a config file, or similar, where I can easily set different system texts based on Slack channel ID or name, so that depending on channel context the bot uses a specific system text. Yes, you could create multiple bots, but in our use case we are working with different clients, and rather than priming the bot with a couple of messages each time, it would get that via the system text :)
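A rough sketch of what such a lookup could look like. The config shape, the channel IDs, and the helper are all hypothetical; the point is just a per-channel override with a global fallback:

```python
DEFAULT_SYSTEM_TEXT = "You are a helpful assistant."

# Hypothetical config: Slack channel ID -> system text for that channel.
SYSTEM_TEXTS_BY_CHANNEL = {
    "C0123CLIENTA": "You are an assistant for Client A. ...",
    "C0456CLIENTB": "You are an assistant for Client B. ...",
}

def system_text_for_channel(channel_id: str) -> str:
    # Fall back to the global default when the channel has no override.
    return SYSTEM_TEXTS_BY_CHANNEL.get(channel_id, DEFAULT_SYSTEM_TEXT)
```

The dictionary could equally be loaded from a JSON or YAML file at startup so operators can edit it without touching code.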
Description
At bolt_listeners.py#L344, the role is always "user".
Issue
This static assignment of the role potentially limits functionality, as it doesn't take the actual message author into account.
Recommended Fix
The role should be determined from reply["user"], similar to how it's done in bolt_listeners.py#L105-L109. This way, the author's actual role is recognized and utilized.
Hey,
I ran this app in dev, but it fails, saying "The model: gpt-4 does not exist".
What is the problem?
Hello, quick question on the use of socket mode in this slack bot: Can Slack access my Google Cloud VPC (which I'm routing all requests to my bot through) with Socket Mode switched on or do I need to configure additional ingress controls via a load balancer to allow for the bot to send requests into my VPC?
Also, I've encountered issues where my Slack bot stops listening to incoming messages after restarting my app (with Socket Mode enabled) when testing locally. I managed to get the bot to work after generating a new Slack app token and restarting the app with this new token. Could you advise on the cause of this issue?
Hello,
First of all, I want to extend my gratitude for creating such useful software.
To further enhance this software, I am proposing the addition of Function Calling support.
I have already pushed my implementation of this feature at https://github.com/iwamot/ChatGPT-in-Slack/commits/function-calling. In this implementation, the OPENAI_FUNCTION_CALL_MODULE_NAME
environment variable is used to specify the name of the module that contains the functions. If not specified, the application will retain its current behavior.
May I proceed to create a PR for this proposed addition? I look forward to your feedback. Thank you very much.
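For readers wondering how an environment variable can point at a module of functions, the loading side could be sketched like this. The variable name comes from the proposal above; the helper itself is an assumption, not the proposer's actual implementation:

```python
import importlib
import os

def load_function_call_module():
    # Import the module named by OPENAI_FUNCTION_CALL_MODULE_NAME, if set;
    # return None to keep the current (function-calling disabled) behavior.
    module_name = os.environ.get("OPENAI_FUNCTION_CALL_MODULE_NAME")
    if not module_name:
        return None
    return importlib.import_module(module_name)
```

The returned module's callables could then be described to the OpenAI API and dispatched by name when the model requests a function call.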
Good evening, trying this bot out today and having trouble locating the "Bot Token".
According to the README.md you just: "# Install the app into your workspace to grab this token"
I've installed the app into a testing workspace but don't see anything about a bot token:
I can copy its name and link, and view its 'Member ID' and 'Channel ID' from inside my Slack client.
On the https://api.slack.com/ page I have the following, but apparently none of these are the elusive 'bot token':
Sorry about the noob question, but where can I find this 'Bot Token' exactly?
Hi, first, thank you for building this; I really appreciate it and would be happy to help! I'm trying to use it, and it actually works perfectly for a few minutes, then the deploy fails on Render.com with the following error
Do you have suggestions for fixing or a suggested method for deploying elsewhere? Thanks
Hi, thanks for the bot!
A request we had is to allow direct messages with the bot in the "Messages Tab". I see this is disabled for now.
Is there a way to support direct IMs with the bot using the Slack Conversations API, and respond in threads as if it was a normal channel?
Regards,
Zach
⬆️
Hi, thanks for the wonderful code. It's working well on my end!
A function I would like to add is that when users in the channel upload a file (such as a PDF) while having a conversation with ChatGPT, the ChatGPT bot should be able to read it and answer related questions. What's a possible approach to that?
Does Slack have any Python API to get the uploaded file from the client, so that I can write some code to process the contents of the PDF (text, tables, figures, etc.) into a string to feed to ChatGPT?
Any discussion and suggestions are appreciated.
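Slack does expose uploaded files through the Web API: a message event carries a "files" array, and each file's "url_private" endpoint serves the bytes when the bot token is sent in an Authorization header. A sketch of the download step (PDF parsing itself, e.g. with pypdf, is out of scope here):

```python
import urllib.request

def download_slack_file(file_info: dict, bot_token: str) -> bytes:
    # `file_info` is one element of event["files"]; downloading its
    # "url_private" URL requires the bot token as a Bearer header.
    req = urllib.request.Request(
        file_info["url_private"],
        headers={"Authorization": f"Bearer {bot_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The returned bytes could then be handed to a PDF/text extractor and the extracted string appended to the ChatCompletion messages.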
It would be useful to easily replace it with a model other than GPT, such as Anthropic's Claude.
Should I fork this repository and extend the functionality myself?
I am having difficulty understanding how to deploy to Lambda environments with S3.
I would greatly appreciate it if you could provide me with detailed instructions, please.
Hi, we've been using this repo for a while and recently also came across this — https://github.com/seratch/chatgpt-on-deno
Noticed a few differences
Are the discrepancies because the new API doesn't support those features? We are trying to decide which one to use as a starting point for development and would be happy to contribute back when our changes help the upstream's goals.
Without a Request URL, we cannot set up the Slack app. What is the Request URL? How can we set one up locally and run the app?
Thanks for the project.
Would it be possible to invoke ChatGPT in a thread that does not start by mentioning it? This is especially relevant for summarizing long threads.
So, this might be down to the prompt, but I'm not sure. The bot keeps referencing itself with @ in its reply. Could I change something here so that it references the user instead? I added the "prepend part" in the prompt; is it because of that? (See the included screenshot of my bot and system prompt.)
System message;
You are a strategist Slack chatbot designed to assist UX designers, copywriters, strategists, and product owners at a digital services and PR agency. Your role is to provide expert insights and recommendations on various digital strategies, including content strategy, social media strategy, search engine optimization (SEO), user experience (UX), user interface (UI), service design and website design. As a Slack chatbot, you should be able to engage in conversations with users, answer their questions, and provide personalized recommendations based on their specific needs and goals. Your ultimate goal is to be user-friendly, conversational, and a part of a Slack channel , so that users can seamlessly access your expertise and insights. You might receive messages from multiple people. Each message has the author id prepended, like this: "<@U1234> message text". You are called "Sensei".
It occurs because the payload of messages from a bot with a modified username does not contain the 'user' key.
=> In lines 107 and 345, the execution will end with an error.
In my instance of your bot, I have corrected these parts of the code, and everything works fine for me now. It might be useful for you to make these adjustments as well so that others will also have the bot responding in threads with other bots.
Here is the diff output of the changes made:
@@ -104,11 +104,11 @@
         {
             "role": (
                 "assistant"
-                if reply["user"] == context.bot_user_id
+                if "user" in reply and reply["user"] == context.bot_user_id
                 else "user"
             ),
             "content": (
-                f"<@{reply['user']}>: "
+                f"<@{reply['user'] if 'user' in reply else reply['username']}>: "
                 + format_openai_message_content(
                     reply_text, TRANSLATE_MARKDOWN
                 )
@@ -341,7 +341,9 @@
         {
             "content": f"<@{msg_user_id}>: "
             + format_openai_message_content(reply_text, TRANSLATE_MARKDOWN),
             "role": (
-                "assistant" if reply["user"] == context.bot_user_id else "user"
+                "assistant" if "user" in reply and reply["user"] == context.bot_user_id else "user"
             ),
         }
     )
Coming here from the example in LocalAI: https://github.com/go-skynet/LocalAI/tree/master/examples/slack-bot
Is there a way to integrate an LLM besides ChatGPT? I am using llama.cpp. It appears this repo may only support ChatGPT via OpenAI tokens.
OpenAI currently doesn't seem to have an SLA, but Azure OpenAI has a 99.9% SLA and is available in more regions; it would be great if we could easily switch to Azure OpenAI.
Reference:
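For context, the openai 0.x Python package (which this repo used before the 1.x migration mentioned above) can be pointed at an Azure OpenAI resource through module-level settings, roughly like this. The resource name, deployment name, API version, and key below are placeholders:

```python
import openai

# Placeholders: substitute your own Azure resource, deployment, and key.
openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<your-azure-openai-key>"

# With api_type="azure", ChatCompletion targets a deployment ("engine")
# instead of an OpenAI model name.
response = openai.ChatCompletion.create(
    engine="<your-deployment-name>",
    messages=[{"role": "user", "content": "Hello"}],
)
```

Making these settings configurable via environment variables would let the app switch backends without code changes.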
First, awesome app that you built! 👍
I have one feature request so far: while the response is being streamed back (the message is updated), code blocks don't look nice because they are opened but not yet closed. It would be awesome if code blocks (and potentially other formatting) already looked good while they are not yet complete. Do you think you can add this? Happy to donate something 😊
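One lightweight way to handle the code-block case, assuming the updater can post-process the accumulated partial text: count the triple-backtick fences and append a temporary closing fence while the count is odd. This helper is a sketch, not existing app code:

```python
FENCE = "`" * 3  # the Markdown code-fence marker

def close_open_code_fences(partial_text: str) -> str:
    # While streaming, the partial message may contain an opened but not
    # yet closed code fence; append a temporary closing fence so Slack
    # renders the in-progress code block correctly.
    if partial_text.count(FENCE) % 2 == 1:
        return partial_text + "\n" + FENCE
    return partial_text
```

Each chat.update call would send the fixed-up text; once the stream finishes, the real closing fence arrives and the helper becomes a no-op.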