
OpenAI on WeChat

🤖️ Turn your WeChat into an auto-reply chatbot powered by OpenAI within only 2 steps! 🤖️

Your Chatbot in Group Chat!

Acknowledgement & Features

This project is built on top of this amazing project that I contributed to before. With the Wechaty SDK and an OpenAI API key, we achieve:

  • fast and robust connections to dozens of AI models with different features
  • stable and persistent deployment on the Railway cloud platform

0. Table of Contents

1. How to Deploy this Bot?

You can deploy locally or on the cloud, whichever you prefer.

Deploying on the cloud is recommended.

1.1 Deploy in Local

1.1.1 Get your OpenAI API Keys

Create an account on the OpenAI platform and generate an API key from your account page; if you belong to an organization, you can also find your organization ID in the organization settings.

1.1.2 Configure Environment Variables

You can copy the template config.yaml.example into a new file config.yaml and paste in the configuration:

openaiApiKey: "<your_openai_api_key>"
openaiOrganizationID: "<your_organization_id>"
chatgptTriggerKeyword: "<your_keyword>"

Or you can export the environment variables listed in .env.sample to your system, which is the recommended way to keep your OpenAI API Key safe:

export OPENAI_API_KEY="sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
export OPENAI_ORGANIZATION_KEY="org-XXXXXXXXXXXXXXX"
export CHATGPT_TRIGGER_KEYWORD="机器人你好:"
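
For reference, here is a minimal sketch of how these two sources could be combined, preferring environment variables over config.yaml. The js-yaml dependency and the loadConfig name are just assumptions for illustration; the actual loading code in this repo may look different.

import fs from "fs";
import yaml from "js-yaml";

// Sketch only: the three settings shown above.
interface Config {
  openaiApiKey: string;
  openaiOrganizationID: string;
  chatgptTriggerKeyword: string;
}

function loadConfig(): Config {
  // Prefer environment variables so the API key never has to live in the repo.
  const apiKey = process.env.OPENAI_API_KEY;
  if (apiKey) {
    return {
      openaiApiKey: apiKey,
      openaiOrganizationID: process.env.OPENAI_ORGANIZATION_KEY ?? "",
      chatgptTriggerKeyword: process.env.CHATGPT_TRIGGER_KEYWORD ?? "",
    };
  }
  // Fall back to config.yaml for local development.
  return yaml.load(fs.readFileSync("config.yaml", "utf8")) as Config;
}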

Please note:

  • chatgptTriggerKeyword is the keyword that triggers an auto-reply (a sketch of this logic follows the list):
    • In private chat, a message starting with it will trigger an auto-reply
    • In group chat, a message starting with @Name <keyword> will trigger an auto-reply (here @Name means "@ the bot" in the group chat)
  • chatgptTriggerKeyword can be an empty string, which means:
    • In private chat, every message will trigger an auto-reply
    • In group chat, only "@ the bot" will trigger an auto-reply
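
To make the two cases above concrete, here is a minimal sketch of how the trigger check could look with the Wechaty message API (message.text(), message.room() and message.mentionSelf() are standard Wechaty Message methods; the actual logic in src/main.ts may differ):

import { Message } from "wechaty";

// Sketch only: decide whether an incoming message should trigger an auto-reply.
async function shouldReply(message: Message, triggerKeyword: string): Promise<boolean> {
  const text = message.text();
  const inGroupChat = !!message.room();

  if (inGroupChat) {
    // Group chat: only react when the bot is @-mentioned, and (if a keyword
    // is configured) the mention is followed by that keyword.
    if (!(await message.mentionSelf())) {
      return false;
    }
    return triggerKeyword === "" || text.includes(triggerKeyword);
  }

  // Private chat: reply when the message starts with the keyword;
  // an empty keyword means every private message triggers a reply.
  return text.startsWith(triggerKeyword);
}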

1.1.3 Set up Docker

  1. Build the Docker image
docker build -t openai-on-wechat .
  2. Run the Docker container
docker run -v $(pwd)/config.yaml:/app/config.yaml openai-on-wechat

1.1.4 Log in to your WeChat

Once you deploy the bot successfully, follow the prompts in the terminal or the Docker container logs carefully:

  1. Scan the QR Code with mobile WeChat
  2. Click "Log in" to allow desktop login (where our bot stays)
  3. Wait a few seconds and start chatting!

🤖 Enjoy your powerful chatbot! 🤖


1.2 Deploy on Cloud

Click the button below to fork this repo and deploy with Railway!

Deploy on Railway


1.2.1 Configure on Railway

Fill in the following blanks:

  1. Your forked repo name (can be any name you like)
  2. Choose whether to make it private (also up to you)
  3. Environment variables (for how to get OpenAI API keys, please refer to 1.1.1 Get your OpenAI API Keys)

Railway Config

Please note:

Make sure the environment variables are set in Railway instead of written directly into config.yaml. It is really NOT recommended to expose your OpenAI API Key in a public repo: anyone with your key can access the OpenAI API services, and you may lose money if you are on a paid plan.


1.2.2 Deploy & Login on Railway

The deploy process is automatic and may take a few minutes the first time. Once you see Success, click the tab to see the details (which is your secret WeChat console!).

Railway Deploy

Click Deploy Logs and you will see everything being set up. Wait for a QR Code to pop up, scan it as if you were logging in to desktop WeChat, and click "Log in" on your mobile WeChat.

Railway Scan QR Code

Finally, everything is good to go! You will see logs when people send you messages and whenever the chatbot is auto-triggered to reply.

Railway Log

🤖 Enjoy your powerful chatbot! 🤖

2. Any Fancy Advanced Settings?

2.1 Config Reply on Error

When the OpenAI API encounters errors (e.g. overloaded traffic, missing authorization, ...), the chatbot will auto-reply with a pre-configured message.

You can change it in src/chatgpt.js:

const chatgptErrorMessage = "🤖️:AI机器人摆烂了,请稍后再试~"; // "🤖️: The AI bot is slacking off, please try again later~"
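
For context, here is a minimal sketch of how this fallback could be wired around the OpenAI call. callOpenAI is a hypothetical wrapper around the createCompletion() call shown in section 2.3; the real structure in src/chatgpt.js may differ.

import { Message } from "wechaty";

// Hypothetical wrapper around the createCompletion() call from section 2.3.
declare function callOpenAI(prompt: string): Promise<string>;

// Sketch only: reply with the pre-configured message whenever the API call fails.
async function safeReply(message: Message, inputText: string): Promise<void> {
  try {
    const answer = await callOpenAI(inputText);
    await message.say(answer);
  } catch (error) {
    console.error("OpenAI API error:", error);
    await message.say(chatgptErrorMessage);
  }
}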

2.2 Config OpenAI Models

You can switch to whichever OpenAI model you like to handle tasks at a different capability/latency trade-off (e.g. a more capable model takes more time to respond).

Currently, we use the latest text-davinci-003 model, which is:

Most capable GPT-3 model. Can do any task the other models can do, often with higher quality, longer output and better instruction-following. Also supports inserting completions within text.

Also, for the same model, we can configure dozens of parameters (e.g. answer randomness, maximum token limit...). For example, for the temperature field:

Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.

You can configure all of them in src/chatgpt.js:

const ChatGPTModelConfig = {
  // this model field is required
  model: "text-davinci-003",
  // add your OpenAI model parameters below
  temperature: 0.3,
  max_tokens: 2000,
};

For more details, please refer to OpenAI Models Doc.


2.3 Config Model Features

You can switch to whichever feature you like to handle different types of tasks (e.g. completing text, editing text, generating code...).

Currently, we use createCompletion() to generate or manipulate text for daily usage, which:

Creates a completion for the provided prompt and parameters

You can configure it in src/chatgpt.js:

const response = await this.OpenAI.createCompletion({
  ...ChatGPTModelConfig,
  prompt: inputMessage,
});

Of course you can ask it to edit text in the current mode, but the outcome may fall short of expectations.
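
If you specifically want text editing, the v3 OpenAI SDK also exposes a createEdit() method. Below is a small sketch of how it could be called; this is not wired into the repo today, and the edit model name is simply the one documented for that endpoint:

// Sketch only: using the edit endpoint instead of completions.
const editResponse = await this.OpenAI.createEdit({
  model: "text-davinci-edit-001",
  input: inputMessage,
  instruction: "Fix the grammar and improve the wording",
});
const editedText = editResponse.data.choices[0].text;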

For more details, please refer to OpenAI API Doc.


2.4 Add Customized Task Handler

You can add your own task handlers to expand the ability of this chatbot!

Currently, you can add task handlers in src/main.ts:

// e.g. if a message starts with "Hello", the bot sends "World!"
if (message.text().startsWith("Hello")) {
  await message.say("World!");
  return;
}

Of course, stuffing all the handlers into the main function is really a BAD coding habit. We will fix this in future updates with proper logic separation (one possible shape is sketched below).
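
One possible shape for that separation is a small handler registry that the main message listener simply iterates over. This is only a sketch of the idea, not code that exists in the repo yet:

import { Message } from "wechaty";

// Sketch only: each handler decides whether it matches a message and how to reply.
interface TaskHandler {
  match: (text: string) => boolean;
  handle: (message: Message) => Promise<void>;
}

const taskHandlers: TaskHandler[] = [
  {
    match: (text) => text.startsWith("Hello"),
    handle: async (message) => {
      await message.say("World!");
    },
  },
  // Register more handlers here without touching the main message loop.
];

// Returns true if a custom handler replied, so the caller can skip the default ChatGPT reply.
async function dispatch(message: Message): Promise<boolean> {
  for (const handler of taskHandlers) {
    if (handler.match(message.text())) {
      await handler.handle(message);
      return true;
    }
  }
  return false;
}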

3. How to Contribute to this Project?

You can raise issues, fork this repo, commit your code, and submit a pull request; after code review, we can merge your patch. I'm really looking forward to developing more interesting features!
