mckaywrigley / chatbot-ui

AI chat for every model.

Home Page: https://chatbotui.com

License: MIT License

TypeScript 95.57% JavaScript 0.25% CSS 0.22% Shell 0.01% PLpgSQL 3.94%

chatbot-ui's Introduction

Chatbot UI

The open-source AI chat app for everyone.


Demo

View the latest demo here.

Updates

Hey everyone! I've heard your feedback and am working hard on a big update.

Things like simpler deployment, better backend compatibility, and improved mobile layouts are on their way.

Be back soon.

-- Mckay

Official Hosted Version

Use Chatbot UI without having to host it yourself!

Find the official hosted version of Chatbot UI here.

Sponsor

If you find Chatbot UI useful, please consider sponsoring me to support my open-source work :)

Issues

We restrict "Issues" to actual issues related to the codebase.

We receive an excessive number of issues that are really feature requests, cloud provider problems, etc.

If you are having issues with things like setup, please refer to the "Help" section in the "Discussions" tab above.

Issues unrelated to the codebase will likely be closed immediately.

Discussions

We highly encourage you to participate in the "Discussions" tab above!

Discussions are a great place to ask questions, share ideas, and get help.

Odds are if you have a question, someone else has the same question.

Legacy Code

Chatbot UI was recently updated to its 2.0 version.

The code for 1.0 can be found on the legacy branch.

Updating

In your terminal at the root of your local Chatbot UI repository, run:

npm run update

If you run a hosted instance you'll also need to run:

npm run db-push

to apply the latest migrations to your live database.

Local Quickstart

Follow these steps to get your own Chatbot UI instance running locally.

You can watch the full video tutorial here.

1. Clone the Repo

git clone https://github.com/mckaywrigley/chatbot-ui.git
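Then change into the cloned directory (the directory name below assumes the default produced by the clone command above):

cd chatbot-ui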

2. Install Dependencies

Open a terminal in the root directory of your local Chatbot UI repository and run:

npm install

3. Install Supabase & Run Locally

Why Supabase?

Previously, we used local browser storage to store data. However, this was not a good solution for a few reasons:

  • Security issues
  • Limited storage
  • Limits multi-modal use cases

We now use Supabase because it's easy to use, it's open-source, it's Postgres, and it has a free tier for hosted instances.

We will support other providers in the future to give you more options.

1. Install Docker

You will need to install Docker to run Supabase locally. You can download it here for free.

2. Install Supabase CLI

MacOS/Linux

brew install supabase/tap/supabase

Windows

scoop bucket add supabase https://github.com/supabase/scoop-bucket.git
scoop install supabase
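On either platform, you can verify the CLI is installed by running:

supabase --version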

3. Start Supabase

In your terminal at the root of your local Chatbot UI repository, run:

supabase start

4. Fill in Secrets

1. Environment Variables

In your terminal at the root of your local Chatbot UI repository, run:

cp .env.local.example .env.local

Get the required values by running:

supabase status

Note: Use the API URL from supabase status as the value for NEXT_PUBLIC_SUPABASE_URL.

Now go to your .env.local file and fill in the values.

If an environment variable is set, the corresponding input will be disabled in the user settings.
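As a rough sketch (not the full file), a filled-in .env.local might look like the following; the URL shown is the typical local default, and the keys are placeholders for the values printed by supabase status:

NEXT_PUBLIC_SUPABASE_URL=http://127.0.0.1:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=<anon key from supabase status>
SUPABASE_SERVICE_ROLE_KEY=<service_role key from supabase status>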

2. SQL Setup

In the 1st migration file supabase/migrations/20240108234540_setup.sql you will need to replace 2 values with the values you got above:

  • project_url (line 53): http://supabase_kong_chatbotui:8000 (default) can remain unchanged if you don't change your project_id in the config.toml file
  • service_role_key (line 54): You got this value from running supabase status

This prevents issues with storage files not being deleted properly.

5. Install Ollama (optional for local models)

Follow the instructions here.

6. Run app locally

In your terminal at the root of your local Chatbot UI repository, run:

npm run chat

Your local instance of Chatbot UI should now be running at http://localhost:3000. Be sure to use a compatible Node version (e.g. v18).
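If you use nvm, one way to switch to a compatible Node version (assuming nvm is already installed) is:

nvm install 18
nvm use 18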

You can view your backend GUI at http://localhost:54323/project/default/editor.

Hosted Quickstart

Follow these steps to get your own Chatbot UI instance running in the cloud.

Video tutorial coming soon.

1. Follow Local Quickstart

Repeat steps 1-4 in "Local Quickstart" above.

You will want separate repositories for your local and hosted instances.

Create a new repository for your hosted instance of Chatbot UI on GitHub and push your code to it.
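For example, assuming you have already created an empty repository on GitHub, pushing your code to it might look like this (the remote URL and branch name are placeholders):

git remote add hosted https://github.com/<your-username>/<your-hosted-repo>.git
git push hosted main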

2. Setup Backend with Supabase

1. Create a new project

Go to Supabase and create a new project.

2. Get Project Values

Once you are in the project dashboard, click on the "Project Settings" icon tab on the far bottom left.

Here you will get the values for the following environment variables:

  • Project Ref: Found in "General settings" as "Reference ID"

  • Project ID: Found in the URL of your project dashboard (Ex: https://supabase.com/dashboard/project/<YOUR_PROJECT_ID>/settings/general)

While still in "Settings" click on the "API" text tab on the left.

Here you will get the values for the following environment variables:

  • Project URL: Found in "API Settings" as "Project URL"

  • Anon key: Found in "Project API keys" as "anon public"

  • Service role key: Found in "Project API keys" as "service_role" (Reminder: Treat this like a password!)

3. Configure Auth

Next, click on the "Authentication" icon tab on the far left.

In the text tabs, click on "Providers" and make sure "Email" is enabled.

We recommend turning off "Confirm email" for your own personal instance.

4. Connect to Hosted DB

Open up your repository for your hosted instance of Chatbot UI.

In the 1st migration file supabase/migrations/20240108234540_setup.sql you will need to replace 2 values with the values you got above:

  • project_url (line 53): Use the Project URL value from above
  • service_role_key (line 54): Use the Service role key value from above

Now, open a terminal in the root directory of your local Chatbot UI repository. We will execute a few commands here.

Login to Supabase by running:

supabase login

Next, link your project by running the following command with the "Project ID" you got above:

supabase link --project-ref <project-id>

Your project should now be linked.

Finally, push your database to Supabase by running:

supabase db push

Your hosted database should now be set up!

3. Setup Frontend with Vercel

Go to Vercel and create a new project.

In the setup page, import your GitHub repository for your hosted instance of Chatbot UI. Within the project Settings, in the "Build & Development Settings" section, switch Framework Preset to "Next.js".

In environment variables, add the following from the values you got above:

  • NEXT_PUBLIC_SUPABASE_URL
  • NEXT_PUBLIC_SUPABASE_ANON_KEY
  • SUPABASE_SERVICE_ROLE_KEY
  • NEXT_PUBLIC_OLLAMA_URL (only needed when using local Ollama models; default: http://localhost:11434)

You can also add API keys as environment variables.

  • OPENAI_API_KEY
  • AZURE_OPENAI_API_KEY
  • AZURE_OPENAI_ENDPOINT
  • AZURE_GPT_45_VISION_NAME

For the full list of environment variables, refer to the '.env.local.example' file. If an API key is set as an environment variable, the corresponding input will be disabled in the user settings.
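If you prefer the Vercel CLI to the dashboard, one way to add these variables (assuming the CLI is installed and your project is linked) is to run, for each variable you need:

vercel env add NEXT_PUBLIC_SUPABASE_URL
vercel env add NEXT_PUBLIC_SUPABASE_ANON_KEY
vercel env add SUPABASE_SERVICE_ROLE_KEY

Each command prompts for the value and the environments to apply it to.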

Click "Deploy" and wait for your frontend to deploy.

Once deployed, you should be able to use your hosted instance of Chatbot UI via the URL Vercel gives you.

Contributing

We are working on a guide for contributing.

Contact

Message Mckay on Twitter/X

chatbot-ui's People

Contributors

bmpolonsky, castortech, chatgpt-ai-user, davecrab, faraday, fbec76, fkesheh, francofantini, gijigae, gitgitgogogo, im-calvin, jgaltio, jzhangdev, kovrichard, matiasfnunezdev, mckaywrigley, meetpateltech, mikey032, mikodin, ochen1, perstarkse, qte123, sharma-shray, spammenotinoz, superhappychris, tim13246879, tomoyuki28jp, xycjscs, yoginderkumar, zacharytamas


chatbot-ui's Issues

Error "Sorry, there was an error."

I'm trying to get things running locally. I've cloned the repo, installed dependencies, provided my api key, but I keep receiving the message: Sorry, there was an error.

I've tried creating a new api key and it doesn't fix the issue. Although I can see on OpenAI's site that it updates my API key "last used" to today, so it does hit it.

I've also searched for an error log file but couldn't find it. Any help would be appreciated thanks! (Great work by the way!)


Support for deleting or editing messages

I believe the biggest strength of the API is being able to edit the whole conversation, similar to the OpenAI Playground. It would be great to have edit/delete features.

Feature Request: Allow to reply to messages to add context to the chat

Sample:
When chatting in a long thread, we sometimes need clarification on, or have a question about, an older message in the same thread. Currently, we must copy that content, scroll down, and paste it into a new question for the chat.

With a reply feature, we could simply click "reply" on a message and ask another question further down in the chat, with the replied-to text automatically included as part of the context.

Improve the new conversation behaviour.

We should refine the logic for creating new sessions to align it more closely with the ChatGPT website behavior. Specifically, when users click on "New Chat," they will be directed to an empty chat page, and the conversation will only be created in the sidebar after they send their first message.

Any way to set proxy?

I have a networking problem and it seems that I have to set a proxy. Message:

[TypeError: fetch failed] {
  cause: [ConnectTimeoutError: Connect Timeout Error] {
    name: 'ConnectTimeoutError',
    code: 'UND_ERR_CONNECT_TIMEOUT',
    message: 'Connect Timeout Error'
  }
}

Dialog Context gets stuck

After chatting back and forth a few times, the dialog context seems to freeze and the responses to a new query stay the same.

It seems there is an issue with the dialogue state? (GPT-3.5 default)

activate gpt-4

My API key gives me access to the gpt-4 model, and by un-commenting

// GPT_4 = "gpt-4"

and

// [OpenAIModel.GPT_4]: "GPT-4"

I could verify that everything works.

Now, I guess you kept it commented out to avoid being flooded with issues from people without gpt-4 access saying that GPT-4 is producing errors.

I wrote the following API route that returns the list of supported models available for a given API key, but I'm not familiar with Next.js, and I see an issue with hitting OpenAI again and again just to check the available models; caching should be used. Anyway, here's the endpoint code in case part of it is useful (maybe with getServerSideProps?):

// api/models.ts
import { OpenAIModel } from '@/types';

export const config = {
  runtime: "edge"
};

type Model = {
  id: string;
};

type ModelsData = {
  data: Model[];
};

const handler = async (req: Request): Promise<Response> => {
  try {
    // Use the key from the request body if provided, otherwise fall back to the server's key
    let key: string | null = null;
    try {
      key = (await req.json())?.key;
    } catch (e) {
      // No JSON body; fall back to the environment variable below
    }

    const res = await fetch("https://api.openai.com/v1/models", {
      headers: {
        Authorization: `Bearer ${key ? key : process.env.OPENAI_API_KEY}`
      },
    });

    if (res.status !== 200) {
      throw new Error(`OpenAI API returned an error ${res.status}: ${await res.text()}`);
    }

    // Keep only the models this app knows how to use
    const { data } = (await res.json()) as ModelsData;
    const availableModelIds = data.map((model: Model) => model.id);
    const supportedModels = availableModelIds.filter(
      (x) => (Object.values(OpenAIModel) as string[]).includes(x)
    );

    return new Response(JSON.stringify(supportedModels), {
      status: 200,
      headers: {
        'content-type': 'application/json',
      },
    });
  } catch (error) {
    console.error(error);
    return new Response("Error", { status: 500 });
  }
};

export default handler;

Error handling

An error message should be displayed, along with a retry button, when an error occurs.

Suggestion: Saveable System prompts

Currently, you have a default system prompt that resets with every new chat.

Having the ability to store and reuse our own customized system prompts within the interface would be an extremely valuable addition. This would allow a user to reuse prompts tailored to the specific behaviour they want from GPT in a given thread, instead of re-typing a new system prompt each time.


Keep focus on the chat input box after sending a message

First of all, I'd like to thank you for creating and maintaining this wonderful project. I have been using it and find it very helpful.

However, I noticed that the chat input box loses focus after sending a message. This requires users to click on the input box again with the mouse before they can type another message, which can be a bit inconvenient.

I would like to request a feature that keeps the focus on the chat input box after sending a message. This would allow users to continue typing and sending messages more efficiently, without having to manually click on the input box each time.

Best regards

UI design

Hi, is the UI intended to replicate the original ChatGPT? Right now it isn't the same, some things hurt my eyes, and there are some layout issues. If you intend to replicate it 100%, I'm happy to contribute on that end, fixing the mismatches and layout issues.

error - Class extends value undefined is not a constructor or null

error - Class extends value undefined is not a constructor or null

This might be caused by a React Class Component being rendered in a Server Component, React Class Components only works in Client Components. Read more: https://nextjs.org/docs/messages/class-component-in-server-component
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/file.js (evalmachine.:5724:19)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/formdata.js (evalmachine.:5881:49)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/body.js (evalmachine.:6094:35)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/response.js (evalmachine.:6510:49)
at __require (evalmachine.:14:50)
at (evalmachine.:11635:30)
at requireFn (file:///Users/bobjames/code/chatgpt/chatbot-ui/node_modules/next/dist/compiled/edge-runtime/index.js:1:7079) {
name: 'TypeError'
}

Feature request: Tree-like message history

Hello, I recently built a tree-like message history feature (link) and I would like to develop it further to make it more similar to a real ChatGPT conversation. However, implementing this feature would require refactoring the current structure of the conversation. I'm wondering if this is worth doing?

Publish Docker Image

Summary

Add a CI step to build and publish the docker image for this repo to the GitHub package registry.

Details

Use GitHub Actions to define a build/deploy step that runs on each commit to the main branch. The CI step should build the docker image and publish it to the GitHub package repository for this repo under both a tag with the short commit hash, as well as the latest tag.

This would allow users to pull an already-built docker image from the registry and enable easier use and integration of new changes in hosted docker environments.
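For reference, a rough sketch of the equivalent manual steps (the image name, registry path, and the presence of a Dockerfile at the repo root are all assumptions) might be:

# log in to the GitHub container registry (token needs the write:packages scope)
echo $GITHUB_TOKEN | docker login ghcr.io -u <your-username> --password-stdin

# build, tagging with both the short commit hash and latest
docker build -t ghcr.io/<owner>/chatbot-ui:$(git rev-parse --short HEAD) -t ghcr.io/<owner>/chatbot-ui:latest .

# push both tags
docker push --all-tags ghcr.io/<owner>/chatbot-ui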

Does not support users who type in Japanese

bug

The Japanese language has both kanji and hiragana, and the enter key is used to convert hiragana into kanji.
In the current implementation, handleSend is executed when the conversion is confirmed and starts sending messages.

solution

When the Enter key is pressed to confirm a conversion, the message should not be sent.


Feature Request: scrape URLs to get context from them

Sample:
While chatting with the bot, the user could add a URL for an article or some code on GitHub; the chatbot could identify that there is a URL, scrape it, extract the contents, and use them as context for the chat.

markdown rendering?

How can I make the markdown code/math render like it does in the real ChatGPT? Thanks!

error - Class extends value undefined is not a constructor or null

This might be caused by a React Class Component being rendered in a Server Component, React Class Components only works in Client Components. Read more: https://nextjs.org/docs/messages/class-component-in-server-component
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/file.js (evalmachine.:5724:19)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/formdata.js (evalmachine.:5881:49)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/body.js (evalmachine.:6094:35)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/response.js (evalmachine.:6510:49)
at __require (evalmachine.:14:50)
at (evalmachine.:11635:30)
at requireFn (file:///Users/bobjames/code/chatgpt/chatbot-ui/node_modules/next/dist/compiled/edge-runtime/index.js:1:7079) {
name: 'TypeError'
}

Requests

Great app! Here are a few possible improvements...

  • Streaming API response: currently you have to wait for the API completion to finish. I think the stream: true API parameter does this.
  • Stop API generation: if the response were streaming, a button so you can cut the bot off mid-generation, like with the original.
  • Flash of pure white screen on page load and reload: quite painful in dark mode.

Could not parse ~/Dropbox/github/package.json. Ignoring it.

I'm getting this:

$ npm run dev

> [email protected] dev
> next dev

ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info  - Loaded env from /Users/jay/Dropbox/github/chatbot-ui/.env.local
[Browserslist] Could not parse /Users/jay/Dropbox/github/package.json. Ignoring it.
[Browserslist] Could not parse /Users/jay/Dropbox/github/package.json. Ignoring it.
event - compiled client and server successfully in 1762 ms (173 modules)
wait  - compiling...
event - compiled successfully in 178 ms (139 modules)

Suggestion: Decouple auto scrolling when user manually scrolls during reply streaming.

Currently, when GPT writes a reply that extends past the bottom of the page, the interface automatically scrolls down to keep the streaming text in view. However, it would be more user-friendly if this auto-scrolling were decoupled whenever the user manually scrolls, similar to how the official ChatGPT works.

Sometimes we want to start reading the top of the reply before it finishes streaming.

UI responsive problem

I noticed that the UI doesn't seem to work properly when I switch to mobile mode. When I click on the conversation button, there is no response.

Getting error - Class extends value undefined is not a constructor or null

After setting up the env file, running "npm run dev" gives:

error - Class extends value undefined is not a constructor or null

This might be caused by a React Class Component being rendered in a Server Component, React Class Components only works in Client Components. Read more: https://nextjs.org/docs/messages/class-component-in-server-component
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/file.js (evalmachine.:5724:19)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/formdata.js (evalmachine.:5881:49)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/body.js (evalmachine.:6094:35)
at __require (evalmachine.:14:50)
at ../../node_modules/.pnpm/[email protected]/node_modules/undici/lib/fetch/response.js (evalmachine.:6510:49)
at __require (evalmachine.:14:50)
at (evalmachine.:11635:30)
at requireFn (file://D:\software\projects\html\chatbot-ui\node_modules\next\dist\compiled\edge-runtime\index.js:1:7079) {
name: 'TypeError'
}

Feature request: Speech2Text (Whisper) as chat input

Use the OpenAI Whisper API to transcribe the user's voice into the chat input field (OS-native transcription is inaccurate). There could be a passphrase (like "over out" or whatever) that functions as the [Enter] command and sends the query.

I could try to develop this feature if it's deemed valuable.
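For reference, a minimal sketch of the underlying OpenAI transcription call from the command line (the audio file name is a placeholder) might be:

curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F model=whisper-1 \
  -F file=@recording.mp3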

I'm sorry but the issue you and alanjustalan closed...

Sorry, there really are Python beginners like me 😅.

10 hours ago you told alanjustalan:

"Ah I see! Get rid of the < and >. Sorry for the confusion there. That should fix it!"

I'm getting the same error, but I don't understand. Do you mean there is a special character in my OpenAI API key?


Table not displaying correctly

Hello! I asked ChatGPT to make a table, but it doesn't display the table; it displays the raw data. Look at the picture to see what I'm talking about. Thank you!
