
hibobmaster / matrix_chatgpt_bot

76 stars · 3 watchers · 14 forks · 186 KB

A simple matrix bot that supports image generation and chatting using ChatGPT, Langchain

Home Page: https://matrix.to/#/#public:matrix.qqs.tw

License: MIT License

Python 99.56% Dockerfile 0.44%
chatgpt gpt3-turbo matrix matrix-nio matrix-synapse bot chatbot python matrix-dendrite langchain

matrix_chatgpt_bot's People

Contributors

dependabot[bot] · hibobmaster


matrix_chatgpt_bot's Issues

!pic timeout after 180s, even if set to 5000 in config.json

matrix_chatgpt_bot is set up with LocalAI.

config:

{
    "homeserver": "https://xxxxx.xxxxx.xxx",
    "user_id": "@mistral:xxxxx.xxxxx.xxx",
    "password": "MistralChatBot",
    "access_token": "xxxxx.xxxxx.xxxxxxxx.xxxxx.xxxxxxxx.xxxxx.xxxxxxxx.xxxxx.xxx",
    "device_id": "MatrixChatGPTBot",
    "room_id": "!xxxxx.xxxxx.xxxxxxxx.xxxxx.xxxxxxxx.xxxxx.xxxxxxxx.xxxxx.xxx",
    "openai_api_key": "xxxxxxxxxx",
    "gpt_api_endpoint": "http://xxx.xxx.xxx.xxx:8080/v1/chat/completions",
    "gpt_model": "gpt-4",
    "max_tokens": 4000,
    "top_p": 1.0,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "reply_count": 1,
    "temperature": 0.8,
    "system_prompt": "Antworte in einer Konversation auf Deutsch.",
    "image_generation_endpoint": "http://xxx.xxx.xxx.xxx:8080/v1/images/generations",
    "image_generation_backend": "localai",
    "image_generation_size": "256x256",
    "timeout": 5000.0
}

The timeout happens after 120 to 180 s, no matter which value is set in config.json.

httpx.ReadTimeout
2023-12-28 12:34:02,143 - ERROR - 
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/anyio/_core/_tasks.py", line 115, in fail_after
    yield cancel_scope
  File "/usr/local/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 34, in read
    return await self._stream.receive(max_bytes=max_bytes)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 1123, in receive
    await self._protocol.read_event.wait()
  File "/usr/local/lib/python3.11/asyncio/locks.py", line 213, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7facbfda7010
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
  File "/usr/local/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 32, in read
    with anyio.fail_after(timeout):
  File "/usr/local/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.11/site-packages/anyio/_core/_tasks.py", line 118, in fail_after
    raise TimeoutError
TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
  File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 371, in handle_async_request
    resp = await self._pool.handle_async_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 268, in handle_async_request
    raise exc
  File "/usr/local/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 251, in handle_async_request
    response = await connection.handle_async_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_async/connection.py", line 103, in handle_async_request
    return await self._connection.handle_async_request(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_async/http11.py", line 133, in handle_async_request
    raise exc
  File "/usr/local/lib/python3.11/site-packages/httpcore/_async/http11.py", line 111, in handle_async_request
    ) = await self._receive_response_headers(**kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_async/http11.py", line 176, in _receive_response_headers
    event = await self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_async/http11.py", line 212, in _receive_event
    data = await self._network_stream.read(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 31, in read
    with map_exceptions(exc_map):
  File "/usr/local/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/app/src/bot.py", line 1359, in pic
    image_path_list = await imagegen.get_images(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/src/imagegen.py", line 66, in get_images
    resp = await aclient.post(
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1877, in post
    return await self.request(
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1559, in request
    return await self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1646, in send
    response = await self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1674, in _send_handling_auth
    response = await self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1711, in _send_handling_redirects
    response = await self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1748, in _send_single_request
    response = await transport.handle_async_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 370, in handle_async_request
    with map_httpcore_exceptions():
  File "/usr/local/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 84, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout
2023-12-28 12:34:02,544 - INFO - Message received in room AI
mistral | > <@XXXX.XXXXX.XXX> !pic A foto of a black cat on a white table in a victorian garden, flower meadow, hdr photograph, professional, 4k, highly detailed, cinematic lighting | malformed, cropped
Image generation failed

Bot never seems to make requests to the language model

I am not sure what I am doing wrong.
The bot itself runs and reacts to !help and other commands,
but when I send !gpt or !chat, nothing happens.
I tested first with just the Matrix settings,
then with an API key,
then with the full env settings,
but the result was the same every time.

Based on thread conversations, a better way to integrate bots

We can create spaces in Matrix, where multiple rooms can be created and bots can be invited to join (or set to join automatically). Essentially, each room represents a topic, such as programming, translation, etc. The nature of the topic is determined by the room's description, which can serve as system prompts for the bots.
In these rooms, many threads can be initiated, just like this current one—where bots are set to reply within threads, engaging in conversations of varying context lengths about different issues.
We might also consider setting up bots to automatically trigger responses in specific rooms without needing to be mentioned.
Overall, this is just an idea. I'm very grateful to the developer for creating such a useful project, and I hope to have the opportunity to contribute to it.

node-chatgpt-api connecting issue

Hi, I got error below while running on an amd64 based machine.

# got error:
✔ Container node-chatgpt-api    Started  0.4s
✔ Container matrix-bot-bingai1  Started  0.4s
! api The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested  0.0s

But it appears the two containers are running though if I ask bing, it will have lots of errors with the most recent ones as below from docker logs command.
Traceback (most recent call last):
  File "/app/bing.py", line 28, in ask_bing
    resp = await self.session.post(url=self.bing_api_endpoint, json=self.data, timeout=120)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request
    conn = await self._connector.connect(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect
    proto = await self._create_connection(req, traces, timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection
    _, proto = await self._create_direct_connection(req, traces, timeout)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 1166, in _create_direct_connection
    raise ClientConnectorError(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host node-chatgpt-api:3000 ssl:default [Try again]

Is this due to a lack of arm64 support? Cheers.
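
As a possible workaround (assuming the arm64 host has Docker's QEMU emulation available), the platform mismatch can be addressed by pinning the service explicitly; a hedged docker-compose fragment, with the service name taken from the log:

```yaml
services:
  node-chatgpt-api:
    # Run the amd64 image under emulation on the arm64 host.
    # Emulation is slow and can cause a service to fail at runtime even
    # though the container shows as "Started", which would also explain
    # the later "Cannot connect to host node-chatgpt-api:3000" error.
    platform: linux/amd64
```

A native arm64 image, if one is published, would be preferable to emulation.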

Interactions with files

Hi,

Is it possible to have the bot summarize an uploaded file? I've been trying to get this working, using uploads in a thread and uploads in the chat window, but the model keeps replying that it doesn't know what file I'm referring to.

Thanks,

feature request: bot message limit warning message

Hi, we have used this Matrix bot in a chat application where users can use the publicly available AI bot for up to 10 messages; after that, it should send the user a message like "10 message limit reached, resume in 1 hour or add your own API key".

We tried to implement this but ran into issues where the bot stopped responding after the message limit was reached.

More context on our implementation: a user can add their own API key using a !key command, but after the 10-message warning the bot stops responding.

Any ideas how we could implement this?

thanks !

Sometimes there is no response to prompts

Errors only appear in the logs after a reboot:
[screenshot of the log]

  1. File and Line Numbers:

    • File "/app/src/main.py", line 125: The error occurs at line 125 in main.py, where asyncio.run(main()) is called to execute the main function.
    • File "/app/src/main.py", line 120: At line 120 in the main function, there is an await sync_task statement waiting for an asynchronous task to complete.
  2. Error Type:

    • asyncio.exceptions.CancelledError: This is an exception specific to asyncio, indicating that an asynchronous task was cancelled.
  3. Context of the Error:

    • The error occurs within the sync_forever function, where the program awaits a response (await self.run_response_callbacks([await response])). This function is called in a loop to continuously sync data.
    • The error happens in the get method of asyncio/queues.py, indicating that the task was cancelled while attempting to retrieve it from a queue.
  4. Timestamps in the Log:

    • The error happens at 2024-04-24 16:04:56.
    • Shortly after, at 2024-04-24 16:05:02, the log indicates "matrix chatgpt bot start.....", suggesting that despite the error, the program might be trying to restart or continue execution.
  5. Possible Causes:

    • CancelledError typically results from parts of the program explicitly cancelling a task, or from tasks not being properly completed during shutdown/restart processes.
    • In this case, the cancellation might be triggered by the Bot closed! event (indicated by INFO - Bot closed! in the log), initiating some cleanup process that cancels the ongoing asynchronous tasks.
  6. Handling:

    • Review how asynchronous tasks are managed in the program to ensure they are properly completed or orderly cancelled when the program attempts to shut down.
    • Consider adding appropriate error handling logic in the code to manage CancelledError, such as using try...except blocks to catch this exception and perform necessary cleanup.

Cannot create images with !pic

I connected Bing AI and can use the !bing command, although there are still some issues.
!pic is not working; the backend log shows:

UnboundLocalError: cannot access local variable 'image_path' where it is not associated with a value
cannot access local variable 'image_path' where it is not associated with a value
Traceback (most recent call last):
  File "/app/bot.py", line 541, in pic
    await send_room_image(self.client, room_id, image_path)
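
The UnboundLocalError pattern usually means image_path is only assigned inside a branch that was skipped when generation failed. A self-contained sketch of the guard (generate_image here is a stub standing in for the real backend call, not the bot's actual function):

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pic")

async def generate_image(prompt: str) -> str:
    # Stub standing in for the real image backend; always fails here.
    raise RuntimeError("backend unavailable")

async def pic(prompt: str) -> str:
    # Initialise before the try block so the name always exists,
    # avoiding UnboundLocalError when generation fails.
    image_path = None
    try:
        image_path = await generate_image(prompt)
    except Exception as exc:
        logger.error("Image generation failed: %s", exc)
    if image_path is None:
        return "Image generation failed"  # report instead of crashing
    return image_path

print(asyncio.run(pic("a cat")))  # -> Image generation failed
```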

interference between rooms

Hi there,

I found that if I have two or more rooms chatting with, say, Bing Chat, there is some interference between them. The bot seems to think it's talking to the same person. This could be a privacy issue: if two people chat with the bot, one of them could even guess what questions the other asked. Is it possible to isolate rooms? Cheers.
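
One way to isolate contexts (a sketch under assumptions, not the bot's actual design) is to key the conversation state by room, or by room plus sender for per-user isolation:

```python
from collections import defaultdict

class Session:
    """Stand-in for whatever per-conversation state the bot holds."""
    def __init__(self) -> None:
        self.history: list[str] = []

# One session per key; defaultdict creates a fresh one on first use.
sessions: dict[str, Session] = defaultdict(Session)

def session_for(room_id: str, sender: str, per_user: bool = True) -> Session:
    # per_user=True isolates users even inside the same room;
    # per_user=False shares one context per room.
    key = f"{room_id}|{sender}" if per_user else room_id
    return sessions[key]
```

With this shape, two rooms (or two users) can never see each other's history, because each lookup resolves to a distinct Session object.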

serverless deploy this bot

Hi, you have been a great help and I'm almost ready to go live with this bot for an AI app.

But the 40-50 MB of memory per instance limits running multiple instances.

You have suggested /commands and one instance for all users. That is very good, but in a Matrix client it limits the customization of the user icon, profile, and description.

So for some use cases I need to go with multiple bots.

Hence I am exploring the serverless option for deploying these bots.

https://vlad.roam.garden/How-to-create-a-serverless-Matrix-Chat-bot

This implementation shows how to do it for unencrypted rooms.

Do you have any suggestions that would also work in encrypted rooms?

thanks

room level and thread level context or both

https://github.com/matrixgpt/matrix-chatgpt-bot#good-to-know

This popular bot allows the following :

  • The bot uses threads by default, to keep the context you should reply to this thread or the bot will think its a new conversation. "Threads" were previously experimental, you may need to activate them in your client's settings (e.g. in Element in the "lab"-section).
  • There is support to set the context to work at either the:
    room level
    thread level
    both (threads fork the conversation from the main room)

Any possibility of adding this feature? It would be very useful for practical use cases in a group chat setting. It also solves the problem of context per user query that we discussed a few weeks ago.

Thanks

Support other LocalAI engines

I want to run LocalAI with Mistral 7B; both matrix_chatgpt_bot and LocalAI are running in Docker.
But when I configure it in config.json, I get this error:

raise NotImplementedError(
NotImplementedError: Engine mistral is not supported. Select from ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-16k-0613', 'gpt-4', 'gpt-4-32k', 'gpt-4-0613', 'gpt-4-32k-0613']

I added "mistral" to gptbot.py, but it had no effect.

ENGINES = [
    "gpt-3.5-turbo",
    "gpt-3.5-turbo-16k",
    "gpt-3.5-turbo-0613",
    "gpt-3.5-turbo-16k-0613",
    "gpt-4",
    "gpt-4-32k",
    "gpt-4-0613",
    "gpt-4-32k-0613",
    "mistral",
]

Please Help ;-)

EDIT:

I solved this problem by naming the model gpt-4 and using gpt-4 in the matrix_chatgpt_bot config.
Set in the docker-compose.yaml of LocalAI:

...
    environment:
      - DEBUG=true
      - MODELS_PATH=/models
      
      # You can preload different models here as well.
      # See: https://github.com/go-skynet/model-gallery
      - 'PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/mistral.yaml", "name": "gpt-4"}]'
...


Please provide support for the DALL-E 3 model

In config.json:

"image_generation_endpoint": "https://xxxxxxx/v1/images/generations",
"image_generation_backend": "openai",
"image_generation_size": "1024x1024",
"image_format": "webp",

Then I used "!pic" to draw a picture; image generation failed. The errors came from my image_generation_endpoint:

Exception: 503 Service Unavailable {"error":{"message":"当前分组 home 下对于模型 dall-e 无可用渠道 (request id: 20240424153239939302215CLQ69cNy)","type":"new_api_error"}}
(The Chinese message translates to: "no available channel for model dall-e under the current group home".)

The model name should be dall-e-3 or dall-e-2 (which will be deprecated), not dall-e. Even when I manually added dall-e, I was ultimately redirected to dall-e-2. I would like to be able to specify the model directly in the configuration.
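
For illustration, an images/generations request that names the model explicitly might look like this (field names follow the public OpenAI images API; whether the gateway honours them depends on the endpoint):

```python
# Explicit model selection, so the gateway routes to dall-e-3 rather
# than falling back to a bare "dall-e" default. Prompt and size values
# are illustrative.
payload = {
    "model": "dall-e-3",   # or "dall-e-2"
    "prompt": "a white cat on a table",
    "n": 1,
    "size": "1024x1024",
    "response_format": "b64_json",
}
```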

unable to change keyword for sending command to bard in v1.2.0

I tried to change the keyword from the default !bard to, say, bard, or even to empty/null in bot.py. However, it is not working; I still need to use !bard to wake up Bard.

This is what I used, which removes the keyword completely.
self.bard_prog = re.compile(r"^(.+)$")

This is what I used, which changes the keyword to bard.
self.bard_prog = re.compile(r"^\sbard\s(.+)$")

I've not tested other new features, like session isolation yet.

Cheers.
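
For what it's worth, the second pattern can never match: the leading \s demands a whitespace character before the word bard at the very start of the message. A small self-contained check (note the bot may also filter on the ! prefix elsewhere in its code, so changing only the regex might not be enough):

```python
import re

# "^\sbard\s(.+)$" requires whitespace *before* "bard" at position 0,
# so a message like "bard hello" never matches. Anchor on the word:
bard_prog = re.compile(r"^\s*bard\s+(.+)$")

print(bard_prog.match("bard tell me a joke").group(1))  # -> tell me a joke
print(bard_prog.match("!bard hi"))                      # -> None
```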

missing error messages in Bing Image Creator

Hi,

Today when I tested on the bing image creator bot, I saw some of my prompts were blocked by bing after checking with the log info ("Your prompt has been blocked by Bing. Try to change any bad words and try again.") However, there is no such message in Matrix/element UI. I am wondering if it's possible to pass the errors back to the users so that they can know what happened when there was no output at all after timeout?

Also it seems that an error of "Redirect failed" might be due to expired _U cookie, which is not shown either in the user interface.

Not sure if other errors might occur, such as "Could not get results", "Bad images", "No images" etc. defined in file BingImageGen.py.

Cheers.

bot join room

Can I invite the bot to join rooms? It seems that currently I need to log in as the bot on a Matrix client and then accept the invite.
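
Auto-accepting invites is typically done by registering an invite callback and joining the room from it; with matrix-nio that would be add_event_callback(on_invite, InviteMemberEvent) plus client.join(). The sketch below uses a stub client so it is runnable standalone:

```python
import asyncio

class StubClient:
    """Stand-in for matrix-nio's AsyncClient."""
    def __init__(self) -> None:
        self.joined: list[str] = []

    async def join(self, room_id: str) -> None:
        self.joined.append(room_id)

async def on_invite(client: StubClient, room_id: str) -> None:
    # Accept every invite; a real bot might whitelist inviters first.
    await client.join(room_id)

client = StubClient()
asyncio.run(on_invite(client, "!abc:example.org"))
print(client.joined)  # -> ['!abc:example.org']
```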

control temperature in chatgpt via chatgpt WEB (pandora)

Hi,

I see in v3.py that we are able to change the temperature of ChatGPT, which defaults to 0.5. Does it also apply when we use ChatGPT via chatgpt WEB (pandora)?

Also, is text-davinci-002-render-sha-mobile the only, or the best, model for pandora_api_model? Any other options?

Thank you.

!pic failed reading parameters from request with LocalAi & stablediffusion

On LocalAI, Stable Diffusion is running and working.

matrix_chatgpt_bot Config:
(XXXXed out my credentials)

{
    "homeserver": "https://XXXXXXXX",
    "user_id": "@XXXXXXXXXXX",
    "password": "MistralChatBot",
    "access_token": "XXXXXXXXXXXXXXXXXXXXX",
    "device_id": "MatrixChatGPTBot",
    "room_id": "XXXXXXXXXXXXXXXXXXX",
    "openai_api_key": "xxxxxxxxxxxxxxxxxxxxxxxx",
    "gpt_api_endpoint": "http://10.XXXXXXX:8080/v1/chat/completions",
    "gpt_model": "gpt-4",
    "max_tokens": 4000,
    "top_p": 1.0,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "reply_count": 1,
    "temperature": 0.8,
    "system_prompt": "Antworte in einer Konversation auf Deutsch.",
    "image_generation_endpoint": "http://10.XXXXXXX:8080/v1/images/generations",
    "image_generation_backend": "openai",
    "timeout": 560.0
}

Log:


2023-12-20 15:14:37,788 - INFO - matrix chatgpt bot start.....
2023-12-20 15:14:40,174 - INFO - Successfully login via password
2023-12-20 15:14:46,582 - INFO - Message received in room AI
MaxR | !pic a white cat on a table
2023-12-20 15:14:46,624 - pic - ERROR - 500 Internal Server Error {"error":{"code":500,"message":"failed reading parameters from request:failed parsing request body: json: cannot unmarshal string into Go struct field OpenAIRequest.response_format of type schema.ChatCompletionResponseFormat","type":""}}
Traceback (most recent call last):
  File "/app/src/bot.py", line 1336, in pic
    b64_datas = await imagegen.get_images(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/src/imagegen.py", line 34, in get_images
    raise Exception(
Exception: 500 Internal Server Error {"error":{"code":500,"message":"failed reading parameters from request:failed parsing request body: json: cannot unmarshal string into Go struct field OpenAIRequest.response_format of type schema.ChatCompletionResponseFormat","type":""}}
2023-12-20 15:14:46,624 - ERROR - 500 Internal Server Error {"error":{"code":500,"message":"failed reading parameters from request:failed parsing request body: json: cannot unmarshal string into Go struct field OpenAIRequest.response_format of type schema.ChatCompletionResponseFormat","type":""}}
Traceback (most recent call last):
  File "/app/src/bot.py", line 1336, in pic
    b64_datas = await imagegen.get_images(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/src/imagegen.py", line 34, in get_images
    raise Exception(
Exception: 500 Internal Server Error {"error":{"code":500,"message":"failed reading parameters from request:failed parsing request body: json: cannot unmarshal string into Go struct field OpenAIRequest.response_format of type schema.ChatCompletionResponseFormat","type":""}}
2023-12-20 15:14:47,007 - INFO - Message received in room AI
mistral | > <@XXXXXXXXXXXXX> !pic a white cat on a table
Image generation failed

What might be the problem?
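
Reading the Go error literally (a guess, not a verified fix): LocalAI unmarshals response_format into a struct, i.e. it expects a JSON object, while the client appears to send the plain string form used by the OpenAI images API. A payload shaped like this might satisfy the schema; apart from response_format the fields are illustrative:

```python
# response_format sent as an object (what the Go struct implies),
# not as the string "b64_json" (the classic OpenAI images form).
payload = {
    "prompt": "a white cat on a table",
    "size": "256x256",
    "response_format": {"type": "b64_json"},
}
```

Alternatively, omitting response_format entirely would sidestep the unmarshal error if the backend has a sensible default.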

After entering the query !gpt Чому земля кругла? ("Why is the Earth round?")

  File "/app/bot.py", line 551, in gpt
    text = text.strip()
           ^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'strip'
2023-05-20 08:01:07,371 - ERROR - Error: 'NoneType' object has no attribute 'strip'
Traceback (most recent call last):
  File "/app/bot.py", line 551, in gpt
    text = text.strip()
           ^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'strip'

langchain api bots possible?

Can this implementation be used as a template to build bots from other API endpoints, like ones I could generate from LangChain agents? For instance, I have an API endpoint like this and am wondering how I can integrate it as a Matrix bot:

import requests

API_URL = "http://localhost:3000/api/v1/prediction/905e59a8-9958-4d1b-b83c-1240269861c5"

def query(payload):
    response = requests.post(API_URL, json=payload)
    return response.json()

output = query({
    "question": "Hey, how are you?",
})

Some Feature Requests, STT, TTS, custom commands

Thank you, @hibobmaster, for developing this incredible project.
I would like to ask if it is possible to add some features.
Text-to-speech, speech-to-text, and custom commands using different prompts and agents would be perfect.

how to have longer output using gpt-3.5-turbo-16k model

I manually changed the model in settings.js in the node-chatgpt-api container to the gpt-3.5-turbo-16k model and also set max_tokens to 12000. See below. However, when I asked the bot to create a 5000-word story, the generated text was quite short each time, maybe around 700 words. Is something else limiting the output length?

        payload.update(
            {
                "clientOptions": {
                    "clientToUse": "chatgpt",
                    "openaiApiKey": self.openai_api_key,
                    "modelOptions": {
                        "temperature": self.temperature,
                        "max_tokens": 12000,
                        "model": self.chatgpt_model,
                    },
                    "options": {
                        "maxContextTokens": 16384,
                    },
                }
            }
        )

producing "Service Unavailable" in bing chat

The Bing chat bot (created using v1.2.0) produces "Service Unavailable", possibly due to not having been used for a while (maybe over 10 hours). The error can be quickly resolved by restarting the container.

Such an issue is not seen in the other bots, like Bard or Bing Image Creator.

Is this due to a mechanism set by Microsoft?

memory 50mb per deployed bot

Hi, I tried to use this bot for LangChain through Flowise.

Each deployed bot uses approx. 50 MB of RAM.

The way we deployed it is to have separate bots for each user: there are multiple agents you can create in Flowise, so each user is able to create a new bot for each new agent they add in Flowise.

However, this is not at all scalable, as 100 bots would consume 5 GB of RAM.

Is there a way to lower the memory usage if we are just using the !lc command for Flowise?

Or would a serverless deployment be the better option in this use case?

Alternative:
We could implement one bot for Flowise and have the user select the agent with an !agent command, but I guess that would have to be done for every room, every time the user accesses Flowise-related bots.

What do you suggest? Thanks.

Supporting chatgpt 4.0?

Does it support ChatGPT 4.0, where the user can upload a photo/file/etc. and/or get a reply that contains a photo/file/etc.?

I see there is a RoomMessageFile event defined in room_events.py in the nio package, but I don't know how to get the file from the user and send it to OpenAI.
