Comments (6)
Please follow the issue template to update title and description of your issue.
from chatgpt-next-web.
Bug Description
Because models like Claude have strict definitions for the roles of messages in the message queue, such as the first message can only be from the system or user, and user and assistant must strictly alternate, the functional errors in the message queue controlled by NextChat were exposed by attaching the number of historical messages and the length of historical messages. ① The minimum unit of attaching historical message count is messages that do not distinguish roles, which means that once a user message at the head of the queue is popped out, the head message of the message queue will be the role of the assistant. A better solution is to use dialogue turns as the smallest unit, if the length exceeds the limit, the head of a dialogue turn should be popped out simultaneously, that is, a group of user and assistant QA messages should be discarded. ② max_tokens was originally the maximum number of replies for large models, but in the program, it is also used to determine whether to compress the message queue, which, on one hand, combined with the situation in ①, causes problems with the message queue, and on the other hand, it also affects the use of long-context models.
{
  "error": {
    "message": "messages: first message must use the \"user\" role (request id: 2024040416494530153478024102122)",
    "type": "invalid_request_error",
    "param": "",
    "code": null
  }
}
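The head-of-queue problem in ① can be avoided by trimming whole dialogue turns instead of single messages. A minimal TypeScript sketch of the idea (the type and function names are illustrative, not NextChat's actual code):

```typescript
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

// Trim history by whole dialogue turns (a user message plus the assistant
// replies that follow it), so the queue never starts with an assistant
// message after trimming.
function trimHistoryByTurns(
  messages: ChatMessage[],
  maxMessages: number,
): ChatMessage[] {
  // Keep a leading system message, if any, outside the trimming window.
  const system: ChatMessage[] =
    messages.length > 0 && messages[0].role === "system" ? [messages[0]] : [];
  let history = messages.slice(system.length);

  while (history.length > maxMessages) {
    // Drop one full turn from the head: the leading user message and
    // every assistant reply that follows it.
    let drop = 1;
    while (drop < history.length && history[drop].role === "assistant") {
      drop++;
    }
    history = history.slice(drop);
  }

  return [...system, ...history];
}
```

With `maxMessages = 2`, a history of two QA pairs keeps only the most recent pair, and the first non-system message is always from the user.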
Steps to Reproduce
The issue may be triggered when the following conditions are met:
- Calling a model with strict message-role requirements, such as the claude-3 family of models
- A long conversation whose context exceeds 4000 tokens
- Many conversation turns, exceeding the preset attached-history message count
Expected Behavior
Users can successfully call long-context models for long historical conversations.
- Use the dialogue turn as the smallest unit of conversation memory, so the message queue never starts with an assistant message (which contributes little to the conversation anyway);
- Improve how max_tokens is applied when processing the history message queue.
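For point ②, one way to decouple the two roles of max_tokens is to keep a separate threshold for history compression. A hypothetical sketch (the config fields and the crude token estimate are assumptions, not the project's real implementation):

```typescript
// Hypothetical config: the model's reply cap and the history-compression
// budget are tracked separately instead of both reusing max_tokens.
interface ModelConfig {
  maxTokens: number;          // cap on the model's reply length only
  compressThreshold: number;  // dedicated budget for the history queue
  historyMessageCount: number;
}

// Rough token estimate; a real client would use a proper tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function shouldCompressHistory(
  history: { content: string }[],
  config: ModelConfig,
): boolean {
  const total = history.reduce((n, m) => n + estimateTokens(m.content), 0);
  // Compare against the dedicated threshold, not maxTokens, so a
  // long-context model is not compressed just because its reply cap is small.
  return total > config.compressThreshold;
}
```

This way, raising or lowering the reply cap no longer changes when the history queue gets compressed.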
Screenshots
Deployment Method
- Docker
- Vercel
- Server
Desktop OS
No response
Desktop Browser
No response
Desktop Browser Version
No response
Smartphone Device
No response
Smartphone OS
No response
Smartphone Browser
No response
Smartphone Browser Version
No response
Additional Logs
No response
I've been aware of this issue related to
This issue has been resolved; please pull the new commit.
You are right. Thank you for your feedback; we will adopt the suggestion.
This is specific to Anthropic's models, so the better way to resolve this issue is to format the messages in the client-side logic for the Anthropic provider.
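Along those lines, a client-side normalization pass for the Anthropic provider might drop a leading assistant message and merge consecutive same-role messages to satisfy strict alternation. A minimal sketch with hypothetical names:

```typescript
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

// Normalize a message list for providers with strict role rules:
// the first message must be "user", and user/assistant must alternate.
function normalizeForStrictProvider(messages: ChatMessage[]): ChatMessage[] {
  const out: ChatMessage[] = [];
  for (const msg of messages) {
    // For Anthropic, system content goes in a separate request field.
    if (msg.role === "system") continue;
    // Drop a leading assistant message: the queue must start with "user".
    if (out.length === 0 && msg.role === "assistant") continue;
    const last = out[out.length - 1];
    if (last && last.role === msg.role) {
      // Merge consecutive same-role messages to keep strict alternation.
      last.content += "\n" + msg.content;
    } else {
      out.push({ ...msg });
    }
  }
  return out;
}
```

Running this just before the request is built keeps the fix scoped to the Anthropic provider, as the comment suggests, instead of changing the shared queue logic.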
I figured out the second issue. Thank you for your feedback.