aaamoon / copilot-gpt4-service

Convert Github Copilot to ChatGPT

License: MIT License

Languages: Go 69.96%, Dockerfile 2.51%, Smarty 4.59%, Batchfile 2.99%, Shell 9.61%, Makefile 1.53%, Python 8.81%
Topics: chatgpt, copilot, gpt4, openai

copilot-gpt4-service's Issues

Support persistent caching

Description

Implement a persistent cache using SQLite.

Why

In the current implementation the cache lives only in a map, so it is lost every time the program restarts.
For local deployments, restarts are frequent, and losing the cache increases the number of requests made to the GitHub API. A persistent cache is therefore a better fit for local use.

Alternatives

I already have a preliminary Cache struct that can store entries either in a map or in SQLite. The next step is to implement config-file parsing and add a setting for enabling or disabling the persistent cache. A rough sketch of the idea is shown below.
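As a sketch of that idea (an in-memory map with an optional SQLite layer underneath), something like the following could work. This assumes the github.com/mattn/go-sqlite3 driver; the struct and method names are hypothetical and not the project's actual implementation.

```go
package cache

import (
	"database/sql"
	"sync"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

// Cache stores key/value pairs in memory and, optionally, in SQLite.
type Cache struct {
	mu  sync.RWMutex
	mem map[string]string
	db  *sql.DB // nil when persistence is disabled
}

// New opens a cache; pass an empty path to stay in-memory only.
func New(path string) (*Cache, error) {
	c := &Cache{mem: make(map[string]string)}
	if path == "" {
		return c, nil
	}
	db, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS cache (k TEXT PRIMARY KEY, v TEXT)`); err != nil {
		return nil, err
	}
	c.db = db
	return c, nil
}

// Get checks the in-memory map first, then falls back to SQLite.
func (c *Cache) Get(key string) (string, bool) {
	c.mu.RLock()
	v, ok := c.mem[key]
	c.mu.RUnlock()
	if ok || c.db == nil {
		return v, ok
	}
	if err := c.db.QueryRow(`SELECT v FROM cache WHERE k = ?`, key).Scan(&v); err != nil {
		return "", false
	}
	c.mu.Lock()
	c.mem[key] = v // warm the map so the next lookup skips SQLite
	c.mu.Unlock()
	return v, true
}

// Set writes to the map and, when persistence is enabled, to SQLite.
func (c *Cache) Set(key, value string) error {
	c.mu.Lock()
	c.mem[key] = value
	c.mu.Unlock()
	if c.db == nil {
		return nil
	}
	_, err := c.db.Exec(`INSERT INTO cache (k, v) VALUES (?, ?) ON CONFLICT(k) DO UPDATE SET v = excluded.v`, key, value)
	return err
}
```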

Verifying a successful server-side deployment

I ran the Docker command from the README on my server, but curl 127.0.0.1:8080/healthz returns curl: (52) Empty reply from server. What could be the cause?

Cloudflare Worker deployment issue

I deployed the server side with a Cloudflare Worker, but it cannot be used. Both ChatGPT-Next-Web and Immersive Translate report 500: Missing or malformed Authorization header. Visiting the Cloudflare Worker endpoint directly returns the following:

{"object":"list","data":[{"id":"gpt-4","object":"model","created":1687882411,"owned_by":"openai"},{"id":"gpt-3.5-turbo","object":"model","created":1677610602,"owned_by":"openai"}]}

The same GitHub Copilot token works fine on https://gpt4copilot.tech.
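For comparison, a request that sets the Authorization header explicitly looks roughly like the sketch below. This is only an illustration: the Worker URL is a placeholder and the token value is made up; the service is assumed to expect the OpenAI-style "Authorization: Bearer <token>" header that the error message refers to.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body := []byte(`{"model":"gpt-4","messages":[{"role":"user","content":"hello"}]}`)
	// Placeholder endpoint: replace with your own Worker / deployment address.
	req, err := http.NewRequest("POST", "https://your-worker.example.workers.dev/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer ghu_xxxxxxxx") // the GitHub Copilot token goes here
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```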

ChatGPT-Next-Web failed to fetch

It works in both ChatBox and ChatX (OpenCat does not support the http protocol, so I did not try it).
But it keeps failing in ChatGPT-Next-Web: both my own instance deployed on Vercel and the official demo report failed to fetch.

About the token

Do I first need to be a paying Copilot subscriber myself? I read the README but could not figure it out.

Could you explain what these parameters mean? Thanks.

Messages: []map[string]string{ // conversation history: a system prompt followed by the user's message
    {"role": "system",
        "content": "\nYou are ChatGPT, a large language model trained by OpenAI.\nKnowledge cutoff: 2021-09\nCurrent model: gpt-4\nCurrent time: 2023/11/7 11: 39: 14\n"},
    {"role": "user",
        "content": content},
},
Model: "gpt-4",       // which model to request
Temperature: 0.5,     // sampling randomness; lower values are more deterministic
TopP: 1,              // nucleus-sampling cutoff; 1 means no truncation
N: 1,                 // number of completions to generate
Stream: true,         // stream the answer back incrementally (server-sent events)
Intent: true,         // field specific to this project / the Copilot backend
OneTimeReturn: false, // project-specific field, not part of the OpenAI API

The API is already set up, but every message is prefixed with the "It is strongly recommended to deploy the copilot-gpt4-service server by yourself..." reminder

Normal conversations already work, but recently every message seems to be preceded by this reminder (the service shows it in both Chinese and English): "It is strongly recommended to deploy the copilot-gpt4-service server by yourself, otherwise a large number of users' tokens will be detected by Github from the same IP, which may affect the Github account, and the API service will be gradually offline." Is this a problem on my side, or something else?

Proxy server settings

I noticed that when ChatGPT-Next-Web is deployed with Docker, a proxy can be configured via -e PROXY_URL=http://localhost:7890. Will this project support something similar?
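Not an official answer, but a minimal sketch of how a PROXY_URL setting could be honoured in Go. The environment-variable name mirrors ChatGPT-Next-Web and is only an assumption here; this is not the project's current code.

```go
package upstream

import (
	"net/http"
	"net/url"
	"os"
)

// newHTTPClient builds the client used for upstream calls. If PROXY_URL is
// set, all requests are routed through that proxy; otherwise the standard
// HTTP(S)_PROXY environment variables are respected.
func newHTTPClient() (*http.Client, error) {
	transport := &http.Transport{Proxy: http.ProxyFromEnvironment}
	if raw := os.Getenv("PROXY_URL"); raw != "" {
		proxyURL, err := url.Parse(raw)
		if err != nil {
			return nil, err
		}
		transport.Proxy = http.ProxyURL(proxyURL)
	}
	return &http.Client{Transport: transport}, nil
}
```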

Provide rate-limit parameters and a default rate limit

As per the title.
My current use case is the Immersive Translate Chrome extension, which ends up triggering the Copilot API's rate limit. Would it be possible to benchmark the current throughput and then have the service provide a rate-limiting feature?

  • Channel
  • Forbid the request outright

Should the condition be based on the request content, on an estimate of token consumption, or on error responses? Of course, this may well turn out to be a non-issue. (A per-IP limiter sketch follows below.)
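As a sketch of what such a feature might look like, a per-IP limiter built on golang.org/x/time/rate could be wired in as Gin middleware. The limits here are illustrative defaults only, not values benchmarked against the Copilot API.

```go
package middleware

import (
	"net/http"
	"sync"

	"github.com/gin-gonic/gin"
	"golang.org/x/time/rate"
)

var (
	mu       sync.Mutex
	limiters = map[string]*rate.Limiter{}
)

// limiterFor returns (and lazily creates) the limiter for a client IP.
func limiterFor(ip string) *rate.Limiter {
	mu.Lock()
	defer mu.Unlock()
	l, ok := limiters[ip]
	if !ok {
		l = rate.NewLimiter(rate.Limit(1), 5) // 1 request/second, burst of 5
		limiters[ip] = l
	}
	return l
}

// RateLimit rejects requests that exceed the per-IP limit with HTTP 429.
func RateLimit() gin.HandlerFunc {
	return func(c *gin.Context) {
		if !limiterFor(c.ClientIP()).Allow() {
			c.AbortWithStatusJSON(http.StatusTooManyRequests, gin.H{"error": "rate limit exceeded"})
			return
		}
		c.Next()
	}
}
```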

It works now, thank you very much

Are there certain things to report that are not a bug or feature?
Please tell us as exactly as possible about your request, thanks.
We will reply as soon as possible.

But it cannot draw images? I am using gpt-4

报错{"code":403,"error":"{\"error_details\":{\"url\":\"https://github.com/github-copilot/signup?editor={EDITOR}\",\"message\":\"No access to GitHub Copilot found. You are currently logged in as guochenchn.\",\"title\":\"Signup for GitHub Copilot\",\"notification_id\":\"no_copilot_access\"},\"message\":\"Resource not accessible by integration\"}"}data: {"choices":[{"index":0,"delta":{"content":null,"role":"assistant"}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

{"code":403,"error":"{"error_details":{"url":"https://github.com/github-copilot/signup?editor={EDITOR}\",\"message\":\"No access to GitHub Copilot found. You are currently logged in as guochenchn.","title":"Signup for GitHub Copilot","notification_id":"no_copilot_access"},"message":"Resource not accessible by integration"}"}data: {"choices":[{"index":0,"delta":{"content":null,"role":"assistant"}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"你","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"好","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"!","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"有","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"什","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"么","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"我","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"可以","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"帮","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"助","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"你","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"的","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"吗","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"index":0,"delta":{"content":"?","role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: {"choices":[{"finish_reason":"stop","index":0,"delta":{"content":null,"role":null}}],"created":1704445604,"id":"chatcmpl-8daS4chlUvIAfZ4CPQxzgBQaeFyF7"}

data: [DONE]

nil pointer exception

```
$ docker logs e279b6349f44
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.

  • using env: export GIN_MODE=release
  • using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /v1/chat/completions --> main.chatCompletions (4 handlers)
[GIN-debug] GET /v1/models --> main.createMockModelsResponse (4 handlers)
[GIN-debug] GET /healthz --> main.main.func1 (4 handlers)
[GIN-debug] Listening and serving HTTP on :8080
[GIN] 2024/01/06 - 15:01:06 | 200 | 9.209µs | 192.168.65.1 | OPTIONS "/v1/chat/completions"

2024/01/06 15:01:06 [Recovery] 2024/01/06 - 15:01:06 panic recovered:
POST /v1/chat/completions HTTP/1.1
Host: 127.0.0.1:8080
[GIN] 2024/01/06 - 15:01:06 | 500 | 274.797667ms | 192.168.65.1 | POST "/v1/chat/completions"
Accept: text/event-stream
Accept-Encoding: gzip, deflate, br
Accept-Language: en-HK,en;q=0.9,zh-HK;q=0.8,zh;q=0.7,en-US;q=0.6,en-GB;q=0.5,zh-TW;q=0.4
Authorization: *
Connection: keep-alive
Content-Length: 404
Content-Type: application/json
Origin: https://gpt4copilot.tech
Sec-Ch-Ua: "Not_A Brand";v="8", "Chromium";v="120", "Google Chrome";v="120"
Sec-Ch-Ua-Mobile: ?0
Sec-Ch-Ua-Platform: "macOS"
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: cross-site
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
X-Requested-With: XMLHttpRequest

runtime error: invalid memory address or nil pointer dereference
/usr/local/go/src/runtime/panic.go:261 (0x5c53b)
/usr/local/go/src/runtime/signal_unix.go:861 (0x5c508)
/app/utils/utils.go:49 (0x30271c)
/app/utils/utils.go:83 (0x3029c7)
/app/main.go:158 (0x303dbf)
/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x2f876b)
/app/main.go:30 (0x304b7b)
/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x2fd38f)
/go/pkg/mod/github.com/gin-gonic/[email protected]/recovery.go:99 (0x2fd374)
/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x2fc76f)
/go/pkg/mod/github.com/gin-gonic/[email protected]/logger.go:241 (0x2fc738)
/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:165 (0x2fb917)
/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:489 (0x2fb648)
/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:445 (0x2fb393)
/usr/local/go/src/net/http/server.go:2938 (0x227e2b)
/usr/local/go/src/net/http/server.go:2009 (0x2251d7)
/usr/local/go/src/runtime/asm_arm64.s:1197 (0x782a3)
```
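The trace only shows where the dereference happened (utils.go:49), not why. A common cause of this panic in HTTP client code is using the response when the request itself failed; the usual defensive pattern is shown below purely as a generic sketch, not as the project's actual code.

```go
package httpclient

import (
	"fmt"
	"io"
	"net/http"
)

// fetch returns the response body, bailing out before resp is touched
// when the request fails (resp can be nil in that case).
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, fmt.Errorf("request failed: %w", err)
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
```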

Some third-party clients cannot be used

The data structure returned by Copilot is not quite the same as OpenAI's and is missing some fields, which causes some third-party clients, such as OpenCat, to fail to parse the response.
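A hypothetical way to work around this on the proxy side would be to fill in the fields OpenAI-compatible clients expect before forwarding each chunk. Exactly which fields Copilot omits is an assumption here; OpenAI stream chunks normally carry object and model.

```go
package stream

import "encoding/json"

// chunk mirrors the top-level shape of an OpenAI streaming chunk; the
// choices are kept raw so their contents pass through untouched.
type chunk struct {
	ID      string            `json:"id"`
	Object  string            `json:"object"`
	Created int64             `json:"created"`
	Model   string            `json:"model"`
	Choices []json.RawMessage `json:"choices"`
}

// normalise fills the fields some third-party clients require.
func normalise(raw []byte, model string) ([]byte, error) {
	var c chunk
	if err := json.Unmarshal(raw, &c); err != nil {
		return nil, err
	}
	if c.Object == "" {
		c.Object = "chat.completion.chunk"
	}
	if c.Model == "" {
		c.Model = model
	}
	return json.Marshal(&c)
}
```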

[Feature & Question]实测可以选择gpt-4v,但是似乎有bug

I hope the author can fix the default max-token issue and the JSON handling for passing image URLs.

Screenshot IMG_2377 below shows gpt-4v.

Screenshot IMG_2376 shows gpt-4.

This is the canonical API request format for gpt-4v:
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4-vision-preview",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "What’s in this image?" },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
            }
          }
        ]
      }
    ],
    "max_tokens": 300
  }'
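For the image-URL JSON issue: the request above shows that content can be either a plain string or an array of typed parts. A sketch of a message type that accepts both follows; the type and field names are illustrative, not the project's actual types.

```go
package types

import "encoding/json"

// Message keeps content raw so it can be either a string or a part array.
type Message struct {
	Role    string          `json:"role"`
	Content json.RawMessage `json:"content"`
}

// ContentPart is one element of a GPT-4V style content array.
type ContentPart struct {
	Type     string    `json:"type"` // "text" or "image_url"
	Text     string    `json:"text,omitempty"`
	ImageURL *ImageURL `json:"image_url,omitempty"`
}

// ImageURL wraps the image location.
type ImageURL struct {
	URL string `json:"url"`
}

// Parts returns the structured parts, falling back to a single text part
// when the content was sent as a plain string.
func (m Message) Parts() ([]ContentPart, error) {
	var parts []ContentPart
	if err := json.Unmarshal(m.Content, &parts); err == nil {
		return parts, nil
	}
	var text string
	if err := json.Unmarshal(m.Content, &text); err != nil {
		return nil, err
	}
	return []ContentPart{{Type: "text", Text: text}}, nil
}
```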

Streaming issues

Commit 4de3bce tries to add the fields missing from the streamed responses, but its detection is flawed:
it checks whether the current buffer contains particular fields such as choice to decide whether the missing fields should be inserted.
Because a field name can be split across buffer boundaries (for example, choi at the end of one buffer and ce at the start of the next), this breaks.
So the streaming code really should be reworked once more: read one complete stream event, process it, and then send it to the client, instead of reading fixed-size chunks into a fixed-size buffer. (See the sketch below.)
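A sketch of that event-based approach: read one complete SSE line at a time with bufio.Scanner, patch it, and forward it, so a field name can never be split across reads. patchEvent is a hypothetical stand-in for the field-fixing logic and not an existing function in this project.

```go
package sse

import (
	"bufio"
	"io"
	"strings"
)

// forwardStream copies an SSE stream line by line, letting patchEvent
// rewrite each complete "data:" line before it is sent downstream.
func forwardStream(upstream io.Reader, downstream io.Writer, patchEvent func(string) string) error {
	scanner := bufio.NewScanner(upstream)
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long data: lines
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "data: ") {
			line = patchEvent(line) // operates on a whole event, so fields cannot be truncated
		}
		if _, err := io.WriteString(downstream, line+"\n"); err != nil {
			return err
		}
	}
	return scanner.Err()
}
```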

Locally deployed NextChat reports "failed to fetch"

Hi, I installed NextChat locally and deployed copilot-gpt4-service with Docker. With the API endpoint set to http://127.0.0.1:8080 everything works fine.

I then deployed copilot-gpt4-service with Podman on another server in the same LAN. Using NextChat on this machine with the endpoint set to http://<server-address>:8080 produces the following error:

{
  "error": true,
  "message": "Failed to fetch"
}

The server's firewall already allows inbound access on port 8080, and visiting http://<server-address>:8080 in a browser shows the 404 page not found page.

In addition, entering http://gpt4copilot.tech/ as the endpoint in the locally installed NextChat gives the same error.

How can this be resolved? Thanks!

How To Get GitHub Copilot Token

Get GitHub Copilot Token

Install github-copilot-cli:

# https://www.npmjs.com/package/@githubnext/github-copilot-cli
npm i @githubnext/github-copilot-cli -g

Retrieve the token:

github-copilot-cli auth

View the token:

vim ~/.copilot-cli-access-token

Running into the blank-message issue as well

When I tried to request that API key again:

Get device code failed
Please make sure you entered the information correctly.

I can no longer get to that page.

Local Docker deployment: error when making requests

I deployed aaamoon/copilot-gpt4-service:latest locally with Docker and set the address in the NextWeb client to http://127.0.0.1:8080. When I send a message in NextWeb there is no reply for a long time; the Docker logs show that the request was received, but it is immediately followed by the error "Encountering an error when sending the request.". If I switch the address to https://gpt4copilot.tech/ instead, everything works. My upstream router does global routing through a transparent proxy and can reach Copilot, ChatGPT and similar services directly, so it should not be a blocked-connection problem. What could be the cause?

About this project

Someone previously wrote a small Copilot-cracking script, also based on modifying the token; later their account was banned and the repository was deleted. I hope the author of this repository stays careful.
