sugarforever / chat-ollama
ChatOllama is an open source chatbot based on LLMs. It supports a wide range of language models, and knowledge base management.
License: MIT License
I keep getting errors when running RAG queries.
【nuxt】 【request error】 【unhandled】 【500】 Chroma getOrCreateCollection error: Error: TypeError: fetch failed
at Chroma.ensureCollection (/D:/working/opensource/chat-ollama/node_modules/@langchain/community/dist/vectorstores/chroma.js:99:23)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Chroma.similaritySearchVectorWithScore (/D:/working/opensource/chat-ollama/node_modules/@langchain/community/dist/vectorstores/chroma.js:187:28)
at async Chroma.similaritySearch (/D:/working/opensource/chat-ollama/node_modules/@langchain/core/dist/vectorstores.js:104:25)
at async VectorStoreRetriever.getRelevantDocuments (/D:/working/opensource/chat-ollama/node_modules/@langchain/core/dist/retrievers.js:67:29)
at Object.handler (D:\working\opensource\chat-ollama\server\api\models\chat\index.post.ts:67:1)
The Chroma configuration I'm using is as follows:
chroma:
  image: ghcr.io/chroma-core/chroma:latest
  container_name: chroma
  environment:
    - IS_PERSISTENT=TRUE
  volumes:
    # Default configuration for persist_directory in chromadb/config.py
    # Currently it's located in "/chroma/chroma/"
    - ./chroma-data:/chroma/chroma/
  ports:
    - 8011:8000
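Note that this compose file publishes Chroma's container port 8000 on host port 8011, so the ChatOllama side has to point at the host port. A sketch of the matching .env entry (CHROMADB_URL is the variable name mentioned elsewhere in these reports):

```
# Chroma is published on host port 8011 (see the 8011:8000 mapping above)
CHROMADB_URL=http://localhost:8011
```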
If you are on a Mac M1 system, you can try starting it with Docker Compose. 😀
https://hub.docker.com/r/chromadb/chroma/tags
docker pull chromadb/chroma
docker run -d -p 8000:8000 chromadb/chroma
# Access URL: http://localhost:8000
# Copy .env.example to .env
cp .env.example .env
# Update the SQLite path in the .env config file:
DATABASE_URL=file:/Users/xxx/chat-ollama/prisma/.sqlite/chatollama.sqlite
# Install Prisma
npm install -g prisma
# Run the schema migration
cd prisma
prisma migrate dev
Running without Docker.
After installing locally, I open localhost:3000 and configure the host, but clicking Save does nothing.
The error is as follows:
yarn run v1.22.22
$ nuxt dev
Nuxt 3.10.3 with Nitro 2.9.2 13:33:45
13:33:45
➜ Local: http://localhost:3000/
➜ Network: use --host to expose
i Using default Tailwind CSS file nuxt:tailwindcss 13:33:47
➜ DevTools: press Shift + Alt + D in the browser (v1.0.8) 13:33:48
i Tailwind Viewer: http://localhost:3000/_tailwind/ nuxt:tailwindcss 13:33:48
i Vite client warmed up in 5117ms 13:33:54
i Vite server warmed up in 4784ms 13:33:54
√ Nuxt Nitro server built in 1463 ms nitro 13:33:55
(node:34616) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
I'm now running into two problems:
1. With the host set to http://127.0.0.1:11434/, visiting that URL directly shows "Ollama is running", but ChatOllama cannot call it. It works if I go through ngrok.
2. After proxying through ngrok, Ollama can be called, but adding knowledge bases just spins forever and never completes. Specifically, clicking the model selector shows no model options, and manually typing the model name doesn't add it either.
The requirement says: Ollama server is running on http://localhost:11434/
May I ask: if I want to use OpenAI GPT-4 as the LLM,
how should I change the settings?
Or is it currently only possible to run open-source models via Ollama?
Thank you (my own GPU isn't powerful enough, so I'd like to use the OpenAI API).
Split the answer panel into left and right halves: one for your own fine-tuned model and one for a reference model to compare against (e.g. OpenAI). This makes it convenient to compare your model's response quality against other models.
Thanks.
Using nomic-embed-text as the embedding model, creating a knowledge base fails with a 500 error, as follows:
ReadableStream { locked: false, state: 'readable', supportsBYOB: false }
Error in handler LangChainTracer, handleChainError: Error: Failed to batch create run: 401 Unauthorized {"detail":"Need authorization header or api key"}
[nuxt] [request error] [unhandled] [500] Ollama call failed with status code 400: embedding models do not support chat
nomic-embed-text is an embedding model supported by Ollama, and calling http://localhost:11434/api/embeddings against Ollama manually works fine. Please take a look at what the problem might be.
Chat works normally.
The knowledge base reports the following error:
Error: Failed to batch create run: 401 Unauthorized {"detail":"Need authorization header or api key"}
ChatOllama needs a Discord server where contributors and users can communicate.
The point of document grouping: for example, I have several lines of work such as "Finance", "HR", and "QA", each with a number of documents, and each group gets its own vector database. This is more practical: first, others don't need to know which files exist, since they can get the relevant knowledge just by visiting the web UI, turning traditional file sharing among colleagues into knowledge sharing. There are very real application scenarios for this.
The point of showing API call examples in the web UI: much of our work is discussed in WeChat and QQ groups, so QQ and WeChat are relatively easier channels for adoption; with an API available, integration becomes much more convenient.
Can documents be added to and removed from a knowledge base, and how large a document is supported at most?
This project is very appealing thanks to its custom knowledge bases, but deploying and using it is far too difficult. Ordinary users, and even people with some technical background, may not manage it, which greatly limits the project's audience.
It would help to modularize the project and lower the barrier to entry, for example using Docker for distributed deployment: Ollama as one module, the database as another, the front end as a third, connected together into a whole.
ollama rm xxx
Output:
Ollama: {
host: 'http://192.168.99.12:28537',
username: undefined,
password: undefined
}
Ollama: {
host: 'http://192.168.99.12:28537',
username: 'null',
password: 'null'
}
Created knowledge base activity: [object Object]
ingestDocument
Warning: TT: undefined function: 32
The problem is around line 32 of server\api\knowledgebases\index.post.ts.
if (existingCollection) {
    // error here
    // splits prints fine
    await existingCollection.addDocuments(splits);
    console.log(`Chroma collection ${collectionName} updated`);
} else {
    await Chroma.fromDocuments(splits, embeddings, dbConfig);
    console.log(`Chroma collection ${collectionName} created`);
}
The symptom: a file of around 500 KB cannot be uploaded, and Save stays in the loading state.
Hello,
I've searched existing issues about 500 errors and found nothing similar.
My error is as follows:
Created knowledge base test3: [object Object]
[nuxt] [request error] [unhandled] [500] fetch failed
at node:internal/deps/undici/undici:13737:13
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async OllamaEmbeddings._request (./node_modules/@langchain/community/dist/embeddings/ollama.js:107:26)
at async RetryOperation._fn (./node_modules/p-retry/index.js:50:12)
As shown, the error occurs right after knowledge base test3 is created.
Environment information:
"detail": "Not Found"
Hi, following the tutorial I've deployed ChatOllama on a server with Docker. Ollama is also installed on the server with several models downloaded, and it runs normally.
tcp 0 0 127.0.0.1:11434 0.0.0.0:* LISTEN 2216/ollama
But after setting the Ollama server to http://host.docker.internal:11434 as the tutorial instructs, refreshing the models doesn't show the downloaded models, and the backend reports [nuxt] [request error] [unhandled] [500] fetch failed
chatollama-1 | at Object.fetch (node:internal/deps/undici/undici:11576:11)
chatollama-1 | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Please advise.
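A possible cause, guessing from the netstat line above: Ollama is bound to 127.0.0.1 only, so connections arriving from the Docker network via host.docker.internal are refused. One option is to make Ollama listen on all interfaces through its OLLAMA_HOST environment variable and restart it:

```shell
# Bind Ollama to all interfaces instead of loopback only
# (OLLAMA_HOST is Ollama's listen-address environment variable)
export OLLAMA_HOST=0.0.0.0
ollama serve
```

After restarting, netstat should show 0.0.0.0:11434 instead of 127.0.0.1:11434.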
Thank you for this great app.
I am wondering if you can add support for OpenAI-compatible APIs.
Many open source LLMs provide OpenAI compatible APIs.
For example,
https://console.groq.com/docs/openai,
https://github.com/groq/groq-python,
from openai import OpenAI

client = OpenAI(
    api_key='GROQ_API_KEY',
    base_url="https://api.groq.com/openai/v1"
)
response = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[
        {"role": "system", "content": "You are my coding assistant."},
        {"role": "user", "content": "Can you tell me how to write python flask application?"}
    ]
)
print(response.choices[0].message.content)
or
import openai

openai.api_key = 'GROQ_API_KEY'
# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://api.groq.com/openai/v1/"
openai.default_headers = {"x-foo": "true"}

completion = openai.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)
These are implemented in Python.
Could you implement them in JS?
For example, add three environment variables:
openai_key='GROQ_API_KEY'
openai_model= "mixtral-8x7b-32768"
openai_base_url="https://api.groq.com/openai/v1"
And in the OpenAI function call, add openai_model and openai_base_url parameters.
Thanks.
Hope this app keeps getting better.
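To illustrate the request, here is a rough TypeScript sketch of calling an OpenAI-compatible endpoint with plain fetch (the buildChatRequest helper and the error handling are illustrative, not from the chat-ollama codebase; the base URL and model name come from the Groq docs linked above):

```typescript
// Sketch: calling an OpenAI-compatible chat endpoint (e.g. Groq) with plain fetch.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build the request body separately so the shape is easy to inspect.
export function buildChatRequest(model: string, messages: ChatMessage[]) {
  return { model, messages };
}

export async function chatCompletion(
  baseUrl: string,   // e.g. "https://api.groq.com/openai/v1"
  apiKey: string,
  model: string,     // e.g. "mixtral-8x7b-32768"
  messages: ChatMessage[]
): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildChatRequest(model, messages)),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible responses put the reply under choices[0].message.content
  return data.choices[0].message.content;
}
```

Usage would mirror the Python snippets: `chatCompletion("https://api.groq.com/openai/v1", process.env.GROQ_API_KEY!, "mixtral-8x7b-32768", [{ role: "user", content: "Hello" }])`.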
Chat gets no response.
The knowledge base works fine.
Code version Git Commit ID: 2dd0ddd
Clicking download triggers a backend error.
Error message:
Ollama: { host: 'null', username: 'null', password: 'null' }
Authorization: dW5kZWZpbmVkOnVuZGVmaW5lZA==
[nuxt] [request error] [unhandled] [500] fetch failed
at node:internal/deps/undici/undici:12345:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Ollama: { host: 'null', username: 'null', password: 'null' }
Authorization: dW5kZWZpbmVkOnVuZGVmaW5lZA==
[nuxt] [request error] [unhandled] [500] fetch failed
at node:internal/deps/undici/undici:12345:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Ollama: { host: 'null', username: 'null', password: 'null' }
Authorization: dW5kZWZpbmVkOnVuZGVmaW5lZA==
[nuxt] [request error] [unhandled] [500] fetch failed
at node:internal/deps/undici/undici:12345:11
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Hello! I'd like to build something that automatically analyzes data from some existing spreadsheets and generates a data analysis report.
I can provide past spreadsheets and analysis documents for training.
I believe it already supports OpenAI and Anthropic at this moment. For the long run, would it be possible to have a plugin system that lets users add more LLMs, and later embeddings/vector DBs/etc., via a public API? Say, one or a few YAML files that define the input/output schema of a third-party system to integrate.
yarn run generate doesn't seem to work; there are CORS issues.
It would offer much flexibility in deployment using Docker, especially when many containers run on the same host and there is a conflict over the required port.
E:\chat-ollama\chat-ollama-main>npm run dev
dev
nuxt dev
Nuxt 3.10.3 with Nitro 2.9.1 20:26:23
20:26:23
➜ Local: http://localhost:3000/
➜ Network: use --host to expose
i Using default Tailwind CSS file nuxt:tailwindcss 20:26:24
➜ DevTools: press Shift + Alt + D in the browser (v1.0.8) 20:26:25
i Tailwind Viewer: http://localhost:3000/_tailwind/ nuxt:tailwindcss 20:26:25
i Vite client warmed up in 3218ms 20:26:29
i Vite server warmed up in 3004ms 20:26:29
√ Nuxt Nitro server built in 425 ms nitro 20:26:29
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
Error fetching knowledge bases: PrismaClientInitializationError:
Invalid prisma.knowledgeBase.findMany()
invocation in
E:\chat-ollama\chat-ollama-main\server\api\knowledgebases\index.get.ts:6:1
3 const listKnowledgeBases = async (): Promise<KnowledgeBase[] | null> => {
4 const prisma = new PrismaClient();
5 try {
→ 6 return await prisma.knowledgeBase.findMany(
error: Environment variable not found: DATABASE_URL.
--> schema.prisma:3
|
2 | provider = "sqlite"
3 | url = env("DATABASE_URL")
|
Validation Error Count: 1
at _n.handleRequestError (E:\chat-ollama\chat-ollama-main\node_modules\@prisma\client\runtime\library.js:123:7154)
at _n.handleAndLogRequestError (E:\chat-ollama\chat-ollama-main\node_modules\@prisma\client\runtime\library.js:123:6188)
at _n.request (E:\chat-ollama\chat-ollama-main\node_modules\@prisma\client\runtime\library.js:123:5896)
at async l (E:\chat-ollama\chat-ollama-main\node_modules\@prisma\client\runtime\library.js:128:10871)
at listKnowledgeBases (E:\chat-ollama\chat-ollama-main\server\api\knowledgebases\index.get.ts:6:1)
at Object.handler (E:\chat-ollama\chat-ollama-main\server\api\knowledgebases\index.get.ts:18:1)
at async Object.handler (file:///E:/chat-ollama/chat-ollama-main/node_modules/h3/dist/index.mjs:1962:19)
at async toNodeHandle (file:///E:/chat-ollama/chat-ollama-main/node_modules/h3/dist/index.mjs:2249:7)
at async ufetch (file:///E:/chat-ollama/chat-ollama-main/node_modules/unenv/runtime/fetch/index.mjs:9:17)
at async $fetchRaw2 (file:///E:/chat-ollama/chat-ollama-main/node_modules/ofetch/dist/shared/ofetch.00501375.mjs:219:26) {
clientVersion: '5.10.2',
errorCode: undefined
}
Ollama: {
host: 'http://localhost:11434',
username: undefined,
password: undefined
}
[nuxt] [request error] [unhandled] [500] Cannot read properties of null (reading 'map')
at E:\chat-ollama\chat-ollama-main\pages\knowledgebases\index.vue:122:36
at ReactiveEffect.fn (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:998:13)
at ReactiveEffect.run (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:181:19)
at get value [as value] (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:1010:109)
at unref (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:1132:29)
at Object.get (E:\chat-ollama\chat-ollama-main\node_modules\@vue\reactivity\dist\reactivity.cjs.js:1138:35)
at _sfc_ssrRender (E:\chat-ollama\chat-ollama-main\pages\knowledgebases\index.vue:357:18)
at renderComponentSubTree (E:\chat-ollama\chat-ollama-main\node_modules\@vue\server-renderer\dist\server-renderer.cjs.js:695:9)
at E:\chat-ollama\chat-ollama-main\node_modules\@vue\server-renderer\dist\server-renderer.cjs.js:639:25
at async unrollBuffer$1 (E:\chat-ollama\chat-ollama-main\node_modules\@vue\server-renderer\dist\server-renderer.cjs.js:882:16)
Using the qwen:4b model fails with [500] stream.getReader is not a function
[
Document {
pageContent: '\n' +
'49\n' +
'\n' +
'\n' +
'\n' +
'【\n' +
'\n' +
'【***************】',
metadata: { blobType: 'application/pdf', source: 'blob', loc: [Object] }
}
]
<ref *1> IterableReadableStream [ReadableStream] {
_state: 'readable',
_reader: undefined,
_storedError: undefined,
_disturbed: false,
_readableStreamController: ReadableStreamDefaultController {
_controlledReadableStream: [Circular *1],
_queue: SimpleQueue {
_cursor: 0,
_size: 0,
_front: [Object],
_back: [Object]
},
_queueTotalSize: 0,
_started: true,
_closeRequested: false,
_pullAgain: false,
_pulling: true,
_strategySizeAlgorithm: [Function (anonymous)],
_strategyHWM: 1,
_pullAlgorithm: [Function: pullAlgorithm],
_cancelAlgorithm: [Function: cancelAlgorithm]
},
reader: undefined
}
[nuxt] [request error] [unhandled] [500] stream.getReader is not a function
at Function.fromReadableStream (chat-ollama/node_modules/@langchain/core/dist/utils/stream.js:68:31)
at createOllamaStream (chat-ollama/node_modules/@langchain/community/dist/utils/ollama.js:36:43)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async createOllamaChatStream (chat-ollama/node_modules/@langchain/community/dist/utils/ollama.js:57:5)
at async ChatOllama._streamResponseChunks (chat-ollama/node_modules/@langchain/community/dist/chat_models/ollama.js:396:30)
at async ChatOllama._streamIterator (chat-ollama/node_modules/@langchain/core/dist/language_models/chat_models.js:78:34)
at async ChatOllama.transform (chat-ollama/node_modules/@langchain/core/dist/runnables/base.js:337:9)
at async wrapInputForTracing (chat-ollama/node_modules/@langchain/core/dist/runnables/base.js:231:30)
at async pipeGeneratorWithSetup (chat-ollama/node_modules/@langchain/core/dist/utils/stream.js:223:19)
at async StringOutputParser._transformStreamWithConfig (chat-ollama/node_modules/@langchain/core/dist/runnables/base.js:252:26)
As titled.
Support Anthropic's newly released Claude 3 Haiku.
I tried modifying nuxt.config.ts and package.json, but neither seems to have any effect.
nuxt.config.ts
export default defineNuxtConfig({
  server: {
    host: '0.0.0.0',
    port: 8080
  },
  devtools: { enabled: true },
  modules: [
    '@nuxt/ui'
  ]
})
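For reference, a sketch of a config that may actually take effect (not verified against this repo's setup): Nuxt 3 ignores a top-level server key in nuxt.config.ts; the dev server address is normally set through the devServer option, or via the PORT/HOST environment variables:

```
export default defineNuxtConfig({
  devtools: { enabled: true },
  modules: [
    '@nuxt/ui'
  ],
  // Nuxt 3 reads devServer, not `server`, for `nuxt dev`:
  devServer: {
    host: '0.0.0.0',
    port: 8080
  }
})
```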
Instructions page:
Invalid prisma.instruction.findMany()
invocation in
/opt/localLLM/chatdemo/chat-ollama/server/api/instruction/index.get.ts:6:1
Knowledge base page:
Error fetching knowledge bases: PrismaClientInitializationError:
Invalid prisma.knowledgeBase.findMany()
invocation in
/opt/localLLM/chatdemo/chat-ollama/server/api/knowledgebases/index.get.ts:6:1
3 const listKnowledgeBases = async (): Promise<KnowledgeBase[] | null> => {
4 const prisma = new PrismaClient();
5 try {
→ 6 return await prisma.knowledgeBase.findMany(
Prisma Client could not locate the Query Engine for runtime "rhel-openssl-1.0.x".
Additionally, when chatting with mixtral the output keeps repeating and never stops; I had to shut down the entire backend.
After clicking Save, the terminal returns:
Created knowledge base DataBot: [object Object]
[nuxt] [request error] [unhandled] [500] Chroma getOrCreateCollection error: Error: [object Response]
The SQLite database is hooked up. Chroma is deployed in a Docker image on a server; accessing it via the IP on port 8000 returns: {"detail":"Not Found"}
CHROMADB_URL in .env is set to that IP and port 8000.
Is something misconfigured somewhere?
Warning: Your current platform `windows` is not included in your generator's `binaryTargets` configuration ["linux-arm64-openssl-3.0.x","darwin-arm64","debian-openssl-3.0.x"].
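A common fix for this class of Prisma error, sketched here (check against the repo's actual prisma/schema.prisma): add the missing platform, or the special "native" target, to the generator's binaryTargets and re-run prisma generate:

```
generator client {
  provider      = "prisma-client-js"
  // "native" resolves to the platform Prisma is generated on (e.g. Windows);
  // the explicit entries cover the deployment targets.
  binaryTargets = ["native", "linux-arm64-openssl-3.0.x", "darwin-arm64", "debian-openssl-3.0.x"]
}
```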
{
"url": "/api/models/",
"statusCode": 500,
"statusMessage": "",
"message": "fetch failed",
"stack": "<pre><span class=\"stack internal\">at Object.fetch (node:internal/deps/undici/undici:11731:11)</span>\n<span class=\"stack internal\">at process.processTicksAndRejections (node:internal/process/task_queues:95:5)</span></pre>"
}
Windows system; Node version: v18.19.1
1. Your other nomic-embed-text tutorial requires registering for a KEY, but this project doesn't seem to use one?
2. Is the maximum file size nomic-embed-text supports limited to 8000 tokens? Or does the limit depend on the selected model?
http://localhost:3000/knowledgebases
Refreshing directly on the knowledge base page throws an error, and knowledge bases can't be uploaded either. The docker logs show the following errors:
The table `main.KnowledgeBase` does not exist in the current database.
at _n.handleRequestError (/app/.output/server/node_modules/@prisma/client/runtime/library.js:123:6854)
at _n.handleAndLogRequestError (/app/.output/server/node_modules/@prisma/client/runtime/library.js:123:6188)
at _n.request (/app/.output/server/node_modules/@prisma/client/runtime/library.js:123:5896)
at async l (/app/.output/server/node_modules/@prisma/client/runtime/library.js:128:10871)
at async listKnowledgeBases (file:///app/.output/server/chunks/index.get2.mjs:13:12)
at async Object.handler (file:///app/.output/server/chunks/index.get2.mjs:24:26)
at async Object.handler (file:///app/.output/server/chunks/nitro/node-server.mjs:2908:19)
at async toNodeHandle (file:///app/.output/server/chunks/nitro/node-server.mjs:3174:7)
at async ufetch (file:///app/.output/server/chunks/nitro/node-server.mjs:3777:17)
at async $fetchRaw2 (file:///app/.output/server/chunks/nitro/node-server.mjs:3650:26) {
code: 'P2021', clientVersion: '5.10.2', meta: { modelName: 'KnowledgeBase', table: 'main.KnowledgeBase' } }
[nuxt] [request error] [unhandled] [500] Cannot read properties of null (reading 'map')
at ./.output/server/chunks/app/_nuxt/index-kxFoGMUd.mjs:101:40
at ReactiveEffect.fn (./.output/server/node_modules/@vue/reactivity/dist/reactivity.cjs.prod.js:927:13)
at ReactiveEffect.run (./.output/server/node_modules/@vue/reactivity/dist/reactivity.cjs.prod.js:170:19)
at get value [as value] (./.output/server/node_modules/@vue/reactivity/dist/reactivity.cjs.prod.js:938:68)
at unref (./.output/server/node_modules/@vue/reactivity/dist/reactivity.cjs.prod.js:1036:29)
at ./.output/server/chunks/app/_nuxt/index-kxFoGMUd.mjs:340:15
at renderComponentSubTree (./.output/server/node_modules/@vue/server-renderer/dist/server-renderer.cjs.prod.js:430:9)
at ./.output/server/node_modules/@vue/server-renderer/dist/server-renderer.cjs.prod.js:374:25
at async unrollBuffer$1 (./.output/server/node_modules/@vue/server-renderer/dist/server-renderer.cjs.prod.js:617:16)
at async unrollBuffer$1 (./.output/server/node_modules/@vue/server-renderer/dist/server-renderer.cjs.prod.js:622:16)
Ollama: { host: 'http://localhost:11434', username: null, password: null }
OpenAI and Anthropic accounts get banned frequently; Google's API is more convenient to use.
Currently the OpenAI API is accessed at its default URL, right? How do I point OpenAI at a relay URL inside China? Thanks.
Error: Ollama: { host: 'null', username: 'null', password: 'null' }
Created knowledge base 平安: [object Object]
[nuxt] [request error] [unhandled] [500] Chroma getOrCreateCollection error: Error: TypeError: fetch failed
at Chroma.ensureCollection (./node_modules/@langchain/community/dist/vectorstores/chroma.js:99:23)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Chroma.fromExistingCollection (./node_modules/@langchain/community/dist/vectorstores/chroma.js:274:9)
at ingestDocument (./server/api/knowledgebases/index.post.ts:25:1)
at Object.handler (./server/api/knowledgebases/index.post.ts:92:1)
at async Object.handler (./node_modules/h3/dist/index.mjs:1962:19)
at async Server.toNodeHandle (./node_modules/h3/dist/index.mjs:2249:7)
On Colab I mapped port 3000 to a URL through ngrok. Chatting with the LLM directly works fine, but nothing happens when creating a knowledge base.
This part of the code is currently hardcoded to OpenAI gpt-3.5-turbo.
https://github.com/sugarforever/chat-ollama/blob/main/server/api/chat/index.ts#L25
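One way to remove the hardcoding, as a sketch (the resolveChatModel helper and the OPENAI_MODEL variable name are illustrative, not from the repo): fall back from an explicit request parameter to an environment variable, then to the current default:

```typescript
// Illustrative helper: choose the chat model instead of hardcoding it.
export function resolveChatModel(
  requested?: string,
  env: Record<string, string | undefined> = process.env
): string {
  // Priority: explicit request > OPENAI_MODEL env var > current default
  return requested ?? env.OPENAI_MODEL ?? "gpt-3.5-turbo";
}
```

The handler at the linked line could then call something like `resolveChatModel(body.model)` instead of passing the literal model name.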