Comments (7)
@felixepiklah Can you reproduce this locally without OpenVPN etc?
from bionic-gpt.
What happens if you add DNS, like in docker-compose-ollama.yml?
  llm-api:
    image: ghcr.io/berriai/litellm:main-v1.10.3
    command: ["/bin/sh", "-c", "pip install async_generator && litellm --model ollama/llama2 --api_base http://host.docker.internal:11434 --host 0.0.0.0 --port 3000"]
    entrypoint: []
    platform: linux/amd64
    dns:
      - 8.8.8.8
      - 8.8.4.4
    ports:
      - "3000:3000"
    extra_hosts:
      - "host.docker.internal:host-gateway"
Or like docker-compose-hardened:
    # False DNS so the service can't make DNS lookups.
    dns: 0.0.0.0
@9876691 There were no issues when I tried it locally. I did some testing with OpenVPN and it failed as described.
Please advise.
Thanks & Regards,
Felix Nguyen
@felixepiklah Can you run a docker ps and show me what is running?
Edit: If I run BionicGPT without docker-compose-gpu.yml, everything works as expected.
I seem to be having a similar issue, with the latest docker-compose.yml and docker-compose-gpu.yml as of today. I ran everything with the default values in the compose files on an Ubuntu VM with NVIDIA drivers installed, using ssh -D 8222 user@ipofvm to access the web UI from my desktop. Everything seems to work fine until I send a message; then I get the following error in the docker logs:
app-1 | 2024-02-19T20:34:31.312147Z INFO embeddings_api: Processing 384 bytes
app-1 | 2024-02-19T20:34:31.312225Z INFO axum_server::prompt: prompt.name="Default (Exclude All Datasets)"
app-1 | 2024-02-19T20:34:31.314269Z INFO axum_server::prompt: Retrieved 0 chunks
app-1 | 2024-02-19T20:34:31.314887Z INFO axum_server::prompt: Retrieved 3 history items
app-1 | 2024-02-19T20:34:31.314895Z INFO axum_server::prompt: Using context size of 1024
app-1 | 2024-02-19T20:34:31.802327Z ERROR axum_server::errors: response="status = 422 Unprocessable Entity, message = error trying to connect: tcp connect error: Connection refused (os error 111)"
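The "Connection refused (os error 111)" above means nothing was listening on the host:port the app tried to reach (here, presumably the LLM API). The failure mode can be reproduced in isolation with a small sketch (the helper and port are illustrative, not part of BionicGPT):

```python
# Probe whether a TCP endpoint is accepting connections, the same check
# that fails with errno 111 in the log above when the backend is down.
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Grab a port that is currently free, then probe it: the connect
    # is refused because nothing is listening there.
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        free_port = s.getsockname()[1]
    print(tcp_reachable("127.0.0.1", free_port))  # False: connection refused
```

Running the same probe from inside the app container against the LLM API's address would distinguish "service not up / wrong port" from a DNS or proxy problem.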
I have also tried changing localhost to the IP of the VM in docker-compose.yml and accessing it via the VM's IP, with the same results. In both cases I also get an error in the web browser console:
Uncaught (in promise) Error: request ended without sending any chunks
tx error.ts:5
es ChatCompletionStream.ts:85
_fromReadableStream ChatCompletionStream.ts:141
fromReadableStream ChatCompletionStream.ts:45
_run AbstractChatCompletionRunner.ts:76
setTimeout handler*_run AbstractChatCompletionRunner.ts:75
fromReadableStream ChatCompletionStream.ts:45
re streaming-chat.ts:45
promise callback*re streaming-chat.ts:43
t9 streaming-chat.ts:13
<anonymous> index.ts:32
rp turbo.es2017-esm.js:344
notifyApplicationAfterPageLoad turbo.es2017-esm.js:3158
visitCompleted turbo.es2017-esm.js:3062
visitCompleted turbo.es2017-esm.js:2417
complete turbo.es2017-esm.js:1787
loadResponse turbo.es2017-esm.js:1854
render turbo.es2017-esm.js:2037
loadResponse turbo.es2017-esm.js:1845
visitRequestCompleted turbo.es2017-esm.js:2078
recordResponse turbo.es2017-esm.js:1831
simulateRequest turbo.es2017-esm.js:1818
issueRequest turbo.es2017-esm.js:1808
visitStarted turbo.es2017-esm.js:2065
start turbo.es2017-esm.js:1767
startVisit turbo.es2017-esm.js:2341
visitProposedToLocation turbo.es2017-esm.js:2060
visitProposedToLocation turbo.es2017-esm.js:3049
proposeVisit turbo.es2017-esm.js:2331
visit turbo.es2017-esm.js:2987
fetchResponseLoaded turbo.es2017-esm.js:3611
loadFrameResponse turbo.es2017-esm.js:3564
loadResponse turbo.es2017-esm.js:3442
formSubmissionSucceededWithResponse turbo.es2017-esm.js:3517
requestSucceededWithResponse turbo.es2017-esm.js:790
receive turbo.es2017-esm.js:546
perform turbo.es2017-esm.js:521
start turbo.es2017-esm.js:743
formSubmitted turbo.es2017-esm.js:3481
submitBubbled turbo.es2017-esm.js:951
submitCaptured turbo.es2017-esm.js:939
start turbo.es2017-esm.js:960
connect turbo.es2017-esm.js:3374
connectedCallback turbo.es2017-esm.js:122
<anonymous> turbo.es2017-esm.js:3933
<anonymous> index-HnnVuYvQ.js:34
error.ts:5:6
Also, here is the output of docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5276ac5d85bf quay.io/oauth2-proxy/oauth2-proxy:v7.5.1 "/bin/oauth2-proxy" 5 minutes ago Up 5 minutes 0.0.0.0:7800->7800/tcp, :::7800->7800/tcp bionicgpt-oauth2-proxy-1
debf8484a8a1 ghcr.io/bionic-gpt/bionicgpt-keycloak:1.6.1 "/opt/keycloak/bin/k…" 5 minutes ago Up 5 minutes (healthy) 8080/tcp, 0.0.0.0:7810->7810/tcp, :::7810->7810/tcp, 8443/tcp bionicgpt-keycloak-1
db3c327f6a3d ghcr.io/bionic-gpt/bionicgpt:1.6.1 "./axum-server" 5 minutes ago Up 5 minutes bionicgpt-app-1
e2567bc0f334 ghcr.io/bionic-gpt/bionicgpt-pipeline-job:1.6.1 "./pipeline-job" About an hour ago Up 5 minutes bionicgpt-pipeline-job-1
10d6ba14ae6f downloads.unstructured.io/unstructured-io/unstructured-api:4ffd8bc "scripts/app-start.sh" About an hour ago Up 5 minutes 8000/tcp bionicgpt-unstructured-1
b42a98a10dad ankane/pgvector "docker-entrypoint.s…" About an hour ago Up 5 minutes (healthy) 5432/tcp bionicgpt-postgres-1
24b50107981a ghcr.io/berriai/litellm:main-v1.10.3 "/bin/sh -c 'pip ins…" About an hour ago Up 5 minutes 4000/tcp bionicgpt-llm-api-1
4e98387fcca3 ghcr.io/huggingface/text-generation-inference:1.2 "text-generation-lau…" About an hour ago Up 5 minutes bionicgpt-tgi-1
45c8ab78ded1 ghcr.io/bionic-gpt/bionicgpt-embeddings-api:cpu-0.6 "text-embeddings-rou…" About an hour ago Up 5 minutes bionicgpt-embeddings-api-1
Happy to share any other information and thank you for your time.
@cwilliams001 Please try the latest docker compose files, I made a change.
@felixepiklah I'll close this as I think it's not a Bionic issue but rather an issue with your setup.
Also you should consider using the enterprise version.