Comments (4)
Hey @jakern, thanks for the report.
Actually, you can skip the server checks at the beginning and use the option to disable them, which persists across sessions. Then, in the settings, update the API URL and path to point at your Docker instance and everything should just work.
Some docs to explain this a little better would probably be good.
Does that help?
Many thanks,
from twinny.
Thanks a bunch! I didn't see that option. I am able to get past the check and receive suggestions from the endpoint! Closing.
I too want to use Ollama running in a Docker container (on my local machine). I figured the setup would be exactly like the native Ollama app, but when asking the Twinny chat, I get either an endless progress spinner or "Sorry, I don’t understand. Please try again."
I've got the container up and running with these steps:
- Port `11434` exposed to the host
- Installed the `codellama:7b-instruct` model
- Able to execute `docker exec -it ollama ollama run codellama:7b-instruct` and interact with Ollama via the terminal
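For completeness, the setup described in the steps above can be reproduced with commands along these lines (the container name `ollama` and port `11434` are taken from the steps; the volume name is an assumption, adjust to your environment):

```shell
# Run Ollama in Docker with port 11434 published to the host
docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama

# Pull the model used in this thread
docker exec -it ollama ollama pull codellama:7b-instruct

# Sanity check: chat with the model from inside the container
docker exec -it ollama ollama run codellama:7b-instruct
```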
Chatting with Ollama in terminal
Here are some logs from when I interact with the Ollama Docker container directly via the `exec` command:
```
2024-03-04 15:43:01 [GIN] 2024/03/04 - 21:43:01 | 200 | 1.855333ms | 127.0.0.1 | POST "/api/chat"
2024-03-04 15:43:20 [GIN] 2024/03/04 - 21:43:20 | 200 | 4.71799671s | 127.0.0.1 | POST "/api/chat"
2024-03-04 15:43:33 [GIN] 2024/03/04 - 21:43:33 | 200 | 5.928768169s | 127.0.0.1 | POST "/api/chat"
2024-03-04 15:43:50 [GIN] 2024/03/04 - 21:43:50 | 200 | 5.541841752s | 127.0.0.1 | POST "/api/chat"
```
Chatting via Twinny
When chatting via Twinny, this is what I saw in the Ollama Docker container logs:
```
2024-03-04 15:38:26 [GIN] 2024/03/04 - 21:38:26 | 200 | 53.482893024s | 192.168.65.1 | POST "/v1/chat/completions"
2024-03-04 15:38:26 [GIN] 2024/03/04 - 21:38:26 | 200 | 27.580572721s | 192.168.65.1 | POST "/v1/chat/completions"
```
but that didn't get Twinny talking to Ollama. I'm assuming I'm not exposing the container to the host correctly...
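One way to take Twinny out of the equation is to hit the container's API directly from the host (not via `docker exec`), assuming the default `11434` port mapping; if this fails, the port isn't actually exposed to the host:

```shell
# List the models the container serves; a JSON response
# confirms the API is reachable from the host machine
curl http://127.0.0.1:11434/api/tags
```

If this returns a model list but Twinny still fails, the problem is in the extension's settings rather than the container networking.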
Above Advice
I was a bit confused by the advice:
> skip the server checks in the beginning and use the option to disable them
Is this setting in the VS Code User settings? Could I get a screenshot?
> update the API URL and path to point towards your Docker
From this I assumed I had to change `hostname` -> `127.0.0.1` and `/v1/chat/completions` -> `/api/chat`.
I have the same issue. I'm able to chat via the console, but Twinny keeps answering "Sorry, I don’t understand. Please try again." I already changed `/v1/chat/completions` -> `/api/chat`, as the console shows this path. I use the latest `ollama/ollama` Docker image from Docker Hub.
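Since the container logs earlier in the thread show 200 responses on `/v1/chat/completions` (Ollama's OpenAI-compatible endpoint) as well as `/api/chat`, it can help to exercise both paths directly from the host to confirm which one actually answers; a sketch, assuming the default port and the model from this thread:

```shell
# Native Ollama chat endpoint
curl http://127.0.0.1:11434/api/chat \
  -d '{"model": "codellama:7b-instruct", "messages": [{"role": "user", "content": "hi"}], "stream": false}'

# OpenAI-compatible endpoint, also served by Ollama
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "codellama:7b-instruct", "messages": [{"role": "user", "content": "hi"}]}'
```

If both return a completion, the endpoint path alone is unlikely to be the cause and the extension's hostname/port settings are worth re-checking.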