llmvm's People

Contributors

djandries, jcdickinson, peterwilli

llmvm's Issues

How to use codeassistant with language servers that have customisations?

An example is in the README:

[language-server.llmvm-codeassist-rust]
command = "llmvm-codeassist"
args = ["rust-analyzer"]

[[language]]
name = "rust"
language-servers = [ "llmvm-codeassist-rust" ]

But I have this LSP server configuration:

[language-server.volar]
args = ["--stdio"]
command = "vue-language-server"

[language-server.volar.config.typescript]
tsdk = "/data/data/com.termux/files/home/.local/share/pnpm/global/5/node_modules/typescript/lib"

[language-server.volar.config.languageFeatures]
semanticTokens = true
references = true
definition = true
typeDefinition = true
callHierarchy = true
hover = true
rename = true
renameFileRefactoring = true
signatureHelp = true
codeAction = true
completion = { defaultTagNameCase = 'both', defaultAttrNameCase = 'kebabCase' }
schemaRequestService = true
documentHighlight = true
documentLink = true
codeLens = true
diagnostics = true
takeOverMode = true

I have no idea how to adapt it to codeassist.
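A possible adaptation, untested: wrap `vue-language-server` with `llmvm-codeassist` by passing the real server command as arguments, mirroring the rust-analyzer example from the README, and move the volar `config` tables under the new server name. Whether llmvm-codeassist forwards these `config` tables to the wrapped server is an assumption here.

```toml
# Untested sketch: llmvm-codeassist wraps vue-language-server,
# following the rust-analyzer example from the README.
[language-server.llmvm-codeassist-vue]
command = "llmvm-codeassist"
args = ["vue-language-server", "--stdio"]

# Assumption: the volar-specific config tables can simply be renamed
# to the new server name so Helix still sends them on initialization.
[language-server.llmvm-codeassist-vue.config.typescript]
tsdk = "/data/data/com.termux/files/home/.local/share/pnpm/global/5/node_modules/typescript/lib"

[[language]]
name = "vue"
language-servers = ["llmvm-codeassist-vue"]
```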

Custom endpoint for `outsource/openai` backend doesn't seem to work

I'm trying to use a custom endpoint for an OpenAI compatible model API.

Running the following gives an error:

$ llmvm-core generate --model 'outsource/openai-chat/mixtral-8x7b/endpoint=https://mixtral-8x7b.lepton.run/api/v1' --prompt 'Tell me a joke' --max-tokens 64

failed to generate: failed to parse model name

Escaping the endpoint URL works, but requests are still sent to the default OpenAI endpoint:

$ llmvm-core generate --model 'outsource/openai-chat/mixtral-8x7b/endpoint=https:\/\/mixtral-8x7b.lepton.run\/api\/v1' --prompt 'Tell me a joke' --max-tokens 64

failed to generate: backend error: bad http status code: 401 Unauthorized body: {
    "error": {
        "message": "Incorrect API key provided: XXXXXX********************XXX. You can find your API key at https://platform.openai.com/account/api-keys.",
        "type": "invalid_request_error",
        "param": null,
        "code": "invalid_api_key"
    }
}

How do I use a custom endpoint?

Add an install script/wizard?

Hey, I just gotta say I went from helix->vscode for the sole purpose of using copilot. But I'm now moving back to helix, which I'm stoked about; this repo is brilliant!

I just want to preface this by saying that I'm a moron, and this is intended as a rant about my experiences as a moron installing and configuring this tool. This is not a reflection of how good/bad your tool is once it's all up and running.

I found the installation/configuration process a little disjointed; I made a bunch of mistakes, and it took me a couple of hours to get anything working. While the whole project seems to be super modular, which is great from a software-engineering perspective, it ends up being unnecessarily complicated to install and configure.

What are your thoughts on adding a small install/configuration tool, something like a command-line wizard?

I could see it looking something like this:

$ cargo install llmvm
$ llmvm --quickstart
   # Welcome to the llmvm quickstart! Which components would you like to install?
   [x] Code assistant
   [ ] CLI chat
   
   # Which code-assistant model would you like to use?
   [x] OpenAI: chatgpt3.5 (default)
   [ ] OpenAI: chatgpt4
   [ ] Llama

   What is your OpenAI key: asdfljasdf;lksadhf
   
   # Perfect, you're all set up. Welcome to llmvm :)

All the other management can happen behind the scenes.

Once I fully work out all the other ins and outs, I'll likely open a PR improving some of the docs, and I might even build the above tool if I have a spare hour or so. There are still some things that aren't entirely clear to me, like how I would choose chatgpt-4-turbo rather than 3.5. I tried setting that in the configs, but couldn't get the naming right and only succeeded in crashing the LSP.

Anyway, food for thought. I'm super grateful for all your hard work here! I think that this repo is highly underrated.

Allow chat to load file, codebase or identifier blocks

Similar to how codeassist proxies the LSP to acquire code blocks for context, I think it would be useful if the chat interface could likewise acquire context from roughly three sources:

  • Current File (+ LSP-enhanced)
  • Current Selection (+ LSP-enhanced)
  • Entire Project (+LSP-enhanced)

In Helix, users could bind a command to launch a chat with one of the three context levels.

This would enable use cases like:

  • "Is there a bug in this file?"
  • "Write me a readme for this project"
  • "What does 'myFunctionInterface' do in simple English?"
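To illustrate the binding idea, here is a hypothetical Helix `config.toml` sketch. It assumes a recent Helix with command expansions, and it invents an `llmvm-chat` context flag (`--context-file`) purely for illustration; neither the flag nor the keybinding exists in llmvm today.

```toml
# Hypothetical sketch: space-l-f launches a chat seeded with the
# current file. The --context-file flag is invented for illustration.
[keys.normal.space.l]
f = ":sh llmvm-chat --context-file %{buffer_name}"
```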

The code assistant crashes on non-English characters

Logs from the Helix editor:

2024-01-17T12:03:58.343 helix_lsp::transport [ERROR] vuels err <- "thread 'tokio-runtime-worker' panicked at /data/data/com.termux/files/home/.cargo/registry/src/index.crates.io-6f17d22bba15001f/llmvm-codeassist-0.1.0/src/content.rs:127:37:\n"
2024-01-17T12:03:58.343 helix_lsp::transport [ERROR] vuels err <- "byte index 55 is not a char boundary; it is inside 'о' (bytes 54..56) of `> пишу код чтобы другие разработчики могли разобраться.</script>`\n"
2024-01-17T12:03:58.343 helix_lsp::transport [ERROR] vuels err <- "note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\n"
2024-01-17T12:03:58.347 helix_lsp::transport [ERROR] vuels err <- "Error: channel closed\n"
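The panic comes from slicing a `&str` at a byte index that falls inside a multi-byte UTF-8 character (Cyrillic letters are two bytes each). A minimal sketch of the kind of fix this would need, clamping an index down to the nearest character boundary before slicing; `str::floor_char_boundary` is still unstable in std, so a stable hand-rolled version is shown, and the function name is illustrative rather than taken from the llmvm codebase:

```rust
/// Clamp `idx` down to the nearest UTF-8 char boundary in `s`.
/// Slicing &s[..idx] at a non-boundary byte index panics, which is
/// what the content.rs backtrace above shows.
fn floor_char_boundary(s: &str, mut idx: usize) -> usize {
    if idx >= s.len() {
        return s.len();
    }
    // A char boundary is at most 3 bytes away, so this loop is short.
    while !s.is_char_boundary(idx) {
        idx -= 1;
    }
    idx
}

fn main() {
    let s = "пишу код"; // each Cyrillic letter occupies 2 bytes
    // Byte 3 falls inside 'и' (bytes 2..4), so it is clamped to 2.
    assert_eq!(floor_char_boundary(s, 3), 2);
    // Indices past the end clamp to the full length.
    assert_eq!(floor_char_boundary(s, 100), s.len());
    println!("safe slice: {}", &s[..floor_char_boundary(s, 3)]);
}
```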

Add a `@` directive to add to context

  • The assistant asks the LSP server for all symbol positions.
  • For each symbol, the assistant requests a type definition location.
  • A folding range is requested for each type definition location.
  • A text snippet is retrieved from the filesystem for each folding range.


You can add particular blocks of code to the context with [...] "@" [In the prompt]

It seems like a convenient feature to quickly add code blocks to the context by referencing them by identifier with an `@` directive.

As seen on: https://cursor.sh/features

If I understand the architecture of llmvm correctly, this idea is not completely off base, either. Unfortunately, I can't contribute any practical experience to judge whether it's actually useful and productive, so please take this feature request with a grain of salt as well.
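On the parsing side, extracting the directives from a prompt is straightforward. A minimal Rust sketch, with all names hypothetical and not taken from the llmvm codebase, that pulls `@identifier` references out of a prompt so each one could then be resolved through the LSP and its definition added to the model context:

```rust
/// Hypothetical sketch: collect `@identifier` directives from a prompt.
/// Accepts alphanumerics, underscores, and `::` path separators.
fn extract_directives(prompt: &str) -> Vec<&str> {
    prompt
        .split_whitespace()
        // Keep only words starting with '@', dropping the prefix.
        .filter_map(|word| word.strip_prefix('@'))
        .filter(|ident| {
            !ident.is_empty()
                && ident
                    .chars()
                    .all(|c| c.is_alphanumeric() || c == '_' || c == ':')
        })
        .collect()
}

fn main() {
    let prompt = "Refactor @MyStruct::new and explain @parse_config please";
    assert_eq!(
        extract_directives(prompt),
        vec!["MyStruct::new", "parse_config"]
    );
}
```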
