Use Ollama for local LLM inference directly from Raycast.
- Ollama installed and running.
- At least the orca 3B and llama2 7B models installed (they are the defaults). Use the 'Manage Models' command to pull models, or the ollama CLI:

```shell
ollama pull orca
ollama pull llama2
```
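To confirm the prerequisites above are met, you can check that the ollama CLI is installed and that the models were pulled. This is a minimal sketch; `ollama list` is the standard CLI command for listing locally installed models:

```shell
# Check that the ollama CLI is on PATH, then list installed models.
# Falls back to a message so the check exits cleanly either way.
if command -v ollama >/dev/null 2>&1; then
  ollama list || echo "ollama CLI found, but the server is not running"
else
  echo "ollama CLI not found"
fi
```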
This plugin allows you to select a different model for each command. Keep in mind that the corresponding model must be installed on your machine. You can find all available models here.
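Under the hood, commands talk to the local Ollama server over its REST API (default port 11434). A minimal sketch of such a request, using Ollama's `/api/generate` endpoint — not the extension's actual code — shows how the `model` field is what selects a different model per command:

```shell
# Build a non-streaming completion request for the local Ollama server.
# Swapping the "model" value (e.g. "orca" instead of "llama2") is all it
# takes to run the same prompt against a different installed model.
PAYLOAD='{"model": "llama2", "prompt": "Why is the sky blue?", "stream": false}'
echo "$PAYLOAD"
# Sending it (requires the server to be running and llama2 to be pulled):
#   curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
```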
With 'Create Custom Command' you can create your own command or chatbot using whichever model you want.