hass-ollama-conversation's Introduction

Ollama Conversation

The Ollama integration adds a conversation agent powered by Ollama in Home Assistant.

This conversation agent is unable to control your house. The Ollama conversation agent can be used in automations, but not as a sentence trigger. It can only query information that has been provided by Home Assistant. To be able to answer questions about your house, Home Assistant will need to provide Ollama with the details of your house, which include areas, devices and their states.

Installation

To install the Ollama Conversation integration to your Home Assistant instance, use this My button:

Open your Home Assistant instance and open a repository inside the Home Assistant Community Store.

Manual Installation

If the above My button doesn’t work, you can also perform the following steps manually:

  • Browse to your Home Assistant instance.
  • Go to HACS > Integrations > Custom Repositories.
  • Add custom repository.
    • Repository is ej52/hass-ollama-conversation.
    • Category is Integration.
  • Click Explore & Download Repositories.
  • From the list, select Ollama Conversation.
  • In the bottom right corner, click the Download button.
  • Follow the instructions on screen to complete the installation.

Note:

HACS does not "configure" the integration for you; you must add the Ollama Conversation integration after installing it via HACS.

  • Browse to your Home Assistant instance.
  • Go to Settings > Devices & Services.
  • In the bottom right corner, select the Add Integration button.
  • From the list, select Ollama Conversation.
  • Follow the instructions on screen to complete the setup.

Options

Options for Ollama Conversation can be set via the user interface, by taking the following steps:

  • Browse to your Home Assistant instance.
  • Go to Settings > Devices & Services.
  • If multiple instances of Ollama Conversation are configured, choose the instance you want to configure.
  • Select the integration, then select Configure.

General Settings

Settings relating to the integration itself.

  • API Timeout: The maximum amount of time, in seconds, to wait for a response from the API.

System Prompt

The starting text for the AI language model to generate new text from. This text can include information about your Home Assistant instance, devices, and areas and is written using Home Assistant Templating.

Model Configuration

The language model and additional parameters used to fine-tune the responses. The sketch after the list shows roughly how these settings map onto the Ollama API.

  • Model: The model used to generate responses.
  • Context Size: Sets the size of the context window used to generate the next token.
  • Maximum Tokens: The maximum number of words or “tokens” that the AI model should generate in its completion of the prompt.
  • Mirostat Mode: Enable Mirostat sampling for controlling perplexity.
  • Mirostat ETA: Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive.
  • Mirostat TAU: Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text.
  • Temperature: The temperature of the model. A higher value (e.g. 0.95) will lead to more unexpected results, while a lower value (e.g. 0.5) will give more deterministic results.
  • Repeat Penalty: Sets how strongly to penalize repetitions. A higher value (e.g. 1.5) will penalize repetitions more strongly, while a lower value (e.g. 0.9) will be more lenient.
  • Top K: Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.
  • Top P: Works together with top-k. A higher value (e.g. 0.95) will lead to more diverse text, while a lower value (e.g. 0.5) will generate more focused and conservative text.
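
As a rough illustration only (this is not code from the integration), the sketch below shows how the settings above correspond to the "options" object of a request to Ollama's /api/generate endpoint. The option names follow Ollama's documented parameters; the URL, model name, and values are placeholder examples.

```python
# Illustrative sketch: how the configured options roughly map onto an Ollama
# /api/generate request. URL, model and values are placeholders for your setup.
import requests

payload = {
    "model": "llama2:latest",          # Model (placeholder)
    "prompt": "How many lights are on in the living room?",
    "stream": False,
    "options": {
        "num_ctx": 2048,               # Context Size
        "num_predict": 128,            # Maximum Tokens
        "mirostat": 0,                 # Mirostat Mode (0 = disabled, 1 or 2 = enabled)
        "mirostat_eta": 0.1,           # Mirostat ETA
        "mirostat_tau": 5.0,           # Mirostat TAU
        "temperature": 0.8,            # Temperature
        "repeat_penalty": 1.1,         # Repeat Penalty
        "top_k": 40,                   # Top K
        "top_p": 0.9,                  # Top P
    },
}

response = requests.post("http://localhost:11434/api/generate", json=payload, timeout=60)
print(response.json()["response"])
```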

Contributions are welcome!

If you want to contribute to this project, please read the Contribution guidelines.

Discussions

Discussions for this integration are over on the Home Assistant Community forum.


hass-ollama-conversation's People

Contributors

dependabot[bot], ej52

hass-ollama-conversation's Issues

Edit the system prompt with a service call.

Checklist

  • I have filled out the template to the best of my ability.
  • This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feature request).
  • This issue is not a duplicate feature request of previous feature requests.

Is your feature request related to a problem? Please describe.

I have fulfilled a lifelong dream and now have a bipolar washing machine.

It would be even better if I could edit the system prompt from within an automation.

Describe the solution you'd like

A service call to set the system prompt, as sketched below.
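
A purely hypothetical sketch of what such a service could look like inside the integration; the service name ollama_conversation.set_system_prompt, its prompt field, and the hass.data layout are assumptions for illustration, not existing code:

```python
# Hypothetical sketch of the requested feature: a service that lets an
# automation rewrite the agent's system prompt on the fly.
from homeassistant.core import HomeAssistant, ServiceCall


async def async_setup_services(hass: HomeAssistant) -> None:
    """Register the (hypothetical) set_system_prompt service."""

    async def handle_set_system_prompt(call: ServiceCall) -> None:
        # Store the new prompt wherever the agent reads its options from.
        # The hass.data layout here is a placeholder, not the integration's real one.
        hass.data.setdefault("ollama_conversation", {})["system_prompt"] = call.data["prompt"]

    hass.services.async_register(
        "ollama_conversation", "set_system_prompt", handle_set_system_prompt
    )
```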

Describe alternatives you've considered

Removing the one-backend-per-integration limitation, making multiple instances with different prompts possible inside HA.
Or adding another means of creating more instances with different settings, such as the system prompt.

Additional context

Nope

"Invalid URL" - Can't add external Ollama API

System Health details

System Information

version core-2024.2.0
installation_type Home Assistant OS
dev false
hassio true
docker true
user root
virtualenv false
python_version 3.12.1
os_name Linux
os_version 6.1.74-haos
arch x86_64
timezone America/Sao_Paulo
config_dir /config
Home Assistant Community Store
GitHub API ok
GitHub Content ok
GitHub Web ok
GitHub API Calls Remaining 4960
Installed Version 1.34.0
Stage running
Available Repositories 1460
Downloaded Repositories 2
HACS Data ok
Home Assistant Cloud
logged_in false
can_reach_cert_server ok
can_reach_cloud_auth ok
can_reach_cloud ok
Home Assistant Supervisor
host_os Home Assistant OS 11.5
update_channel stable
supervisor_version supervisor-2024.01.1
agent_version 1.6.0
docker_version 24.0.7
disk_total 30.8 GB
disk_used 5.7 GB
healthy true
supported true
board ova
supervisor_api ok
version_api ok
installed_addons Advanced SSH & Web Terminal (17.1.0), Let's Encrypt (5.0.15), Piper (1.4.0), Whisper (1.0.2), openWakeWord (1.8.2)
Dashboards
dashboards 1
resources 0
mode auto-gen
Recorder
oldest_recorder_run 7 February 2024 at 22:05
current_recorder_run 8 February 2024 at 20:38
estimated_db_size 0.75 MiB
database_engine sqlite
database_version 3.44.2

Checklist

  • I have enabled debug logging for my installation.
  • I have filled out the issue template to the best of my ability.
  • This issue only contains 1 issue (if you have multiple issues, open one issue for each issue).
  • This issue is not a duplicate issue of any previous issues.

Describe the issue

I'm trying to use my existing Ollama API, but when I try to set the URL I get the error "Invalid URL".

Reproduction steps

Browse to your Home Assistant instance.
Go to Settings > Devices & Services.
In the bottom right corner, select the Add Integration button.
From the list, select Ollama Conversation.
Insert "http(s)://anything.com" on the URL

Debug logs

2024-02-08 20:38:51.456 WARNING (SyncWorker_2) [homeassistant.loader] We found a custom integration ollama_conversation which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant
2024-02-08 20:38:51.457 WARNING (SyncWorker_2) [homeassistant.loader] We found a custom integration hacs which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you experience issues with Home Assistant
2024-02-08 20:38:55.277 WARNING (MainThread) [homeassistant.components.ssdp] Could not set up UPnP/SSDP server, as a presentation URL could not be determined; Please configure your internal URL in the Home Assistant general configuration

Diagnostics dump

nothing here

Provide a mode where the model can make changes to the Home Assistant state

Checklist

  • I have filled out the template to the best of my ability.
  • This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feature request).
  • This issue is not a duplicate feature request of previous feature requests.

Is your feature request related to a problem? Please describe.

The most desired value proposition for LLM models is to have them make changes at our request.

Just answering questions about the state of our home assistant is fairly boring. We can mostly derive that ourselves with a nice dashboard on a mobile device or wall mounted tablet.

Describe the solution you'd like

Allow configuration for:

  • user/system prompt
  • desired model name
  • which entities to expose

Describe alternatives you've considered

https://github.com/acon96/home-llm

Additional context

n/a

Answers don't make any sense.

System Health details

When I ask it:
"Switch off the lights im Wohnzimmer." (im Wohnzimmer = in the living room)
I get answers like:

`deactivating Shelly Dimmer 1 and Shelly Dimmer 2 on your command. homeassistant {"service": "turn_off", "target_device": "tasmota_shellydimmer2livingroom"} ```python from homeassistant.device import Device from homeassistant.loader import load_component class TasmotaShellyDimmer2LivingRoom(Device): def turn_off(self, duration: float) -> None: super().turn_off(duration) # Turn off the light upon de

Or, for the same question
"Switch off the lights im Wohnzimmer."
I get this response:
deactivating Shelly Dimmer 1 as requested. ```homeassistant {"service": "turn_off", "target_device": "switch.TasmotaShellyDimmer2LivingRoom"} ```` `

Also in German (my setup language), when I ask "Schalte das Licht im Wohnzimmer aus" ("Turn off the light in the living room"),
I get the response:
okay, schalte das Licht im wohnzimmer aus. homeassistant {"service": "turn_off", "target_device": "TasmotaShellyDimmer2LivingRoom"}

However, nothing actually happens.
Isn't it parsed correctly?
The entity of the light would be light.tasmotashellydimmer2livingroom.
I don't know why it sometimes says
switch.TasmotaShellyDimmer2LivingRoom or
tasmota_shellydimmer2livingroom or
TasmotaShellyDimmer2LivingRoom.

Also, I'm not sure whether it would even use the real entity light.tasmotashellydimmer2livingroom, and I'm not sure whether the response gets parsed by Home Assistant at all.

Model used: fixt/home-3b-v3:latest

System Prompt

This smart home is controlled by Home Assistant.

An overview of the areas and the devices in this smart home:
{%- for area in areas() %}
  {%- set area_info = namespace(printed=false) %}
  {%- for device in area_devices(area) -%}
    {%- if not device_attr(device, "disabled_by") and not device_attr(device, "entry_type") and device_attr(device, "name") %}
      {%- if not area_info.printed %}

{{ area_name(area) }}:
        {%- set area_info.printed = true %}
      {%- endif %}
- {{ device_attr(device, "name") }}{% if device_attr(device, "model") and (device_attr(device, "model") | string) not in (device_attr(device, "name") | string) %} ({{ device_attr(device, "model") }}){% endif %}
    {%- endif %}
  {%- endfor %}
{%- endfor %}

Answer the user's questions about the world truthfully.

If the user wants to control a device, reject the request and suggest using the Home Assistant app.

Checklist

  • I have enabled debug logging for my installation.
  • I have filled out the issue template to the best of my ability.
  • This issue only contains 1 issue (if you have multiple issues, open one issue for each issue).
  • This issue is not a duplicate issue of any previous issues.

Describe the issue

Ollama Conversation isn't functioning properly. I'm unsure how to describe the issue more precisely.

Reproduction steps

  1. Set up Ollama (in Docker).
  2. Set up open-webui (in Docker).
  3. Download fixt/home-3b-v3:latest.
  4. Install Ollama Conversation from HACS (tested with the latest stable and the latest beta).
  5. Set up the Ollama Conversation integration with:
  • Context Size: 2048
  • Maximum Tokens: 128
  • Mirostat Mode: disabled
  • the system prompt mentioned above
  6. Change the Voice Assistant conversation agent to Ollama Conversation.

Debug logs

There are no relevant logs regarding this issue. Already checked them.

Diagnostics dump

No logs from the Ollama Conversation integration in the logfile.

Break response into multiple replies

Checklist

  • I have filled out the template to the best of my ability.
  • This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feature request).
  • This issue is not a duplicate feature request of previous feature requests.

Is your feature request related to a problem? Please describe.

Ollama is slow on old hardware, so waiting for an Assist reply is tedious, and the reply can arrive as one long message containing multiple sentences.

Describe the solution you'd like

Ollama can stream its output token by token. It would be nice to have an option to accumulate the Ollama response and send it as a reply after each full stop (or perhaps another custom delimiter), then keep replying with new messages until the stream is done, as sketched below.

I'm not sure whether this can be done from the async_process function, but looking at the Conversation API you should be able to send replies with the same conversation_id.
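
A minimal sketch of the idea, assuming the public Ollama streaming API (/api/generate returning newline-delimited JSON chunks); the emit callback is a hypothetical stand-in for however the integration would send each intermediate reply:

```python
# Sketch only: accumulate Ollama's streamed tokens and emit a chunk whenever
# a full stop (or custom delimiter) is seen, flushing the remainder at the end.
import json
import aiohttp


async def stream_in_sentences(base_url: str, model: str, prompt: str, emit, delimiter: str = "."):
    buffer = ""
    async with aiohttp.ClientSession() as session:
        async with session.post(
            f"{base_url}/api/generate",
            json={"model": model, "prompt": prompt, "stream": True},
        ) as resp:
            async for line in resp.content:  # Ollama streams newline-delimited JSON
                if not line.strip():
                    continue
                chunk = json.loads(line)
                buffer += chunk.get("response", "")
                while delimiter in buffer:
                    sentence, buffer = buffer.split(delimiter, 1)
                    await emit(sentence.strip() + delimiter)  # reply with one sentence
                if chunk.get("done"):
                    break
    if buffer.strip():
        await emit(buffer.strip())  # flush whatever is left after the stream ends
```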

Describe alternatives you've considered

no alternatives

Additional context

no additional context

Make the timeout editable within the GUI

Checklist

  • I have filled out the template to the best of my ability.
  • This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feature request).
  • This issue is not a duplicate feature request of previous feature requests.

Is your feature request related to a problem? Please describe.

I'm trying to make the Mastodon account of my washing machine more interesting with your AI integration.
But I have old hardware and the 60-second timeout is too brief.

I know waiting 2 minutes for an Assist reply is too long, but it's relatively short for generating some content during a 2-hour wash.

Describe the solution you'd like

Make the timeout editable within the configuration.

Describe alternatives you've considered

I don't see an alternative, apart from a won't-do.

Additional context

https://geekdom.social/@setaggi

Call to another container

Checklist

  • I have filled out the template to the best of my ability.
  • This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feature request).
  • This issue is not a duplicate feature request of previous feature requests.

Is your feature request related to a problem? Please describe.

It does not seem that you can call a container via IP and port, as the integration looks for an instance that is running locally.

Describe the solution you'd like

It should be possible to point the integration at a container via IP and port, rather than only looking for an instance that is running locally.

Describe alternatives you've considered

None

Additional context

none

Set keep_alive parameter

Checklist

  • I have filled out the template to the best of my ability.
  • This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feature request).
  • This issue is not a duplicate feature request of previous feature requests.

Is your feature request related to a problem? Please describe.

The first time I run a prompt it takes long, 30 or 40 seconds. Every subsequent one runs in about 10 seconds.

I tried some curl commands provided in the Ollama FAQ to preload the model, but with no luck.

Perhaps it has to do with the prompt or the session or something :/

But could you add the keep_alive parameter as an option? I have a CPU-only system but plenty of RAM.

Describe the solution you'd like

An option to set keep_alive as described in the Ollama FAQ, as sketched below.
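
For reference, a minimal sketch of what could be sent to keep the model loaded, assuming the standard Ollama REST API; the base URL and model name are placeholders, and keep_alive: -1 means "keep the model in memory indefinitely":

```python
# Sketch only: preload a model and keep it resident, mirroring the curl
# approach from the Ollama FAQ. URL and model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral:latest",  # placeholder model name
        "keep_alive": -1,           # keep the model loaded indefinitely after this request
    },
    timeout=60,
)
resp.raise_for_status()
```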

Describe alternatives you've considered

The curl commands described in the Ollama FAQ.

Additional context

I think it is complete.
