
jekalmin / extended_openai_conversation

648 stars, 87 forks, 160 KB

Home Assistant custom component of conversation agent. It uses OpenAI to control your devices.

Python 100.00%
conversation custom-component homeassistant

extended_openai_conversation's People

Contributors

ajediiam, floris3, hgross, jekalmin, kamilk91, legovaer, mazon, olafrauch, pajeronda, rkistner, scags104, shiipou, taczirjak, teagan42


extended_openai_conversation's Issues

The weather template has the temperature hard-coded as "25"

Hello, and Happy New Year!

First of all, thank you for this incredibly useful project.

I noticed that the weather template has the temperature hard-coded as "25".

I tried adding the temperature to the template myself, but that did not work (please see below):

- spec:
    ...
        temperature:
          type: string
          description: The temperature at the location
    ...
  function:
    type: template
    value_template: The temperature in {{ location }} is {{ temperature }} {{ unit }}

Select media_player

Hi! Thank you for your work. Yours is the best custom integration of the year :)
Could you make it possible to select, in the UI of your integration, the media_player that plays the answer? It would be really cool if you could do it with a template option (e.g. if you are at home, answer on the speakers in the apartment (Google, Amazon, etc.), and if away, on the default device (the mobile phone where I'm talking to the agent)).

I use an M5 Echo, but it has a very bad speaker, and at home we mostly use our cell phones to talk to the agent. These are the speakers I would like to replace.

Some people have already done this for the original integration, so I think you could do it quite easily:
https://github.com/rhasspy/wyoming-satellite/issues/18#event-11319388726

If there is a possibility to monitor the response with Node-RED and customize it, that would be a very good feature too. If there is one, please let me know!

Thanks again very much for your work!

How to use rest api to get sensor data

In the initial prompt I told it to use the REST API, passing the bearer token, by adding:

Do not use MQTT.
Do not use Service sensor.get.
Use REST api to get sensor data. For authorization use bearer token with value: <MyBearerToken>

It calls a non-existent generic function:
Something went wrong: Service sensor.rest_command not found.

What I want to achieve is to get the temperature from a particular sensor; in your docs I see that it is possible to use REST.

Tried manually with Postman with the URL
{{base_url}}/states/sensor.temperatura_corridoio
and it gives me the correct JSON data:
{"entity_id":"sensor.temperatura_corridoio","state":"16.57","attributes":{"source":"1.1.14","unit_of_measurement":"°C","device_class":"temperature","friendly_name":"Temperatura corridoio"},"last_changed":"2024-01-08T10:12:10.938297+00:00","last_updated":"2024-01-08T10:12:10.938297+00:00","context":{"id":"01HKM8XW1T3HAC2WHACT81M85H","parent_id":null,"user_id":null}}

How can I set the bearer token?
How can I map a particular prompt to use a REST function?
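Outside Home Assistant, the endpoint tested with Postman above can also be checked with a short Python sketch. The base URL and token below are placeholders (use a long-lived access token created on your HA profile page); this only verifies the endpoint and the Authorization header, not the integration's own REST handling:

```python
import json
import urllib.request

# Placeholder values: substitute your Home Assistant URL and a
# long-lived access token created on your HA profile page.
BASE_URL = "http://homeassistant.local:8123/api"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"


def build_state_request(entity_id: str) -> urllib.request.Request:
    """Build an authenticated GET request for /api/states/<entity_id>."""
    return urllib.request.Request(
        f"{BASE_URL}/states/{entity_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )


def get_state(entity_id: str) -> dict:
    """Fetch and decode the state JSON for one entity."""
    with urllib.request.urlopen(build_state_request(entity_id)) as resp:
        return json.loads(resp.read())
```

Mapping a particular prompt to a REST call inside the integration is a separate question for the maintainer.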

Feature Request

Could the ability to create scripts be added? I asked it to create an automation that creates and reads a unique bedtime story. The automation was correctly created, but it attempted to create a script to execute as part of the process and was not able to. I asked to see the YAML for the script and it looked correct; it just was not able to create it.

You tried to access openai.Model, but this is no longer supported in openai>=1.0.0

I recently updated to the latest version, HA 2024.1. It may not appear to be related, but the bundled OpenAI library version may have introduced breaking changes in your integration. Here is the error that was output in the log.

Unexpected exception

Traceback (most recent call last):
  File "/config/custom_components/extended_openai_conversation/config_flow.py", line 130, in async_step_user
    await validate_input(self.hass, user_input)
  File "/config/custom_components/extended_openai_conversation/config_flow.py", line 104, in validate_input
    await validate_authentication(
  File "/config/custom_components/extended_openai_conversation/helpers.py", line 154, in validate_authentication
    await hass.async_add_executor_job(
  File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/lib/_old_api.py", line 39, in __call__
openai.lib._old_api.APIRemovedInV1: 

You tried to access openai.Model, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

Quota Reached ERROR 429 returned

Hi
First and foremost: thanks for developing this

I have a free-tier account and I am 100% sure (I checked) that my quota is not exceeded.
Indeed, in the dashboard on my account I see 0 requests, etc.

However, when prompting I immediately get a 429 Quota exceeded error.

No logs? Can't easily debug max token length exceeded error.

Where do I look to find logs?

I have enabled logging in my configuration.yaml file as documented, but I don't see any logs in HA's Settings -> System -> Logs.
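For reference, a minimal logger configuration that should surface the integration's debug output looks like this (standard Home Assistant logger config; the logger name follows the custom component's directory, and HA needs a restart after adding it):

```yaml
# configuration.yaml
logger:
  default: warning
  logs:
    custom_components.extended_openai_conversation: debug
```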

I am intermittently receiving the following error, and it's hard to debug without logs.

Sorry, I had a problem talking to OpenAI: This model's maximum context length is 4097 tokens. However, you requested 4202 tokens (3772 in the messages, 131 in the functions, and 299 in the completion). Please reduce the length of the messages, functions, or completion.

Amazing integration!

[Feature][Advice request] About conversation_id without possibility to define it in

Hi.

Since the conversation.process service has no option to provide the (very important) conversation_id, which is (probably?) in ULID format, I wanted to find a way to supply it via a HASS variable or some other mechanism.

My plan is to specify where the current conversation_id is stored, and define events to recreate this id (like a time pattern, or a message-count pattern, whatever).

I first tried a constant variable in Python, in your code @jekalmin.

I modified __init__.py, but to be honest it is a little bit confusing for me.

async def async_process(
    self, user_input: conversation.ConversationInput
) -> conversation.ConversationResult:
    raw_prompt = self.entry.options.get(CONF_PROMPT, DEFAULT_PROMPT)
    exposed_entities = self.get_exposed_entities()

    conversation_id = ulid.from_uuid(uuid.UUID("a061d69e-bde0-4c35-9448-198d3a58d904"))

    if user_input.conversation_id in self.history:
        conversation_id = user_input.conversation_id
        messages = self.history[conversation_id]

But it produces exception:

invalid agent ID for dictionary value @ data['agent_id']

Do you have any ideas where exactly I should start? Which place would be best to pass this id without creating new problems?

Local model via llama-cpp-python support

Since llama.cpp is now the best backend for open-source models, and llama-cpp-python (used as the Python backend for Python-powered GUIs) has built-in OpenAI API support, including function (tool) calling support:

https://llama-cpp-python.readthedocs.io/en/latest/server/#function-calling
https://github.com/abetlen/llama-cpp-python#function-calling

and there is Docker support for this tool, I wanted help with running all of these things together.

I have read #17, but that is mostly about LocalAI. LocalAI uses llama-cpp-python as a backend, so why not take a shortcut and use llama-cpp-python directly?

My docker-compose looks like this (with llama-cpp-python git-cloned; if you do not need GPU support, just use the commented #image line instead of build:):

version: '3.4'
services:
  llama-cpp-python:
    container_name: llama-cpp-python
    #image: ghcr.io/abetlen/llama-cpp-python:latest
    build: llama-cpp-python/docker/cuda_simple  # docker-compose build --no-cache
    environment:
      #- MODEL=/models/sha256:6ae28029995007a3ee8d0b8556d50f3b59b831074cf19c84de87acf51fb54054
      #- MODEL=/models/openchat_3.5-16k.Q4_K_M.gguf
      #- MODEL=/models/zephyr-7b-beta.Q5_K_M.gguf
      #- MODEL=/models/starling-lm-7b-alpha.Q5_K_M.gguf
      #- MODEL=/models/wizardcoder-python-13b-v1.0.Q4_K_M.gguf
      #- MODEL=/models/deepseek-coder-6.7b-instruct.Q5_K_M.gguf
      #- MODEL=/models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
      #- MODEL=/models/phi-2.Q5_K_M.gguf
      - MODEL=/models/functionary-7b-v1.Q4_K_S.gguf
      - USE_MLOCK=0
    ports:
      - 8008:8000
    volumes:
      - ./models:/models
    restart: on-failure:0
    cap_add:
      - SYS_RESOURCE
    deploy:
        resources:
          reservations:
            devices:
              - driver: nvidia
                device_ids: ['0']
                capabilities: [gpu]
    command: python3 -m llama_cpp.server --n_gpu_layers 33 --n_ctx 18192 --chat_format functionary

But I get answers like:

turn on "wyspa" light
Something went wrong: Service light.on not found.
where is paris?
Something went wrong: Service location.navigate not found.

Maybe something is wrong with my prompt?

Room/Area weird behavior

Hello and thank you for sharing this!

I've got things set up, but noticed that if I use the "1106" GPT-3.5 Turbo model, the assist pipeline just stalls, showing "..." and then eventually nothing at all. I removed "1106" and it's working now.

However, assist seems confused about where lights are located. If I ask it which lights are in xxx room, it tries to infer that information from the naming convention. Looking at the prompt, I see that the entity area isn't provided, so I tried adding it like this:

entity_id,name,state,area,aliases
{% for entity in exposed_entities -%}
{{ entity.entity_id }},{{ entity.name }},{{ entity.state }},{{ entity.area }},{{entity.aliases | join('/')}}
{% endfor -%}

The entity.area doesn't seem to be populated though. Can it be, or is there some other way to get room-based actions working properly?
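A possible workaround, assuming the exposed_entities objects really don't carry an area attribute: Home Assistant's built-in area_name() template function looks the area up in the registry by entity_id, so the prompt table could be built like this (a sketch, not a documented feature of the integration):

```jinja
entity_id,name,state,area,aliases
{% for entity in exposed_entities -%}
{{ entity.entity_id }},{{ entity.name }},{{ entity.state }},{{ area_name(entity.entity_id) }},{{ entity.aliases | join('/') }}
{% endfor -%}
```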

Feature Request - make functions optional

Hey!

Would it be possible to make functions optional? For example, some models I use do not support functions and some partially do; OpenAI, of course, does support them.

You mentioned a modification to the code that I could use here, but that would also break the OpenAI usage, which works really well, btw.

In the case where the model does not support functions, I would just like to use the prompt, with your integration acting as a gateway to my LocalAI instance: a plain conversation agent (or special prompt) without functions.

thanks

GPT Returning wrong current day/date

When asking about the weather or anything related to the current day (for example, "What is the weather like tomorrow?"), it returns information for the wrong date. When asked specifically what the current day is, it returns the wrong day.
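One common mitigation (a sketch, not a confirmed fix for this report) is to put the current date and time into the prompt template so the model never has to guess it; Home Assistant's now() template function is available there:

```jinja
The current time and date is {{ now().strftime("%A, %B %d, %Y at %H:%M") }}.
```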

Allow customizing the request (to support text-generation-webui, etc.)

It's relatively common, especially among the local-first nerds, to use a local API server for OpenAI requests. Text-generation-webui is probably the most popular open-source LLM server package, and supports the OpenAI API (mostly).

It supports additional parameters in the API payload, which allows for changing various settings like the Sampler (in my case, I want to use Mirostat).

Unfortunately, it doesn't support customizing the parameters with engine presets, as far as I can tell.

If extended_openai_conversation supported adding custom parameters, we could use it with various local LLM servers like text-generation-webui.

This could be supported by having a "View as YAML" option in the assistant configuration. Or, if there's a config file for the configured agent and it supported arbitrary parameters, a simple link in the configuration UI would do the trick for me, e.g. "To configure custom parameters, visit filename.yaml".

Also, if someone knows of a local LLM server that does support parameter configuration in a way that works with extended_openai_conversation, please mention it!

PS: The addition of the BaseURL field is awesome, it's so exciting to have a fully-local AI running in HA. It's especially exciting to see this project moving so fast!

Organization Header

Hi, I don't know how everyone uses API organizations, but theoretically I don't believe there would be any issues using different models as long as your subscription allows it. I don't know if there is a way to do this, but it would be very nice to be able to add the OpenAI-Organization header to the API calls:
https://platform.openai.com/docs/api-reference/organization-optional
The only settings I can see are the following:

An example of usage would be to use GPT-3.5 Turbo with a personal subscription but gpt-4-1106-preview with an organization subscription.

Convert the username

Hi! I am glad that you can attach the username in the chat. Unfortunately this is not possible for us, because our names contain Hungarian (European) characters (e.g. István Pintér).
If I turn it on, I get the following error:

Sorry, I had a problem talking to OpenAI: 'Pintér István' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'messages.1.name'

Is character encoding correct?
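The encoding itself is fine; the OpenAI name field simply must match ^[a-zA-Z0-9_-]{1,64}$, so accented names need to be transliterated to ASCII before being attached. A minimal sketch of what a fix could look like (sanitize_name is a hypothetical helper, not part of the integration):

```python
import re
import unicodedata


def sanitize_name(name: str) -> str:
    """Transliterate accents and strip characters the OpenAI `name`
    field rejects (it must match ^[a-zA-Z0-9_-]{1,64}$)."""
    # NFKD splits 'é' into 'e' + a combining accent; encoding to ASCII
    # with errors="ignore" then drops the accent mark.
    ascii_name = (
        unicodedata.normalize("NFKD", name)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    # Replace anything still outside the allowed set (e.g. spaces) with '_'.
    ascii_name = re.sub(r"[^a-zA-Z0-9_-]", "_", ascii_name)
    return ascii_name[:64] or "user"


print(sanitize_name("Pintér István"))  # -> Pinter_Istvan
```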

Thanks for your work! Happy New Year!

Calendar function uses deprecated service

The calendar function uses the deprecated service calendar.list_events. Use calendar.get_events instead, which supports multiple entities.



- spec:
    name: get_events
    description: Use this function to get list of calendar events.
    parameters:
      type: object
      properties:
        start_date_time:
          type: string
          description: The start date time in '%Y-%m-%dT%H:%M:%S%z' format
        end_date_time:
          type: string
          description: The end date time in '%Y-%m-%dT%H:%M:%S%z' format
      required:
      - start_date_time
      - end_date_time
  function:
    type: script
    sequence:
      - service: calendar.list_events
        data:
          start_date_time: "{{start_date_time}}"
          end_date_time: "{{end_date_time}}"
        target:
          entity_id: calendar.test
        response_variable: _function_result

Problem with intent hitting multiple entities

I'm running into the response "Unexpected error during intent recognition" when asking wider questions such as "turn off all lights", while a narrower request like "turn off all office lights" does work.

I've not done any wider troubleshooting yet, but I've started by removing any groups (light/cover) to ensure no custom groupings are causing the issue.

Has anyone run into the same issue and found a solution?

Here are the raw log details.

Logger: homeassistant.components.assist_pipeline.pipeline
Source: components/assist_pipeline/pipeline.py:938
Integration: Assist pipeline (documentation, issues)
First occurred: 16:50:11 (2 occurrences)
Last logged: 16:53:23

Unexpected error during intent recognition
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 938, in recognize_intent
    conversation_result = await conversation.async_converse(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/conversation/__init__.py", line 467, in async_converse
    result = await agent.async_process(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 157, in async_process
    response = await self.query(user_input, messages, exposed_entities, 0)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 280, in query
    message = await self.execute_function_call(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 319, in execute_function
    arguments = json.loads(message["function_call"]["arguments"])
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
               ^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 26 column 6 (char 482)
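The crash happens because the model sometimes emits function_call arguments that aren't strict JSON (here, a property name not enclosed in double quotes). A defensive parser could try a cleanup pass before giving up; a sketch (parse_function_arguments is hypothetical, not the integration's code, and it only repairs trailing commas — single-quoted keys like the one in this traceback would need a heavier repair step):

```python
import json
import re


def parse_function_arguments(raw: str) -> dict:
    """Parse model-generated function_call arguments defensively.

    Models occasionally emit almost-JSON, e.g. with trailing commas;
    try strict parsing first, then a naive cleanup pass.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Strip trailing commas before a closing brace/bracket and retry.
        cleaned = re.sub(r",\s*([}\]])", r"\1", raw)
        return json.loads(cleaned)


print(parse_function_arguments('{"brightness": 100,}'))  # -> {'brightness': 100}
```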

Image upload for GPT-4V? (Feature request)

GPT Plus members can use the upload function for media files, such as images.

https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images

As an example, it would be great to have an automation that takes a picture of your bar (or refrigerator) and provides recipe suggestions based on the ingredients, along with a custom prompt.

Here is an example: https://m.facebook.com/groups/HomeAssistant/permalink/3611503665787644/?mibextid=Nif5oz .

I've tried doing this via Python and pyscript with little success.

Thanks!

breaking change on beta HA 2024.1.0b0

Hi,

I'm running 0.0.10-beta2 of your integration, all working well. I have 2 entries: one is using OpenAI, and the other is using LocalAI with an LLM and no functions (using [] in the functions box). Upon updating to beta HA core 2024.1.0b0, the OpenAI entry is broken:


with this error message:

Logger: homeassistant.config_entries
Source: config_entries.py:406
First occurred: 3:27:22 PM (2 occurrences)
Last logged: 3:58:32 PM

Error setting up entry OpenAI Entities for extended_openai_conversation
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 406, in async_setup
    result = await component.async_setup_entry(hass, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 99, in async_setup_entry
    await validate_authentication(
  File "/config/custom_components/extended_openai_conversation/helpers.py", line 154, in validate_authentication
    await hass.async_add_executor_job(
  File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/lib/_old_api.py", line 39, in __call__
openai.lib._old_api.APIRemovedInV1: 

You tried to access openai.Model, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

thanks!

Streaming with chatgpt (new feature)

I am using GPT-4 in HA with the Raspiaudio Luxe speaker, and long answers take 20-30 s before being played.
Could it be possible to implement the streaming option of the OpenAI API? It is what's missing to compete with the other commercial voice assistants.

But then I guess it would also require a modification of the TTS plugin.

Randomly doesn't find (all) entities

Very often, an entity cannot be found for my living room:
Error: "Something went wrong: Unable to find entity ['light.livingroom_couch']"

If I ask the assistant to show me all known entities for this room, the list varies randomly.

Any ideas?

(Seen on version 1.0 and version 1.0.1-beta1; after the update, only on 1.0.1-beta1 with a messed-up downgrade.)

Traceback (most recent call last):
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 173, in async_process
    response = await self.query(user_input, messages, exposed_entities, 0)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 299, in query
    message = await self.execute_function_call(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 344, in execute_function
    result = await function_executor.execute(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/helpers.py", line 197, in execute
    return await self.execute_service(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/helpers.py", line 238, in execute_service
    self.validate_entity_ids(hass, entity_id or [], exposed_entities)
  File "/config/custom_components/extended_openai_conversation/helpers.py", line 165, in validate_entity_ids
    raise EntityNotFound(entity_ids)
custom_components.extended_openai_conversation.exceptions.EntityNotFound: Unable to find entity ['light.livingroom_couch']

Thinks lights are on/off only (but still changes color?)

When trying to change the color of a light, it says to me that it can't change the color, only turn it on and off.
And then it changes the color anyway.

When asking what color is the light, it says it doesn't have this information.

Could it be a bug with how light entities are presented to the AI?

Not Issue, Plex-Emby search

Hello, I love your work, working like a charm

Can you help me transform the Plex search function into an Emby search?

Feature request: add entities to track used tokens

Create some entities to track token usage, to keep an eye on how many tokens are used to query ChatGPT.

I saw my token usage spike during the first days, and after debugging I found out that this was because I had too many (and obsolete) entities exposed. This can get quite big.

Do functions count towards tokens? Or can I add as many as I want?

Thanks for this great integration!!!!
Wouter

Context too Large

Hello,
Thank you for creating this integration. It has been a very awesome integration into my smart home.

However, I am experiencing an issue where the request to openai exceeds my usage limits per request of 10,000 tokens. I know that my limits will go up the longer I use OpenAI's api, but I think it would be a nice feature to either:

  1. Be able to limit the size of the Messages object sent in a request (currently, mine is almost 4000 tokens in size, which locks me out after a few requests). Perhaps by limiting the number of messages permitted to exist in the history.

  2. Have a way to clear that messages object manually when this occurs.

I am not sure how Home Assistant tracks the conversation_id that determines whether the message history is inserted into the messages object, but my experience is as follows:

Closing the voice assistant on a mobile device does not change the conversation ID in the eyes of your Python program; however, it does remove the message history from the GUI. Yet when I make a new request, I see that the messages object is still almost 4000 tokens.
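One way the size cap suggested in (1) could look, assuming a plain list of chat messages (truncate_history and its max_messages knob are illustrative, not an existing option of the integration):

```python
def truncate_history(messages: list[dict], max_messages: int = 10) -> list[dict]:
    """Keep the system prompt(s) plus only the most recent messages.

    A sketch of one way to cap request size; `max_messages` is an
    illustrative knob, not an existing configuration option.
    """
    if len(messages) <= max_messages:
        return messages
    # Always preserve system messages; trim the oldest of the rest.
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    keep = max(max_messages - len(system), 0)
    return system + (rest[-keep:] if keep else [])
```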

Thanks again for leading the development on this integration.

[Question] Would it be possible to integrate an option to use OpenAI's new Assistants and Threads API?

An option to integrate OpenAI's new Assistants and Threads API would be handy as a way to get it to remember past conversations for folks using the wake-word function of Assist in Home Assistant. As of now, when using wake words to communicate, it does not retain the past conversation like the Assist chat does (well, the current conversation, I should say, with Assist chat). Maybe have a user-adjustable setting for how long to retain threads (day, week, etc.). This would probably require quite an overhaul, so it's more of a general question at the moment.

Integration not available in HACS

The instructions state that the integration can be installed via HACS, but it is not available by default. Is something broken, or shall I add an instruction to add the repo manually?

Error adding integration

Hi, I am able to add the integration in HACS, but when trying to add the integration in Settings I get the following error:

"Config flow could not be loaded: {"message":"Invalid handler specified"}"

Can anyone help me?

Thanks

Use generic OpenAI API for local LLM (ooba text-gen, LocalAI, llama-cpp-python, vllm, etc.)

Hi !
I use a standalone server running ooba's textgen with the generic OpenAI API and a token, plus local faster-whisper. A few OpenAI apps work perfectly with the API, so now I'm trying to connect your integration. What am I doing wrong?
This error originated from a custom integration. Tested with/without auth token, on versions 0.11, 1.0-beta, 1.0, and 1.0.1-beta.

Logger: custom_components.extended_openai_conversation
Source: custom_components/extended_openai_conversation/__init__.py:189
Integration: Extended OpenAI Conversation (documentation, issues)
First occurred: 4:35:02 PM (7 occurrences)
Last logged: 6:33:29 PM

Internal Server Error


FYI: the best implementation of an interface to the OpenAI API I've seen in a chatbot (with Mongo as storage, Whisper, and pictures): https://github.com/father-bot/chatgpt_telegram_bot/blob/main/bot/openai_utils.py

UPD: This implementation talks/speaks via the API, but can't control devices / call functions:
https://github.com/drndos/hass-openai-custom-conversation

Error in the extension in HA 2024.1

This is the error I am getting


Error setting up entry Asas for extended_openai_conversation
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 406, in async_setup
    result = await component.async_setup_entry(hass, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 99, in async_setup_entry
    await validate_authentication(
  File "/config/custom_components/extended_openai_conversation/helpers.py", line 154, in validate_authentication
    await hass.async_add_executor_job(
  File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/lib/_old_api.py", line 39, in __call__
openai.lib._old_api.APIRemovedInV1: 

You tried to access openai.Model, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

Thanks

Reliability of TTS output.

Using the same prompt, I can get output to TTS only about 1/4 of the time. Should I create a function for TTS functionality to get more reliable results? The particular prompt I am trying is "Using tts.cloud_say, send my work events to media_player.basement_speaker."
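A function spec along these lines might make the TTS call deterministic instead of relying on the model to compose the service call itself (a sketch, not a documented recipe; the entity id is taken from the prompt above, and tts.cloud_say assumes Nabu Casa cloud TTS):

```yaml
- spec:
    name: speak_message
    description: Speak a message on the basement speaker.
    parameters:
      type: object
      properties:
        message:
          type: string
          description: The message to speak
      required:
        - message
  function:
    type: script
    sequence:
      - service: tts.cloud_say
        data:
          entity_id: media_player.basement_speaker
          message: "{{ message }}"
```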

Add Hungarian translation (attached)

hu.json
Hi! I have created a translation in Hungarian for the configuration interface.

{
  "config": {
    "error": {
      "cannot_connect": "Sikertelen kapcsolódás",
      "invalid_auth": "Érvénytelen azonosítás",
      "unknown": "Váratlan hiba"
    },
    "step": {
      "user": {
        "data": {
          "name": "Név",
          "api_key": "API Key (kulcs)",
          "base_url": "Base Url"
        }
      }
    }
  },
  "options": {
    "step": {
      "init": {
        "data": {
          "max_tokens": "A válaszban visszaküldendő maximális tokenek",
          "model": "Teljesítési Model",
          "prompt": "Prompt sablon",
          "temperature": "Mérséklés",
          "top_p": "Top P",
          "max_function_calls_per_conversation": "Maximális funkcióhívás beszélgetésenként",
          "functions": "Funkciók",
          "attach_username": "Felhasználónév csatolása az üzenethez"
        }
      }
    }
  }
}

Tested, works fine.

Todo List using wrong service

When you call "Add Milk to Shopping List"
I get this error:
This error originated from a custom integration.

Logger: custom_components.extended_openai_conversation
Source: custom_components/extended_openai_conversation/__init__.py:187
Integration: Extended OpenAI Conversation (documentation, issues)
First occurred: 4:10:49 PM (2 occurrences)
Last logged: 4:11:52 PM

Service todo.add_to_list not found.
Traceback (most recent call last):
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 187, in async_process
    response = await self.query(user_input, messages, exposed_entities, 0)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 313, in query
    message = await self.execute_function_call(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 358, in execute_function
    result = await function_executor.execute(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/helpers.py", line 204, in execute
    return await self.execute_service(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/helpers.py", line 242, in execute_service
    raise ServiceNotFound(domain, service)
homeassistant.exceptions.ServiceNotFound: Service todo.add_to_list not found.

I think the issue is that it is trying to call todo.add_to_list, which does not exist in 2024.1; it probably should be todo.add_item.
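Until the built-in function is updated, a corrected spec using todo.add_item could look like this (a sketch; todo.shopping_list is a placeholder entity id):

```yaml
- spec:
    name: add_item_to_list
    description: Add an item to the to-do list.
    parameters:
      type: object
      properties:
        item:
          type: string
          description: The item to add
      required:
        - item
  function:
    type: script
    sequence:
      - service: todo.add_item
        data:
          item: "{{ item }}"
        target:
          entity_id: todo.shopping_list
```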

Response limited to 255 characters

Responses are cut off to 255 characters in the Assist chat box.


The extension parameter "maximum tokens to return in response" is set to 150. Is there a way to get the full response within 150 tokens?

You tried to access openai.Model, but this is no longer supported in openai>=1.0.0

Hello,
Fresh installation yesterday; while activating my Voice Assistant in Settings => Integrations, I got this log:


```
2024-01-02 09:52:06.007 ERROR (MainThread) [homeassistant.config_entries] Error setting up entry Jass for extended_openai_conversation
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 406, in async_setup
    result = await component.async_setup_entry(hass, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/extended_openai_conversation/__init__.py", line 99, in async_setup_entry
    await validate_authentication(
  File "/config/custom_components/extended_openai_conversation/helpers.py", line 154, in validate_authentication
    await hass.async_add_executor_job(
  File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/lib/_old_api.py", line 39, in __call__
openai.lib._old_api.APIRemovedInV1: 
You tried to access openai.Model, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```

Trouble Incorporating Into Voice Assistant Pipeline

My ultimate goal here is to use this in combination with a voice assistant I interact with over a VoIP device.

As of now, however, whenever I pick up my VoIP phone to speak to the assistant, I just get a voice notification saying that the request to OpenAI is bad (400). There is additional messaging that plays afterwards but it consists of mainly special characters obviously meant to be viewed on a screen. I can attempt to recover more of the message if necessary, but I can definitely say that a key word in the error message is “VoIP Phone”.

Any way to get regular response as well as actions?

First, thanks; this plug-in is working great. Two questions:

  1. Is there any way to still get regular questions answered? If I ask it to turn off the lights it will work, but if I ask like 'Who is Mario?' the plug-in's AI just repeats back 'Who is Mario'. If I ask the same question through the regular API without the plugin, I get a detailed answer about Mario. It's not the end of the world, but it's one of the nice things about having AI at your fingertips is being able to ask anything!

  2. When I ask it to do something it can do, like turn off the lights, it will turn them off but respond with something like "To turn off the laundry room lights, you can use the "light.laundry_lights" entity and issue the "off" command." even though it already turned them off. Is there any way to have it just say something like 'Thanks, turned the lights off'? Reading that long response through the Assist pipeline takes a while. Thanks!

Feature Request: Add option to send timestamp with message while processing

Many use cases for GPT as an assistant are limited because the model does not know what time it is.

Currently, a timestamp is added to the prompt to give GPT the time.

link

The link above shows some great insights for improving GPT.

Pros:

  • By adding a timestamp in front of each user message sent to OpenAI, GPT can have a better understanding of the real world.

  • The user can make GPT talk first by sending a blank message via conversation.process with a timestamp added.

Ideally, the timestamp would be filtered out for the user in Assist and only shown in the log for debugging purposes.

Thank you for the great development.
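As a sketch of the second point, an automation can already call `conversation.process` with a timestamp prepended to the text. The `agent_id` below is a placeholder for your configured conversation agent:

```yaml
# Sketch: trigger the agent proactively with a timestamped message.
automation:
  - alias: "Proactive greeting via conversation agent"
    trigger:
      - platform: time
        at: "08:00:00"
    action:
      - service: conversation.process
        data:
          agent_id: conversation.extended_openai  # placeholder agent id
          text: "[{{ now().isoformat() }}] "      # timestamp plus an otherwise blank message
```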

Unexpected Error

Hi, every time I try to set up the integration, I enter a name and the API key and get the "Unexpected Error" message (see screenshot). If I bypass authentication, the entry is created but then doesn't work (a message about being unable to parse the intent).

Any ideas on what idiotic mistake I am making :-)

Cheers
Mark.

Integrating ytube_music_player

I'm trying to add some specs for full ytube_music_player integration.

Currently I'm trying to have it play from a list of playlist. I've added this in my prompt:

YouTube Music playlists:
```csv
playlist_id,playlist_name
*playlistid*,My Mix 1
*playlistid*,My Mix 2
*playlistid*,My Mix 3
*playlistid*,My Mix 4
*playlistid*,My Mix 5
*playlistid*,My Mix 6
*playlistid*,My Mix 7
*playlistid*,Chill Mix 1
*playlistid*,Chill Mix 2
*playlistid*,Chill Mix 3
*playlistid*,Energy Mix 1
*playlistid*,Energy Mix 2
*playlistid*,Energy Mix 3
```

And then this is what I'm trying for the spec:

```yaml
- spec:
    name: play_music_playlist
    description: Use this function to play music from a playlist.
    parameters:
      type: object
      properties:
        playlist_id:
          type: string
          description: The ID of the playlist
      required:
      - playlist_id
  function:
    type: composite
    sequence:
    - type: script
      sequence:
      - service: media_player.play_media
        data:
          entity_id: media_player.youtube_music
          media_content_id: {{playlist_id}}
          media_content_type: playlist
          announce: true
```

However I'm getting

Sorry, I had a problem talking to OpenAI: [] is too short - 'functions'

I'm planning to add a function to each ytube_music_player function to have full control of it. Including picking a media player. Is there a way in the spec config to choose an already exposed media player?
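A hedged guess at a fix (not verified against this integration): the unquoted `{{playlist_id}}` is not valid YAML, and the extra `composite` wrapper may be unnecessary, since the README's examples use a plain `script` function type:

```yaml
- spec:
    name: play_music_playlist
    description: Use this function to play music from a playlist.
    parameters:
      type: object
      properties:
        playlist_id:
          type: string
          description: The ID of the playlist
      required:
      - playlist_id
  function:
    type: script
    sequence:
    - service: media_player.play_media
      data:
        entity_id: media_player.youtube_music
        media_content_id: "{{ playlist_id }}"  # templates must be quoted in YAML
        media_content_type: playlist
```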

entity_id should not be required for `execute_service`

Here, an error is thrown if no entity_id is given. However, most services can be called with only area_id; in fact, having area_id is pointless if entity_id is required.
I tweaked the area_id parameter like this, and ChatGPT tries to use it correctly but hits the exception:

```yaml
area_id:
  type: string
  description: The id retrieved from areas.
    You can specify only area_id without entity_id
    to act on all entities in that area
```
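For context, Home Assistant itself accepts area-only service targets; for example, this call is valid with no entity_id at all (the area id is hypothetical):

```yaml
service: light.turn_off
target:
  area_id: kitchen  # hypothetical area id; no entity_id needed
```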

Prompt is returned as the response

When trying to use this with LocalAI, it just spits the prompt I sent back at me. Please see the example below:

image

localai-api-1 | 4:05PM DBG Request received: localai-api-1 | 4:05PM DBG Configuration read: &{PredictionOptions:{Model:luna-ai-llama2-uncensored.Q8_0.gguf Language: N:0 TopP:1 TopK:80 Temperature:0.5 Maxtokens:150 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:thebloke__luna-ai-llama2-uncensored-gguf__luna-ai-llama2-uncensored.q8_0.gguf F16:true Threads:10 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat:chat ChatMessage: Completion:completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString:auto functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:22 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:1024 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false DownloadFiles:[] Description: Usage:} localai-api-1 | 4:05PM DBG Response needs to process functions localai-api-1 | 4:05PM DBG Parameters: &{PredictionOptions:{Model:luna-ai-llama2-uncensored.Q8_0.gguf Language: N:0 TopP:1 TopK:80 Temperature:0.5 Maxtokens:150 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 
Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:thebloke__luna-ai-llama2-uncensored-gguf__luna-ai-llama2-uncensored.q8_0.gguf F16:true Threads:10 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat:chat ChatMessage: Completion:completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString:auto functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:22 MMap:false MMlock:false LowVRAM:false Grammar:space ::= " "? localai-api-1 | string ::= "\"" ( localai-api-1 | [^"\\] | localai-api-1 | "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]) localai-api-1 | )* "\"" space localai-api-1 | root-0-arguments-list-item-service-data ::= "{" space "\"entity_id\"" space ":" space string "}" space localai-api-1 | root-0-arguments-list-item ::= "{" space "\"domain\"" space ":" space string "," space "\"service\"" space ":" space string "," space "\"service_data\"" space ":" space root-0-arguments-list-item-service-data "}" space localai-api-1 | root-0-arguments-list ::= "[" space (root-0-arguments-list-item ("," space root-0-arguments-list-item)*)? 
"]" space localai-api-1 | root-0 ::= "{" space "\"arguments\"" space ":" space root-0-arguments "," space "\"function\"" space ":" space root-0-function "}" space localai-api-1 | root-1-function ::= "\"answer\"" localai-api-1 | root-0-arguments ::= "{" space "\"list\"" space ":" space root-0-arguments-list "}" space localai-api-1 | root-0-function ::= "\"execute_services\"" localai-api-1 | root-1-arguments ::= "{" space "\"message\"" space ":" space string "}" space localai-api-1 | root-1 ::= "{" space "\"arguments\"" space ":" space root-1-arguments "," space "\"function\"" space ":" space root-1-function "}" space localai-api-1 | root ::= root-0 | root-1 StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:1024 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false DownloadFiles:[] Description: Usage:} localai-api-1 | 4:05PM DBG Prompt (before templating): I want you to act as smart home manager of Home Assistant. localai-api-1 | I will provide information of smart home along with a question, you will truthfully make correction or answer using information provided in one sentence in everyday language. 
localai-api-1 | localai-api-1 | Current Time: 2024-01-09 16:05:39.239165+00:00 localai-api-1 | localai-api-1 | Available Devices: localai-api-1 | ```csv localai-api-1 | entity_id,name,state,aliases localai-api-1 | scene.office_standard,Office Standard,2024-01-09T08:59:36.686917+00:00, localai-api-1 | scene.office_game,Office Game,2024-01-08T20:36:39.723638+00:00, localai-api-1 | light.bedroom_lamp,Bedroom Lamp,on, localai-api-1 | light.bedside_lamps,Bedside Lamps,on, localai-api-1 | light.shapes_0c36,Shapes 0C36,on, localai-api-1 | light.lines_01a4,Lines 01A4,on, localai-api-1 | climate.hallway,Hallway,heat, localai-api-1 | light.livingroom_corner_2,LivingRoom corner,unavailable, localai-api-1 | light.kitchen_right_up,Kitchen Right UP,on, localai-api-1 | light.desk_downlight,Desk Downlight,on, localai-api-1 | light.controller_rgb_2304a5,Kitchen Left Up,on, localai-api-1 | light.office_light_controller,Office Roof Lights,on,Office Main Light localai-api-1 | light.tv_cabinet,TV Cabinet,off, localai-api-1 | light.kitdownright,Kitchen Right Downlight,on, localai-api-1 | light.kitchen_left_downlight,Kitchen Left Downlight,on, localai-api-1 | light.backwall,BackWall,on, localai-api-1 | light.4,4,on, localai-api-1 | light.controller_rgb_fd2bd8,Bed Downlight,on, localai-api-1 | switch.fps_smasher,Computer,off, localai-api-1 | light.master_bedroom_table_lamp_bathroom,Master Bedroom Table Lamp Bathroom,on, localai-api-1 | light.master_bedroom_table_lamp,Master Bedroom Table Lamp Window,on, localai-api-1 | light.0x4c5bb3fffefcd9d6,En suit shower,off, localai-api-1 | light.ensuit_down,Master Bathroom,off, localai-api-1 | light.hallway,Hallway Downlights,off, localai-api-1 | light.ensuit_downlights,EnSuit Downlights,off, localai-api-1 | switch.tv_power,Air freshener,off, localai-api-1 | light.livingroom_floorlamp,Livingroom Floor Lamp,off, localai-api-1 | ``` localai-api-1 | localai-api-1 | The current state of devices is provided in available devices. 
localai-api-1 | Use execute_services function only for requested action, not for current states. localai-api-1 | Do not execute service without user's confirmation. localai-api-1 | Do not restate or appreciate what user says, rather make a quick inquiry. localai-api-1 | Turn Computer On localai-api-1 | 4:05PM DBG Prompt (after templating): I want you to act as smart home manager of Home Assistant. localai-api-1 | I will provide information of smart home along with a question, you will truthfully make correction or answer using information provided in one sentence in everyday language. localai-api-1 | localai-api-1 | Current Time: 2024-01-09 16:05:39.239165+00:00 localai-api-1 | localai-api-1 | Available Devices: localai-api-1 | ```csv localai-api-1 | entity_id,name,state,aliases localai-api-1 | scene.office_standard,Office Standard,2024-01-09T08:59:36.686917+00:00, localai-api-1 | scene.office_game,Office Game,2024-01-08T20:36:39.723638+00:00, localai-api-1 | light.bedroom_lamp,Bedroom Lamp,on, localai-api-1 | light.bedside_lamps,Bedside Lamps,on, localai-api-1 | light.shapes_0c36,Shapes 0C36,on, localai-api-1 | light.lines_01a4,Lines 01A4,on, localai-api-1 | climate.hallway,Hallway,heat, localai-api-1 | light.livingroom_corner_2,LivingRoom corner,unavailable, localai-api-1 | light.kitchen_right_up,Kitchen Right UP,on, localai-api-1 | light.desk_downlight,Desk Downlight,on, localai-api-1 | light.controller_rgb_2304a5,Kitchen Left Up,on, localai-api-1 | light.office_light_controller,Office Roof Lights,on,Office Main Light localai-api-1 | light.tv_cabinet,TV Cabinet,off, localai-api-1 | light.kitdownright,Kitchen Right Downlight,on, localai-api-1 | light.kitchen_left_downlight,Kitchen Left Downlight,on, localai-api-1 | light.backwall,BackWall,on, localai-api-1 | light.4,4,on, localai-api-1 | light.controller_rgb_fd2bd8,Bed Downlight,on, localai-api-1 | switch.fps_smasher,Computer,off, localai-api-1 | light.master_bedroom_table_lamp_bathroom,Master Bedroom Table Lamp 
Bathroom,on, localai-api-1 | light.master_bedroom_table_lamp,Master Bedroom Table Lamp Window,on, localai-api-1 | light.0x4c5bb3fffefcd9d6,En suit shower,off, localai-api-1 | light.ensuit_down,Master Bathroom,off, localai-api-1 | light.hallway,Hallway Downlights,off, localai-api-1 | light.ensuit_downlights,EnSuit Downlights,off, localai-api-1 | switch.tv_power,Air freshener,off, localai-api-1 | light.livingroom_floorlamp,Livingroom Floor Lamp,off, localai-api-1 | ``` localai-api-1 | localai-api-1 | The current state of devices is provided in available devices. localai-api-1 | Use execute_services function only for requested action, not for current states. localai-api-1 | Do not execute service without user's confirmation. localai-api-1 | Do not restate or appreciate what user says, rather make a quick inquiry. localai-api-1 | Turn Computer On localai-api-1 | 4:05PM DBG Grammar: space ::= " "? localai-api-1 | string ::= "\"" ( localai-api-1 | [^"\\] | localai-api-1 | "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]) localai-api-1 | )* "\"" space localai-api-1 | root-0-arguments-list-item-service-data ::= "{" space "\"entity_id\"" space ":" space string "}" space localai-api-1 | root-0-arguments-list-item ::= "{" space "\"domain\"" space ":" space string "," space "\"service\"" space ":" space string "," space "\"service_data\"" space ":" space root-0-arguments-list-item-service-data "}" space localai-api-1 | root-0-arguments-list ::= "[" space (root-0-arguments-list-item ("," space root-0-arguments-list-item)*)? 
"]" space localai-api-1 | root-0 ::= "{" space "\"arguments\"" space ":" space root-0-arguments "," space "\"function\"" space ":" space root-0-function "}" space localai-api-1 | root-1-function ::= "\"answer\"" localai-api-1 | root-0-arguments ::= "{" space "\"list\"" space ":" space root-0-arguments-list "}" space localai-api-1 | root-0-function ::= "\"execute_services\"" localai-api-1 | root-1-arguments ::= "{" space "\"message\"" space ":" space string "}" space localai-api-1 | root-1 ::= "{" space "\"arguments\"" space ":" space root-1-arguments "," space "\"function\"" space ":" space root-1-function "}" space localai-api-1 | root ::= root-0 | root-1 localai-api-1 | 4:05PM DBG Model already loaded in memory: luna-ai-llama2-uncensored.Q8_0.gguf localai-api-1 | 4:05PM DBG Model 'luna-ai-llama2-uncensored.Q8_0.gguf' already loaded localai-api-1 | 4:05PM DBG GRPC(luna-ai-llama2-uncensored.Q8_0.gguf-127.0.0.1:40445): stderr slot 0 is processing [task id: 2] localai-api-1 | 4:05PM DBG GRPC(luna-ai-llama2-uncensored.Q8_0.gguf-127.0.0.1:40445): stderr slot 0 : kv cache rm - [0, end) localai-api-1 | 4:06PM DBG GRPC(luna-ai-llama2-uncensored.Q8_0.gguf-127.0.0.1:40445): stderr localai-api-1 | 4:06PM DBG GRPC(luna-ai-llama2-uncensored.Q8_0.gguf-127.0.0.1:40445): stderr print_timings: prompt eval time = 5866.82 ms / 701 tokens ( 8.37 ms per token, 119.49 tokens per second) localai-api-1 | 4:06PM DBG GRPC(luna-ai-llama2-uncensored.Q8_0.gguf-127.0.0.1:40445): stderr print_timings: eval time = 50921.90 ms / 25 runs ( 2036.88 ms per token, 0.49 tokens per second) localai-api-1 | 4:06PM DBG GRPC(luna-ai-llama2-uncensored.Q8_0.gguf-127.0.0.1:40445): stderr print_timings: total time = 56788.71 ms localai-api-1 | 4:06PM DBG Function return: { "arguments": { "message": "Turn Computer On" } , "function": "answer"} map[arguments:map[message:Turn Computer On] function:answer] localai-api-1 | 4:06PM DBG nothing to do, computing a reply localai-api-1 | 4:06PM DBG Reply received from 
LLM: Turn Computer On localai-api-1 | 4:06PM DBG Reply received from LLM(finetuned): Turn Computer On localai-api-1 | 4:06PM DBG Response: {"created":1704814397,"object":"chat.completion","id":"56301b4b-9e94-4699-88a7-44e619222601","model":"thebloke__luna-ai-llama2-uncensored-gguf__luna-ai-llama2-uncensored.q8_0.gguf","choices":[{"index":0,"message":{"role":"assistant","content":"Turn Computer On"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}} localai-api-1 | [192.168.4.44]:35842 200 - POST /v1/chat/completions localai-api-1 | 4:06PM DBG GRPC(luna-ai-llama2-uncensored.Q8_0.gguf-127.0.0.1:40445): stderr slot 0 released (727 tokens in cache)

I'm pretty sure my LocalAI is working, as when I ask it how it is, it replies as I'd expect.
image

Something went wrong: Service cover.open not found.

Here is how the conversation goes. I can't seem to control garage doors; lights and locks work fine:
Assistant:How can I assist?
close garage doors
Assistant:Garage doors are closed.
open left garage door
Assistant:Something went wrong: Service cover.open not found.
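For what it's worth, the Home Assistant services are `cover.open_cover` and `cover.close_cover`, not `cover.open`. A hint in the prompt or a dedicated function spec might steer the model; this is only a sketch, with the entity handling illustrative:

```yaml
- spec:
    name: open_cover
    description: Open a cover such as a garage door. Use cover.open_cover, not cover.open.
    parameters:
      type: object
      properties:
        entity_id:
          type: string
          description: The cover entity to open
      required:
      - entity_id
  function:
    type: script
    sequence:
    - service: cover.open_cover  # the correct service name
      target:
        entity_id: "{{ entity_id }}"
```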

Voice Assistant Pipeline issue

Preface this by saying it works great on screen for voice and text.

Once this is plugged into a remote device as the pipeline for ESP/satellite voice assistants, it does not allow the device to pick up the wake word. Once I swap to a different Assist pipeline on the device, it begins to work. This has been tested on a custom ESP32 setup, an M5Stack Echo, and an ESP32-S3-Box, all previously working voice assistants with HA.

Unfortunately there are no error logs to provide. It simply doesn't work...
Thanks in advance for looking into this
