Comments (20)

jekalmin avatar jekalmin commented on June 6, 2024 1

Thanks for reporting an issue.

Currently, if a known error occurs, the error message is returned to Assist without being logged.
If an unknown error occurs, the error message is logged and the response is "Unexpected error during intent recognition."

In the next release (0.0.9), I will try to log as much as possible. (#41)

I will also add logging of OpenAI API calls in the next release.

from extended_openai_conversation.

that1guy avatar that1guy commented on June 6, 2024 1

Thanks for reporting an issue.

Currently, if a known error occurs, the error message is returned to Assist without being logged. If an unknown error occurs, the error message is logged and the response is "Unexpected error during intent recognition."

In the next release (0.0.9), I will try to log as much as possible. (#41)

I will also add logging of OpenAI API calls in the next release.

Excellent. Great news, @jekalmin!

jekalmin avatar jekalmin commented on June 6, 2024 1

Fixed in 0.0.9-beta3

stefanbode avatar stefanbode commented on June 6, 2024 1

Thanks. I know about the costs. I reduced from 190 to 140 entities. In the future it might help to pass only the entities in the relevant domain instead of all of them, but that would most likely require two calls: one to understand what to filter, and a second call with only the relevant objects. Definitely an interesting space to work on. Anyhow, the array is already an improvement.

jekalmin avatar jekalmin commented on June 6, 2024 1

@stefanbode
Just released the area capability in 1.0.1.
It might help reduce token consumption when you have multiple devices.

See https://github.com/jekalmin/extended_openai_conversation/tree/main/examples/function/area

 avatar commented on June 6, 2024 1

Opening this back up for further instruction. I've enabled logging per the documentation, but I still can't see logs when I go to Settings -> System -> Logs. I do, however, see logs when I tail config/home-assistant.log. How do I get this custom component's info-level logs pretty-printed in the Home Assistant UI?

My configuration.yaml:

# Log the interactions between Home Assistant and custom chatGPT integration
logger:
  logs:
    custom_components.extended_openai_conversation: info
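For reference, the `logger` integration also accepts a `default` level, which keeps the rest of the log quiet while this component logs verbosely. A minimal sketch using standard Home Assistant `logger` options (not behavior specific to this integration):

```yaml
# Quiet everything else, but record every line this integration emits.
logger:
  default: warning
  logs:
    custom_components.extended_openai_conversation: debug
```

Note that the Settings -> System -> Logs view primarily surfaces warnings and errors; info/debug lines generally still have to be read from `home-assistant.log`.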

 avatar commented on June 6, 2024 1

@jekalmin when I click "Load Full Logs" I don't see any extra helpful data, and the sheer volume of logs loaded can crash the browser. Ideally we could enable debug logs and use the filter (top right-hand corner) to see only extended openai conversation logs.

My goal is to see the JSON payload extended openai is POSTing to my LLM server.

Hope this feedback is helpful. Appreciate you!

saya6k avatar saya6k commented on June 6, 2024

@that1guy Have you checked /config/home-assistant.log?

that1guy avatar that1guy commented on June 6, 2024

@that1guy Have you checked /config/home-assistant.log?

Yes, checked. Nothing of value in here.

jekalmin avatar jekalmin commented on June 6, 2024

Released in 0.0.9.

stefanbode avatar stefanbode commented on June 6, 2024

Am I right that this is the same issue? I'm asking for lights that are ON. It looks like I have too many lights. I'm using the turbo-1106 version and the latest 0.0.10 from yesterday. I assume about 30 lights get reported. Any help welcome.

Something went wrong: failed to parse arguments {"list":[{"domain":"light","service":"turn_off","service_data":{"entity_id":"light.vorratskellerlicht"}},{"domain":"light","service":"turn_off","service_data":{"entity_id":"light.ankleidezimmer_licht"}},{"domain":"light","service":"turn_off","service_data":{"entity_id":"light.wohnzimmer_licht"}},{"domain":"light","service":"turn_off","service_data":{"entity_id":"light.schlafzimmer_dimmer"}},{"domain":"light","service":"turn_off","service_data":{"entity_id":"light.kueche_licht_herd"}},{"domain":"light","service":"turn_off","service_data":{"entity_id":"light. Increase maximum token to avoid the issue.

stefanbode avatar stefanbode commented on June 6, 2024

I changed to the 3.5-16k model and now it runs fine. Anyhow, I want to start a discussion about whether we can change the JSON to make it more token-efficient. Repeating "light", "turn_off" and so on just inflates the JSON. If we organize the domain on level one and the service on level two, and then just iterate over the items, the payload is much smaller. This would require changes to the loops running through the items; it would probably be most effective not to change the existing format but just to convert the JSON. Is this possible with ChatGPT, or do we have to stay with the current formatting and accept the token counts?
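The grouping idea above can be sketched in Python (the field names mirror the `execute_services` payload in this thread; the grouped shape is a hypothetical alternative, not something the integration supports): stating the domain and service once removes the repeated strings, and a small expansion step restores the original call list.

```python
import json

# Flat form: every call repeats the domain and service strings.
flat = [
    {"domain": "light", "service": "turn_off",
     "service_data": {"entity_id": "light.wohnzimmer_licht"}},
    {"domain": "light", "service": "turn_off",
     "service_data": {"entity_id": "light.schlafzimmer_dimmer"}},
    {"domain": "light", "service": "turn_off",
     "service_data": {"entity_id": "light.kueche_licht_herd"}},
]

# Grouped form: domain on level one, service on level two, then the entities.
grouped = {"light": {"turn_off": [
    "light.wohnzimmer_licht",
    "light.schlafzimmer_dimmer",
    "light.kueche_licht_herd",
]}}

def expand(grouped):
    """Convert the grouped form back into individual service calls."""
    return [
        {"domain": d, "service": s, "service_data": {"entity_id": e}}
        for d, services in grouped.items()
        for s, entities in services.items()
        for e in entities
    ]

# The grouped payload serializes to far fewer characters (a rough proxy
# for tokens), while losing no information.
print(len(json.dumps(flat)), len(json.dumps(grouped)))
```

The serialized length is only a rough proxy for token count, but the direction of the saving is the same.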

jekalmin avatar jekalmin commented on June 6, 2024

Am I right that this is the same issue?

Yes. Since that issue is fixed, the failure now surfaces as the error message you quoted.

I changed to the 3.5-16k model and now it runs fine

I'm thinking of changing the default model from 3.5 to 3.5-16k.

Anyhow, I want to start a discussion about whether we can change the JSON to make it more token-efficient

Can you try changing entity_id of service_data from a string to an array, like below?

Functions

- spec:
    name: execute_services
    description: Execute service of devices in Home Assistant.
    parameters:
      type: object
      properties:
        list:
          type: array
          items:
            type: object
            properties:
              domain:
                type: string
                description: The domain of the service.
              service:
                type: string
                description: The service to be called.
              service_data:
                type: object
                description: The service data object to indicate what to control.
                properties:
                  entity_id:
                    type: array
                    items:
                      type: string
                      description: The entity_id retrieved from available devices. It must start with domain, followed by dot character.
                required:
                - entity_id
            required:
            - domain
            - service
            - service_data
  function:
    type: native
    name: execute_service

Response

  {
    "finish_reason": "function_call",
    "index": 0,
    "message": {
      "content": null,
      "function_call": {
        "arguments": "{\"list\":[{\"domain\":\"switch\",\"service\":\"turn_on\",\"service_data\":{\"entity_id\":[\"switch.bedroom_left\",\"switch.bedroom_center\",\"switch.livingroom_l2\",\"switch.officeroom_top\"]}}]}",
        "name": "execute_services"
      },
      "role": "assistant"
    }
  }

stefanbode avatar stefanbode commented on June 6, 2024

Does this clearer message help you understand? Used 3.5-turbo.

Sorry, I had a problem talking to OpenAI: This model's maximum context length is 4097 tokens. However, you requested 4169 tokens (3959 in the messages, 60 in the functions, and 150 in the completion). Please reduce the length of the messages, functions, or completion.
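The numbers in that error add up as follows (figures taken verbatim from the message above):

```python
# Figures reported by the OpenAI error message.
context_limit = 4097   # gpt-3.5-turbo context window
messages = 3959        # tokens in the messages (mostly the exposed entities)
functions = 60         # tokens in the function specs
completion = 150       # tokens reserved for the completion (max_tokens)

requested = messages + functions + completion
print(requested, requested - context_limit)  # → 4169 72
```

So the request was only 72 tokens over budget; trimming the exposed entities slightly, or lowering the completion reservation, would already have fit.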

Doing the same request with gpt-3.5-turbo-1106 does work. It does not report an error and seems to execute the command correctly.

Question: which lights in the living room are on?

jekalmin avatar jekalmin commented on June 6, 2024

It seems you have exposed too many entities, so the prompt exceeds the model's context window.
The gpt-3.5-turbo-1106 model supports up to 16k tokens, while gpt-3.5-turbo supports 4k.
However, it's better to keep the context small, since cost scales with its size.

Try not to expose all entities; expose only the ones that are necessary.
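One way to apply that advice programmatically. This is a hypothetical sketch, not the integration's code; the entity names are made up, and `exposed_entities` stands in for the list the integration puts into the prompt:

```python
# Keep only entities from domains you actually control by voice.
exposed_entities = [
    {"entity_id": "light.wohnzimmer_licht", "name": "Wohnzimmer Licht"},
    {"entity_id": "sensor.outdoor_temp", "name": "Outdoor Temperature"},
    {"entity_id": "switch.bedroom_left", "name": "Bedroom Left"},
]

WANTED_DOMAINS = {"light", "switch"}

filtered = [
    e for e in exposed_entities
    # The domain is the part of the entity_id before the dot.
    if e["entity_id"].split(".", 1)[0] in WANTED_DOMAINS
]
print([e["entity_id"] for e in filtered])
# → ['light.wohnzimmer_licht', 'switch.bedroom_left']
```

In practice the same effect is achieved in the Home Assistant UI by un-exposing entities from Assist, which shrinks the prompt before it is ever built.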

jekalmin avatar jekalmin commented on June 6, 2024

Closing the issue.
Feel free to reopen it.

marnixt avatar marnixt commented on June 6, 2024

I can confirm the same issue that FutureProofHomes has: I cannot see any logs in Settings -> System -> Logs, but I do see info in the home-assistant.log file.

Small update: I do see errors in Home Assistant Core after I got a technical error:

This error originated from a custom integration.
Logger: custom_components.extended_openai_conversation
Source: custom_components/extended_openai_conversation/__init__.py:196
integration: Extended OpenAI Conversation (documentation, issues)
First occurred: 16:59:40 (2 occurrences)
Last logged: 17:04:17

token length(150) exceeded. Increase maximum token to avoid the issue.
token length(300) exceeded. Increase maximum token to avoid the issue.
Traceback (most recent call last):
File "/config/custom_components/extended_openai_conversation/__init__.py", line 196, in async_process
query_response = await self.query(user_input, messages, exposed_entities, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/__init__.py", line 388, in query
raise TokenLengthExceededError(response.usage.completion_tokens)
custom_components.extended_openai_conversation.exceptions.TokenLengthExceededError: token length(150) exceeded. Increase maximum token to avoid the issue.
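The traceback suggests a guard along these lines. This is a hedged reconstruction for illustration, not the integration's actual source: when the completion uses its entire token budget, the reply was likely truncated mid-JSON, so it is safer to fail loudly than to parse it.

```python
class TokenLengthExceededError(Exception):
    """Raised when the model used its entire completion-token budget."""
    def __init__(self, token_length: int):
        super().__init__(
            f"token length({token_length}) exceeded. "
            "Increase maximum token to avoid the issue."
        )

def check_completion(completion_tokens: int, max_tokens: int) -> None:
    # A completion that used every allowed token was almost certainly
    # cut off, so refuse to continue rather than parse a broken payload.
    if completion_tokens >= max_tokens:
        raise TokenLengthExceededError(completion_tokens)
```

This also explains the fix mentioned in the thread: raising the "maximum token" option gives the model room to finish the JSON, so the guard never fires.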

jekalmin avatar jekalmin commented on June 6, 2024

Thanks for reporting an issue, @FutureProofHomes , @marnixt !

Did you click LOAD FULL LOGS button?

marnixt avatar marnixt commented on June 6, 2024

jekalmin avatar jekalmin commented on June 6, 2024

Thanks @FutureProofHomes as always!

I'm not quite sure whether debug logs can be shown in the top pane with the filter applied.
I will look into it.
Please let me know if anyone has an idea.
