
Comments (3)

jekalmin commented on June 6, 2024
  1. I just tried this with two models, and the results are as follows.

    (Screenshots from 2023-11-18 comparing responses from gpt-3.5-turbo-1106 and gpt-4-1106-preview.)

    For example, to get the gpt-3.5-turbo-1106 model to answer general questions, you can modify the prompt by adding a sentence like the one below:

    You are not only limited to answer about smart home, but also general knowledge.

    If your model repeats what you asked, tell the model not to repeat by adjusting the prompt. The default prompt itself was made through this loop:

    1. change prompt
    2. ask question
    3. check behavior
    4. repeat steps 1-3

    Although this is not ideal, it works in general. Better prompts should be contributed here, since the quality depends not only on the model but also on the prompt. (A minimal test sketch follows this list.)

  2. This can probably also be fixed by changing the prompt. You can try removing the last two sentences of the default prompt. Those two sentences worked with the older gpt-3.5-turbo, but they seem less effective with recent models.

    I put the sentence "Do not execute service without user's confirmation." in because I wanted the model to ask me again before taking an action.

    I put the sentence "Do not restate or appreciate what user says, rather make a quick inquiry." in for cases like the following:

    user: "I'm done using restroom"
    assistant: "Do you want to turn off the light of restroom?"
    user: "Yes"
    assistant: "Turned the light off"

    Tweak the prompt, and please share it by contributing to the examples so everyone can make better use of it.
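For concreteness, here is a minimal sketch of the change-prompt / ask / check loop described above. It is not part of the integration; it assumes the `openai` Python package (>= 1.0) and an OpenAI-compatible endpoint, and the base URL, API key, and model name are placeholders. The system prompt combines the general-knowledge sentence from point 1 with the two trailing sentences discussed in point 2, so you can edit it and compare behavior:

```python
# Minimal sketch of the prompt-tweak loop: edit SYSTEM_PROMPT, re-run, inspect the reply.
# Assumes the `openai` package (>= 1.0) and an OpenAI-compatible endpoint;
# base_url, api_key, and model are placeholders, not values from this issue.
from openai import OpenAI

SYSTEM_PROMPT = (
    "I want you to act as smart home manager of Home Assistant. "
    # Sentence suggested above so gpt-3.5-turbo-1106 also answers general questions:
    "You are not only limited to answer about smart home, but also general knowledge. "
    # The two trailing sentences discussed in point 2; remove them to test the change:
    "Do not execute service without user's confirmation. "
    "Do not restate or appreciate what user says, rather make a quick inquiry."
)

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-placeholder")

def ask(question: str) -> str:
    """Send one question with the current system prompt and return the reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Steps 2-3 of the loop: ask a question, check the behavior, then edit the prompt and repeat.
print(ask("I'm done using restroom"))
```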


Someguitarist commented on June 6, 2024

Hmmm, so I think my issue might be related to running out of memory, or something similar. I'm running LocalAI, and if I remove the template prompts entirely I get a response. If I increase the context size to ~4-5k, it will actually answer the question once, but on asking the same question a second time it just repeats the question back, at least on a 1660 Ti.

You can close this issue, as I don't think it's related to your plugin at all. I think it's a setting somewhere in LocalAI to increase either VRAM or RAM allocations. If I can get it working, I'll post back with what I've changed!
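If it helps anyone debugging the same symptom, here is a small sketch (my own, not from the plugin) that asks the identical question twice against a LocalAI endpoint and flags when the second reply merely echoes the question. The URL and model name are assumptions; the actual fix is likely in LocalAI's context size or GPU memory settings, as noted above, not in this code:

```python
# Sketch for reproducing the "answers once, then echoes the question" symptom
# against a local OpenAI-compatible server; URL and model name are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-for-localai")
question = "What is the capital of France?"

for attempt in (1, 2):
    reply = client.chat.completions.create(
        model="gpt-4",  # whatever model name your LocalAI config exposes
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    # If the context window is too small, the second reply tends to echo the question.
    echoed = question.strip().lower() in reply.strip().lower()
    print(f"attempt {attempt}: echoed_question={echoed} reply={reply!r}")
```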


jekalmin commented on June 6, 2024

Oh, I have not tested with LocalAI yet :(
Hope to find a way to resolve it!
Thanks :D

