
Comments (8)

zenchantlive avatar zenchantlive commented on July 18, 2024

i am having the same issue

from gpt-pilot.

hafizSiddiq7675 avatar hafizSiddiq7675 commented on July 18, 2024

Same issue

from gpt-pilot.

zvone187 avatar zvone187 commented on July 18, 2024

This happens when you have a small limit on the number of tokens per minute. OpenAI puts 10k tokens per minute by default which is too little for GPT Pilot, but you can request a limit increase from OpenAI.

from gpt-pilot.

CyKiller avatar CyKiller commented on July 18, 2024

We should add a step: to improve the pilot and allow user feedback during confirmations and error handling, we can modify the create_gpt_chat_completion function.

First, instead of just asking the user to press ENTER to confirm, we can use the questionary library to create a more interactive prompt. Second, when an exception occurs, we can ask the user for advice or feedback via questionary.text before deciding whether to retry the request. For now the feedback is simply printed, but that print statement can be replaced with any action we want to take on the user's input. This gives us a control point in the terminal: a process stuck in a loop or burning through tokens halts and waits for an answer after an error, instead of blindly issuing the next request.

We can then use the user's feedback as each task requires. For example, we could log it, use it to alter the program's behavior, or even send it back to the server for further analysis. These changes should make the program more interactive and responsive to our input, and could help avoid issues like infinite loops or excessive token usage, rate limit or not.
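A rough sketch of the idea above. This is an assumption about how it could be wired up, not the actual gpt-pilot implementation: the completion function is passed in as a parameter, and the retry flow and prompt wording are illustrative.

```python
try:
    import questionary

    def ask_feedback(prompt):
        return questionary.text(prompt).ask()
except ImportError:
    # Fall back to plain input() if questionary is not installed.
    def ask_feedback(prompt):
        return input(prompt + " ")


def chat_with_feedback(create_gpt_chat_completion, messages, max_retries=3):
    """Call the completion function, pausing for user feedback on each error."""
    for attempt in range(1, max_retries + 1):
        try:
            return create_gpt_chat_completion(messages)
        except Exception as exc:
            print(f"Request failed (attempt {attempt}): {exc}")
            feedback = ask_feedback("Any advice before retrying? (ENTER to just retry)")
            if feedback:
                # Placeholder: log it, alter behavior, or send it upstream.
                print(f"User feedback: {feedback}")
    raise RuntimeError("Giving up after repeated failures")
```

The key point is that the error path blocks on the user rather than looping, which is where the control point comes from.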

from gpt-pilot.

Zate avatar Zate commented on July 18, 2024

This happens when you have a small limit on the number of tokens per minute. OpenAI puts 10k tokens per minute by default which is too little for GPT Pilot, but you can request a limit increase from OpenAI.

No, they do not raise the limit on GPT-4, according to their own docs and forums.

I'd love to see a combination of using GPT-3.5 Turbo for places where it doesn't matter, with GPT-4 just used for the important pieces.
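That split could be as simple as a routing function. The task categories and the decision of which steps are "important" below are illustrative assumptions, not gpt-pilot's actual step names:

```python
# Route expensive, accuracy-critical steps to GPT-4 and everything else to
# the cheaper model. CRITICAL_TASKS is a hypothetical example set.
CRITICAL_TASKS = {"architecture", "code_review", "debugging"}


def pick_model(task_type: str) -> str:
    return "gpt-4" if task_type in CRITICAL_TASKS else "gpt-3.5-turbo"
```

This also helps with the rate-limit problem indirectly, since GPT-3.5 Turbo has much higher default token-per-minute limits than GPT-4.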

It'd be nice to have it implement some kind of automated handling of the rate limit, such as https://help.openai.com/en/articles/5955604-how-can-i-solve-429-too-many-requests-errors or similar.

from gpt-pilot.

zenchantlive avatar zenchantlive commented on July 18, 2024

from gpt-pilot.

CyKiller avatar CyKiller commented on July 18, 2024

It'd be nice to have it implement some kind of automated handling of the rate limit, such as https://help.openai.com/en/articles/5955604-how-can-i-solve-429-too-many-requests-errors or similar.

We can test this, I guess. We'd likely need to update the llm_connection.py file to include an exponential back-off mechanism similar to the one described in the OpenAI article: wrap the existing API request code in stream_gpt_completion in a retry loop, and if a "429: Too Many Requests" error is encountered, wait for a set sleep time and then retry the request. I haven't thought it out thoroughly, but the sleep time could double with each retry, up to a maximum number of retries, so it feels uninterrupted on our end. Any advice here would be helpful.
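A minimal sketch of that retry loop, along the lines of the OpenAI help article. The function name, delays, and error-matching here are assumptions for illustration, not the actual llm_connection.py code:

```python
import random
import time


def with_backoff(request_fn, *args, max_retries=6, base_delay=1.0, max_delay=60.0):
    """Retry request_fn on 429 errors, doubling the wait each time."""
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return request_fn(*args)
        except Exception as exc:
            # Only retry rate-limit errors, and give up on the last attempt.
            if "429" not in str(exc) or attempt == max_retries - 1:
                raise
            # Sleep with a little jitter, then double the delay up to a cap.
            time.sleep(delay + random.uniform(0, delay / 2))
            delay = min(delay * 2, max_delay)
```

The jitter is there so that several retrying clients don't all hit the API at the same instant; the cap keeps a long outage from producing multi-minute sleeps.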

The article also notes: "we will not increase limits on gpt-4, text-davinci-003, gpt-3.5-turbo-16k, or fine-tuned models at this time."

from gpt-pilot.

nalbion avatar nalbion commented on July 18, 2024

This is fixed now at https://github.com/Pythagora-io/gpt-pilot/blob/main/pilot/utils/llm_connection.py#L152

I do like @CyKiller's suggestion of exponential back-off. Currently it follows the instructions in the response, which is always "Please try again in 6ms".
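Parsing that "try again in …" hint could be combined with back-off by using it as the starting delay. The exact message format is an assumption here and may change on OpenAI's side, hence the fallback:

```python
import re


def parse_retry_hint(message, default=1.0):
    """Extract a wait time in seconds from an OpenAI-style rate-limit message.

    Handles hints like "Please try again in 6ms" or "... in 20s"; falls back
    to a default when no hint is found.
    """
    match = re.search(r"try again in (\d+(?:\.\d+)?)(ms|s)", message)
    if not match:
        return default
    value, unit = float(match.group(1)), match.group(2)
    return value / 1000.0 if unit == "ms" else value
```

Since the server-suggested 6ms is clearly too short in practice, treating it as a floor and still doubling on repeated failures seems like the safer combination.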

@Zate also suggests using "GPT-3.5 Turbo for places where it doesn't matter", which is also a good idea.

from gpt-pilot.
