
irudnyts / openai


An R package wrapping the OpenAI API

Home Page: https://irudnyts.github.io/openai/

License: Other

R 100.00%
api ml nlp openai package r

openai's People

Contributors

irudnyts

openai's Issues

create_completion returns a corrupted list

First of all, thanks for your effort on this library! Great work.

I'm trying to reproduce the keyword-extraction example. As in the sandbox, it should return a list of keywords extracted from the prompt. Note that in my code below, I've heavily truncated the prompt to save space here.

kw_res <- create_completion(
  model = "text-davinci-003",
  max_tokens = 60,
  temperature = 0.5,
  top_p = 1,
  frequency_penalty = 0.8,
  presence_penalty = 0,
  prompt = "Extract keywords from this text:\n\nKun näitä ilmastosuunnitelmia työnnetään kunnille ja kun kaikkien kuntien — ei nyt ihan kaikkien, mutta suurimman osan kunnista — rahat ovat aivan loppu"
)

The function runs without error, but the returned list is somehow corrupted. The keyword list is only a stub, and it is preceded with text that does not seem to be in the right place.

> kw_res$choices$text
[1] "ksen taholta tehdään jotain, niin se ei ole pelkkä näennäistoimintaa vaan se on sellaista, mikä todella vaikuttaa.\n\nKeywords: ilmast"

In the sandbox, with the same argument values, the function returns a dozen keywords or so.

I wonder if the issue is somehow caused by the OpenAI API itself?
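One thing worth checking is whether the completion was simply cut off by the token budget. A minimal sketch, assuming the parsed response keeps the API's finish_reason field alongside the text (the exact column names may differ across package versions):

kw_res$choices$finish_reason   # "length" would mean the output hit the max_tokens limit

# If it was truncated, retrying with a larger budget may return the full keyword list:
kw_res <- create_completion(
  model = "text-davinci-003",
  max_tokens = 256,            # considerably more room than 60 tokens
  temperature = 0.5,
  top_p = 1,
  frequency_penalty = 0.8,
  presence_penalty = 0,
  prompt = "Extract keywords from this text:\n\n..."   # same prompt as above
)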

Returning error code?

Hi! I have been using this package, and I must say I really liked it and greatly appreciate it.

Would it be possible to return a status code with the response indicating whether the returned item is a real response or an error? Since the ChatGPT release, the server has occasionally started returning error messages instead of replies. It would be awesome if the openai package handled this.

Thanks!
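In the meantime, a caller can guard against these failures, assuming the package signals API errors through stop(), i.e. as regular R conditions. A minimal sketch; safe_completion() is a hypothetical helper, not part of the package:

safe_completion <- function(...) {
  tryCatch(
    list(ok = TRUE, result = create_completion(...)),
    error = function(e) list(ok = FALSE, error = conditionMessage(e))
  )
}

res <- safe_completion(model = "text-davinci-003", prompt = "Say hello")
if (!res$ok) message("API call failed: ", res$error)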

connection timeout

What should I do about this?
Error in curl::curl_fetch_memory(url, handle = handle) :
Timeout was reached: [api.openai.com] Connection timeout after 10007 ms
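The ~10-second figure suggests libcurl's connect timeout rather than any API limit, and a connection timeout usually points to a network or proxy problem. A hedged workaround, assuming the package sends its requests through httr so that a global configuration applies:

library(httr)
set_config(config(connecttimeout = 60))   # allow up to 60 s to establish the connection

res <- create_completion(model = "text-davinci-003", prompt = "Say hello")

reset_config()                            # restore httr defaults afterwards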

Error in paste("OpenAI API probably has been changed. If you see this, please", : OpenAI API probably has been changed.

I've been trying to loop create_chat_completion() over different message lists, and I got the following error on the third entry in the list:


Error in paste("OpenAI API probably has been changed. If you see this, please",  : 
  OpenAI API probably has been changed. If you see this, please rise an issue at: https://github.com/irudnyts/openai/issues

Per the instruction, I'm raising an issue here. Any help would be appreciated!

EDIT: I've found a workaround using the retry() function from the retry package, where I substitute the following for the function call in the loop:

library(retry)

retry(
  expr = create_chat_completion(text),
  when = "502|API",            # retry when the error message matches this pattern
  max_tries = 3,
  interval = runif(1, 6, 10)   # wait a random 6-10 seconds between attempts
)

Still not entirely sure what the source of the error is, but this seems to work. It might be worth adding a more informative error message in the future.

unable to load shared object '/Users/jeroen/Lokaal/R/packages/curl/libs/curl.so'

I get this error when I use the example command:
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/Users/myname/Lokaal/R/packages/curl/libs/curl.so':
dlopen(/Users/myname/Lokaal/R/packages/curl/libs/curl.so, 0x0006): tried: '/Users/myname/Lokaal/R/packages/curl/libs/curl.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/myname/Lokaal/R/packages/curl/libs/curl.so' (no such file), '/Users/myname/Lokaal/R/packages/curl/libs/curl.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))

Not compatible with M1 Macs yet?
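The error itself says the installed curl binary is x86_64 while the running R session is arm64, so this looks like a curl installation mismatch rather than an openai limitation. A sketch of how to check and rebuild, assuming an arm64 build of R is in use:

R.version$arch                                # "aarch64" for a native Apple Silicon build of R
install.packages("curl")                      # reinstall curl for the current architecture
# install.packages("curl", type = "source")   # if only a source build matches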

Update for `gpt-3.5-turbo`

Hi- thanks so much for the package. It seems like one of the URLs called in the create_completion() function may need to be updated for gpt-3.5-turbo. When I request it as a model option, I get the following error message:

Error: OpenAI API request failed [404]:

This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?

If you have the bandwidth to take this on, I'd be very grateful!
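As a stopgap, the package's create_chat_completion() already targets the v1/chat/completions endpoint that the error message points to. A minimal sketch (the message contents are illustrative):

res <- create_chat_completion(
  model = "gpt-3.5-turbo",
  messages = list(
    list(role = "system", content = "You are a helpful assistant."),
    list(role = "user", content = "Extract keywords from this text: ...")
  )
)
str(res$choices)   # the reply text sits in choices; exact column names depend on the package version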

Include option for automatic retries

As it turns out, the OpenAI API frequently returns very unhelpful error messages like this one:

The server had an error while processing your request. Sorry about that!

There are also several threads on this issue, for instance in the OpenAI Developer forum.

I think some parameter like max_retries could be really useful to handle such transient errors that arise while applying functions like create_chat_completion().

The httr2 package provides the very useful req_retry() function, which lets you define the maximum number of retries, which errors count as transient, and how long to wait between retries. Is there any specific reason you stuck with httr for the openai package?
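For reference, a sketch of what req_retry() provides, calling the chat endpoint with httr2 directly rather than through the package (the list of transient statuses is illustrative):

library(httr2)

resp <- request("https://api.openai.com/v1/chat/completions") |>
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
  req_body_json(list(
    model = "gpt-3.5-turbo",
    messages = list(list(role = "user", content = "Say hello"))
  )) |>
  req_retry(
    max_tries = 4,
    is_transient = \(resp) resp_status(resp) %in% c(429, 500, 502, 503)
  ) |>
  req_perform()

resp_body_json(resp)$choices[[1]]$message$content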

R >= 4.2 dependency

Is it necessary?

Installing CRAN version on 4.1.1

Warning in install.packages :
  package ‘openai’ is not available for this version of R

Installing Github version on 4.1.1

ERROR: this R is version 4.1.1, package 'openai' requires R >= 4.2

Exceeded current quota, but none was used

I have set the OpenAI API key as follows:

Sys.setenv(OPENAI_API_KEY = 'XXXX')

And proceeded to try this out, but got the following error:

library(openai)

create_completion(
    model = "babbage-002",
    prompt = "Generate a question and an answer"
)

Error: OpenAI API request failed [429]:

You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.

I just logged on and checked my quota, and I am not over it at all. Is there another step I need to take to try this out?

Getting an error saying "OpenAI API probably has been changed." Might be server overload; the problem has persisted for hours.

My API calls return an error that either says:

Error in paste("OpenAI API probably has been changed. If you see this, please",  : 
  OpenAI API probably has been changed. If you see this, please rise an issue at: https://github.com/irudnyts/openai/issues

or

Error: OpenAI API request failed [503]:

That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. 

I'm sometimes able to process 3-5 calls at a time through the API, but usually, it stops after a single one. Last night I was able to make about 1,000 calls with no issues.

Return entire response to enable rate limit monitoring

The headers of the response contain information about the remaining requests and tokens before limits are exceeded.

Returning the response headers along with the content (or just the entire response as is) would give users who need to slow down their requests access to this information.
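For illustration, the headers in question can be seen by calling the endpoint with httr directly (the header names follow OpenAI's documented x-ratelimit-* convention):

library(httr)

resp <- POST(
  "https://api.openai.com/v1/chat/completions",
  add_headers(Authorization = paste("Bearer", Sys.getenv("OPENAI_API_KEY"))),
  content_type_json(),
  body = jsonlite::toJSON(
    list(
      model = "gpt-3.5-turbo",
      messages = list(list(role = "user", content = "Say hello"))
    ),
    auto_unbox = TRUE
  )
)

headers(resp)[["x-ratelimit-remaining-requests"]]
headers(resp)[["x-ratelimit-remaining-tokens"]]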

Include messages argument as part of output object?

Hi, great package.
For the sake of reproducibility and for ease of continuing a chat, could the result object returned by create_chat_completion() include the input settings (in particular the messages argument)?
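A sketch of the kind of helper this would make trivial; chat_turn() is hypothetical and not part of the package, and the way choices is flattened may differ between versions:

chat_turn <- function(messages, user_text, model = "gpt-3.5-turbo") {
  messages <- append(messages, list(list(role = "user", content = user_text)))
  res <- create_chat_completion(model = model, messages = messages)
  reply <- res$choices$message.content[1]   # adjust to however the package flattens choices
  list(
    messages = append(messages, list(list(role = "assistant", content = reply))),
    response = res
  )
}

turn1 <- chat_turn(list(), "Name three R packages for plotting.")
turn2 <- chat_turn(turn1$messages, "Which of those is the oldest?")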

A bug in create_chat_completion()

I am not sure whether it is a bug or not. When using:

response <- create_chat_completion(
    model = "gpt-3.5-turbo",
    ...
)

we set the model to "gpt-3.5-turbo", but when we look at the result:

response
$id
[1] "chatcmpl-6tat7xkifZsnPUi3t1C0SePRpZQ6A"

$object
[1] "chat.completion"

$created
[1] 1678707857

$model
[1] "gpt-3.5-turbo-0301"

Here it shows the model "gpt-3.5-turbo-0301"

I am not sure if it is correct.

openai cannot recognize my API keys

I am trying to use openai, but when I include my API key it is not recognized.

Sys.setenv(
    SPOTIFY_CLIENT_ID = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
)
create_completion(
    engine_id = "ada",
    prompt = "Generate a question and an answer"
)

Error: OpenAI API request failed [401]:

You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://beta.openai.com.
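The package reads the key from the OPENAI_API_KEY environment variable, so setting SPOTIFY_CLIENT_ID as above is never picked up. A corrected sketch (argument names may differ slightly between package versions):

Sys.setenv(OPENAI_API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")

create_completion(
    model = "text-davinci-003",   # recent versions take `model` rather than `engine_id`
    prompt = "Generate a question and an answer"
)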

logprobs for chat completions only in the Python version

The API was recently updated to return logprobs for chat completions, but the change appears to be present only in the Python library and not in the R package. I would really appreciate this being kept up to date and ported over!

Other models besides OpenAI?

A number of open-source models like LLaMA 2 can run in local environments where a web server (LM Studio, Koboldcpp, etc.) exposes the same endpoints as OpenAI. Can we have a flag/option that allows specifying a different OpenAI-compatible endpoint?
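A sketch of what such an option could look like; base_url is hypothetical and not an argument the package currently has, and LM Studio's default local address is used purely as an illustration:

create_chat_completion(
  model = "local-model",
  messages = list(list(role = "user", content = "Say hello")),
  base_url = "http://localhost:1234/v1"   # hypothetical argument
)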

Whisper interface not working in Spanish

I was running a test to compare Whisper to Google's transcription. This was for an audio file in Spanish, so I called it like this:

transcription <- create_transcription(file = "sound-data/29320.mp3",
                                      language = "es",
                                      model = "whisper-1")

What I get back is:

"Más temprano que tarde. Cuestión Política. Daniel Ciaschetti, doctor en Ciencia Política, docente e investigador de la Universidad de la República. Daniel Ciaschetti, doctor en Ciencia Política, doctor en Ciencia Política, doctor en Ciencia Política, doctor en Ciencia Política, doctor en Ciencia Política,

and so on for a couple of hundred lines. It seems it transcribes the first sentence and then gets stuck, just repeating the last part of it for the rest of the transcript. I am not sure whether this is a problem with the R package or a bug in the API.
