
async-openai's Introduction

async-openai

Async Rust library for OpenAI

Logo created by this repo itself

Overview

async-openai is an unofficial Rust library for OpenAI.

  • It's based on the OpenAI OpenAPI spec
  • Current features:
    • Assistants (Beta)
    • Audio
    • Chat
    • Completions (Legacy)
    • Embeddings
    • Files
    • Fine-Tuning
    • Images
    • Microsoft Azure OpenAI Service
    • Models
    • Moderations
    • WASM support (experimental, and only available in the experiments branch)
  • Supports SSE streaming on available APIs (a minimal streaming sketch follows the note below)
  • All requests, including form submissions (except SSE streaming), are retried with exponential backoff when rate-limited by the API server.
  • Ergonomic builder pattern for all request objects.

Note on Azure OpenAI Service (AOS): async-openai primarily implements the OpenAI spec and doesn't try to maintain parity with the AOS spec.
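
As a rough illustration of the SSE support, here is a minimal chat-completion streaming sketch. It reuses the chat types that appear in the issues further down this page; treat it as a sketch rather than the canonical example.

use async_openai::{
    types::{ChatCompletionRequestMessageArgs, CreateChatCompletionRequestArgs, Role},
    Client,
};
use futures::StreamExt;
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let client = Client::new();

    let request = CreateChatCompletionRequestArgs::default()
        .model("gpt-3.5-turbo")
        .messages([ChatCompletionRequestMessageArgs::default()
            .role(Role::User)
            .content("Tell me a short joke")
            .build()?])
        .build()?;

    // Each stream item is one SSE chunk containing a partial "delta".
    let mut stream = client.chat().create_stream(request).await?;
    while let Some(result) = stream.next().await {
        let response = result?;
        for choice in &response.choices {
            if let Some(content) = &choice.delta.content {
                print!("{content}");
            }
        }
    }
    Ok(())
}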

Usage

The library reads the API key from the OPENAI_API_KEY environment variable.

# On macOS/Linux
export OPENAI_API_KEY='sk-...'
# On Windows PowerShell
$Env:OPENAI_API_KEY='sk-...'
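
If you'd rather not rely on the environment variable, a client can also be built from a config value; a minimal sketch, assuming OpenAIConfig's with_api_key builder method:

use async_openai::{config::OpenAIConfig, Client};

// Sketch: supply the key explicitly instead of via OPENAI_API_KEY.
let config = OpenAIConfig::new().with_api_key("sk-...");
let client = Client::with_config(config);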

Image Generation Example

use async_openai::{
    types::{CreateImageRequestArgs, ImageSize, ResponseFormat},
    Client,
};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // create client, reads OPENAI_API_KEY environment variable for API key.
    let client = Client::new();

    let request = CreateImageRequestArgs::default()
        .prompt("cats on sofa and carpet in living room")
        .n(2)
        .response_format(ResponseFormat::Url)
        .size(ImageSize::S256x256)
        .user("async-openai")
        .build()?;

    let response = client.images().create(request).await?;

    // Download and save images to ./data directory.
    // Each url is downloaded and saved in dedicated Tokio task.
    // Directory is created if it doesn't exist.
    let paths = response.save("./data").await?;

    paths
        .iter()
        .for_each(|path| println!("Image file path: {}", path.display()));

    Ok(())
}

Scaled up for README, actual size 256x256

Contributing

Thank you for taking the time to contribute and improve the project. I'd be happy to have you!

All forms of contributions, such as new features requests, bug fixes, issues, documentation, testing, comments, examples etc. are welcome.

A good starting point would be to look at existing open issues.

To maintain the quality of the project, a minimum of the following is a must for code contributions:

  • Names & Documentation: All struct names, field names and doc comments are from the OpenAPI spec. Nested objects without names in the spec leave room for choosing an appropriate name.
  • Tested: Examples are the primary means of testing and should continue to work. New features require a supporting example.
  • Scope: Keep scope limited to APIs available in official documents such as API Reference or OpenAPI spec. Other LLMs or AI Providers offer OpenAI-compatible APIs, yet they may not always have full parity. In such cases, the OpenAI spec takes precedence.
  • Consistency: Keep code style consistent across all the "APIs" that library exposes; it creates a great developer experience.

This project adheres to the Rust Code of Conduct.

Complimentary Crates

  • openai-func-enums provides procedural macros that make it easier to use this library with the OpenAI API's tool calling feature. It also provides derive macros you can add to existing clap application subcommands for natural-language use of command-line tools. It also supports OpenAI's parallel tool calls and allows you to choose between running multiple tool calls concurrently or on their own OS threads.

License

This project is licensed under the MIT license.

async-openai's People

Contributors

64bit, adri1wald, buraktabn, cakecrusher, ccollie, christopherjmiller, czechh, djrodgerspryor, dmweis, frankfralick, ifsheldon, ironman5366, jonaro00, katya4oyu, luketpeterson, m1guelpf, ming08108, nodir-t, oslfmt, prosammer, retrage, sagebati, sgopalan98, sharifhsn, swnb, taoaozw, turingbuilder, vmg-dev, xuanwo, xutianyi1999


async-openai's Issues

I get the following error when trying to use model gpt-4

Works with gpt-3.5-turbo but not gpt-4. The error message doesn't help much.

error: stream failed: Invalid status code: 404 Not Found

Code

use async_openai::{
    types::{ChatCompletionRequestMessageArgs, CreateChatCompletionRequestArgs, Role},
    Client,
};
use futures::StreamExt;
use rocket::{post, response::stream::TextStream, serde::json::Json};
use serde::Deserialize;

#[derive(Deserialize)]
struct ChatInput {
    content: String,
    model: Option<String>,
}

#[post("/stream", format = "json", data = "<chat_input>")]
async fn stream(chat_input: Json<ChatInput>) -> TextStream![String] {
    let client = Client::new();
    let model = chat_input.model.clone().unwrap_or_else(|| "gpt-3.5-turbo".to_string());

    println!("{}", model);
    let model = "gpt-4-32k".to_string();
    let request = CreateChatCompletionRequestArgs::default()
        .model(model)
        // .max_tokens(512u16)
        .messages([
            ChatCompletionRequestMessageArgs::default()
                .content(chat_input.content.clone())
                .role(Role::User)
                .build()
                .unwrap(),
        ])
        .build()
        .unwrap();

    dbg!(&request);

    let mut stream = client.chat().create_stream(request).await.unwrap();

    TextStream! {
        while let Some(result) = stream.next().await {
            match result {
                Ok(response) => {
                    for chat_choice in &response.choices {
                        if let Some(ref content) = chat_choice.delta.content {
                            yield content.clone();
                        }
                    }
                }
                Err(err) => {
                    yield format!("error: {}", err);
                }
            }
        }
    }
}

Feature Request: Implement Serialize on response types

I'm trying to cache openai responses so that my tests will produce deterministic results, but the response types defined in this library don't implement Serialize, so I have to maintain a copy of them which does.

Would you be open to adding Serialize to all of the response types? This obviously increases the amount of generated code, but it makes consuming the data in cases like mine much easier.
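
For illustration, this is the kind of usage it would unlock; a hypothetical sketch, since Serialize is not derived on the response types today:

// Hypothetical once response types derive `Serialize`:
// persist the response and replay it in tests for determinism.
let response = client.chat().create(request).await?;
let cached = serde_json::to_string(&response)?;
std::fs::write("fixtures/chat_response.json", cached)?;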

Enable polymorphism / dynamic dispatch for AzureConfig and OpenAIConfig

Would like to enable something like this:

fn use_client<C: Config>(client: Box<dyn CommonClientTrait<ConfigType = C>>) {
    // You can now call any method defined in the CommonClientTrait on the client
    let models = client.models();
    // ...
}

fn main() {
    let openai_config = OpenAIConfig::default();
    let openai_client = Client::with_config(openai_config);
    use_client(Box::new(openai_client));

    let azure_config = AzureConfig::default();
    let azure_client = Client::with_config(azure_config);
    use_client(Box::new(azure_client));
}

Complimentary crate for async-openai: "openai-func-enums"

async-openai is 😙👌, but I found myself with a lot of function-call-related logic to deal with, and much more still to come, and inlining JSON objects isn't why I came to the Rust party. openai-func-enums lets you describe "functions" using enum types, with enum variants representing functions and their arguments; it generates the function JSON and deserializes responses into struct instances (types generated by the library) whose properties match your enum field types, so you can match on the variants and get on with it. Just thought I'd share in case it's useful to anyone else.

Unable to compile project due to compiler errors

Hello,

I've seen the dev day update for this and attempted a basic upgrade of my own project to enable the use of DALL-E 3 rather than DALL-E 2, but I've been unable to compile the changes due to errors coming from this library.

The only change that I have made to my project is adding .model(ImageModel::DallE3) to the image request.

I've put my Cargo.toml and the output of cargo build below, and I can confirm that I'm using rustc 1.73.0, which I updated today.

I wouldn't doubt that I did something wrong but I can't figure out what it is.

Cargo.toml

[dependencies]
tokio = { version = "1.29.1", features = ["macros", "rt-multi-thread"] }
serenity = { default-features = false, features = ["client", "gateway", "model", "rustls_backend", "cache"], version = "0.11.5"}
async-openai = "0.16.0"
rusqlite = { version = "0.29.0", features = ["bundled"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
reqwest = { version = "0.11", features = ["blocking"]}
uuid = "1.4.1"
rand = "0.8.5"

[profile.release.package."*"]
strip = true
opt-level = "z"

[profile.release]
lto = true

cargo build output (username redacted from file paths)

Compiling async-openai v0.16.0
error: unknown serde variant attribute `untagged`
  --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:59:13
   |
59 |     #[serde(untagged)]
   |             ^^^^^^^^

error: unknown serde variant attribute `untagged`
   --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:309:13
    |
309 |     #[serde(untagged)]
    |             ^^^^^^^^

error: unknown serde variant attribute `untagged`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1364:13
     |
1364 |     #[serde(untagged)]
     |             ^^^^^^^^

error: unknown serde variant attribute `untagged`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1619:13
     |
1619 |     #[serde(untagged)]
     |             ^^^^^^^^

error: unknown serde variant attribute `untagged`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1630:13
     |
1630 |     #[serde(untagged)]
     |             ^^^^^^^^

error[E0277]: the trait bound `types::types::ImageModel: Serialize` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:329:24
     |
329  | #[derive(Debug, Clone, Serialize, Default, Builder, PartialEq)]
     |                        ^^^^^^^^^ the trait `Serialize` is not implemented for `types::types::ImageModel`
...
340  |     /// The model to use for image generation.
     |     ------------------------------------------ required by a bound introduced by this call
     |
     = help: the following other types implement trait `Serialize`:
               &'a T
               &'a mut T
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
               (T0, T1, T2, T3, T4, T5)
             and 289 others
     = note: required for `std::option::Option<types::types::ImageModel>` to implement `Serialize`
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
     |
1901 |         T: Serialize;
     |            ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`

error[E0277]: the trait bound `types::types::ChatCompletionToolChoiceOption: Serialize` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1368:17
     |
1368 | #[derive(Clone, Serialize, Default, Debug, Builder, Deserialize, PartialEq)]
     |                 ^^^^^^^^^ the trait `Serialize` is not implemented for `types::types::ChatCompletionToolChoiceOption`
...
1454 |     #[serde(skip_serializing_if = "Option::is_none")]
     |     - required by a bound introduced by this call
     |
     = help: the following other types implement trait `Serialize`:
               &'a T
               &'a mut T
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
               (T0, T1, T2, T3, T4, T5)
             and 289 others
     = note: required for `std::option::Option<types::types::ChatCompletionToolChoiceOption>` to implement `Serialize`
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
     |
1901 |         T: Serialize;
     |            ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`

error[E0277]: the trait bound `types::types::ChatCompletionFunctionCall: Serialize` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1368:17
     |
1368 | #[derive(Clone, Serialize, Default, Debug, Builder, Deserialize, PartialEq)]
     |                 ^^^^^^^^^ the trait `Serialize` is not implemented for `types::types::ChatCompletionFunctionCall`
...
1461 |     /// Controls how the model responds to function calls.
     |     ------------------------------------------------------ required by a bound introduced by this call
     |
     = help: the following other types implement trait `Serialize`:
               &'a T
               &'a mut T
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
               (T0, T1, T2, T3, T4, T5)
             and 289 others
     = note: required for `std::option::Option<types::types::ChatCompletionFunctionCall>` to implement `Serialize`
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
     |
1901 |         T: Serialize;
     |            ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`

error[E0277]: the trait bound `types::types::ChatCompletionToolChoiceOption: Deserialize<'_>` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1455:22
     |
1455 |     pub tool_choice: Option<ChatCompletionToolChoiceOption>,
     |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionToolChoiceOption`
     |
     = help: the following other types implement trait `Deserialize<'de>`:
               &'a Path
               &'a [u8]
               &'a str
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
             and 341 others
     = note: required for `std::option::Option<types::types::ChatCompletionToolChoiceOption>` to implement `Deserialize<'_>`
note: required by a bound in `next_element`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\de\mod.rs:1729:12
     |
1729 |         T: Deserialize<'de>,
     |            ^^^^^^^^^^^^^^^^ required by this bound in `SeqAccess::next_element`

error[E0277]: the trait bound `types::types::ChatCompletionFunctionCall: Deserialize<'_>` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1468:24
     |
1468 |     pub function_call: Option<ChatCompletionFunctionCall>,
     |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionFunctionCall`
     |
     = help: the following other types implement trait `Deserialize<'de>`:
               &'a Path
               &'a [u8]
               &'a str
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
             and 341 others
     = note: required for `std::option::Option<types::types::ChatCompletionFunctionCall>` to implement `Deserialize<'_>`
note: required by a bound in `next_element`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\de\mod.rs:1729:12
     |
1729 |         T: Deserialize<'de>,
     |            ^^^^^^^^^^^^^^^^ required by this bound in `SeqAccess::next_element`

error[E0277]: the trait bound `types::types::ChatCompletionToolChoiceOption: Deserialize<'_>` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1455:22
     |
1455 |     pub tool_choice: Option<ChatCompletionToolChoiceOption>,
     |                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionToolChoiceOption`
     |
     = help: the following other types implement trait `Deserialize<'de>`:
               &'a Path
               &'a [u8]
               &'a str
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
             and 341 others
     = note: required for `std::option::Option<types::types::ChatCompletionToolChoiceOption>` to implement `Deserialize<'_>`
note: required by a bound in `next_value`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\de\mod.rs:1868:12
     |
1868 |         V: Deserialize<'de>,
     |            ^^^^^^^^^^^^^^^^ required by this bound in `MapAccess::next_value`

error[E0277]: the trait bound `types::types::ChatCompletionFunctionCall: Deserialize<'_>` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1468:24
     |
1468 |     pub function_call: Option<ChatCompletionFunctionCall>,
     |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionFunctionCall`
     |
     = help: the following other types implement trait `Deserialize<'de>`:
               &'a Path
               &'a [u8]
               &'a str
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
             and 341 others
     = note: required for `std::option::Option<types::types::ChatCompletionFunctionCall>` to implement `Deserialize<'_>`
note: required by a bound in `next_value`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\de\mod.rs:1868:12
     |
1868 |         V: Deserialize<'de>,
     |            ^^^^^^^^^^^^^^^^ required by this bound in `MapAccess::next_value`

error[E0277]: the trait bound `types::types::ChatCompletionToolChoiceOption: Deserialize<'_>` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1454:5
     |
1454 |     #[serde(skip_serializing_if = "Option::is_none")]
     |     ^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionToolChoiceOption`
     |
     = help: the following other types implement trait `Deserialize<'de>`:
               &'a Path
               &'a [u8]
               &'a str
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
             and 341 others
     = note: required for `std::option::Option<types::types::ChatCompletionToolChoiceOption>` to implement `Deserialize<'_>`
note: required by a bound in `config::_::_serde::__private::de::missing_field`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\private\de.rs:22:8
     |
22   |     V: Deserialize<'de>,
     |        ^^^^^^^^^^^^^^^^ required by this bound in `missing_field`

error[E0277]: the trait bound `types::types::ChatCompletionFunctionCall: Deserialize<'_>` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1461:5
     |
1461 |     /// Controls how the model responds to function calls.
     |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionFunctionCall`
     |
     = help: the following other types implement trait `Deserialize<'de>`:
               &'a Path
               &'a [u8]
               &'a str
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
             and 341 others
     = note: required for `std::option::Option<types::types::ChatCompletionFunctionCall>` to implement `Deserialize<'_>`
note: required by a bound in `config::_::_serde::__private::de::missing_field`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\private\de.rs:22:8
     |
22   |     V: Deserialize<'de>,
     |        ^^^^^^^^^^^^^^^^ required by this bound in `missing_field`

error[E0277]: the trait bound `SpeechModel: Serialize` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1665:53
     |
1665 | #[derive(Clone, Default, Debug, Builder, PartialEq, Serialize)]
     |                                                     ^^^^^^^^^ the trait `Serialize` is not implemented for `SpeechModel`
...
1675 |     /// One of the available [TTS models](https://platform.openai.com/docs/models/tts): `tts-1` or `tts-1-hd`
     |     --------------------------------------------------------------------------------------------------------- required by a bound introduced by this call
     |
     = help: the following other types implement trait `Serialize`:
               &'a T
               &'a mut T
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
               (T0, T1, T2, T3, T4, T5)
             and 289 others
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
     |
1901 |         T: Serialize;
     |            ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`

error[E0277]: the trait bound `Voice: Serialize` is not satisfied
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1665:53
     |
1665 | #[derive(Clone, Default, Debug, Builder, PartialEq, Serialize)]
     |                                                     ^^^^^^^^^ the trait `Serialize` is not implemented for `Voice`
...
1678 |     /// The voice to use when generating the audio. Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`.
     |     ----------------------------------------------------------------------------------------------------------------------------- required by a bound introduced by this call
     |
     = help: the following other types implement trait `Serialize`:
               &'a T
               &'a mut T
               ()
               (T0, T1)
               (T0, T1, T2)
               (T0, T1, T2, T3)
               (T0, T1, T2, T3, T4)
               (T0, T1, T2, T3, T4, T5)
             and 289 others
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
    --> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
     |
1901 |         T: Serialize;
     |            ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`

For more information about this error, try `rustc --explain E0277`.
error: could not compile `async-openai` due to 16 previous errors

text to speech - flac format does not seem to work

Problem

It's very strange: when I use SpeechResponseFormat::Flac, the file created is not valid and cannot be played. Something gets saved, but I'm not sure what it is (I cannot open it). (I did use the matching file extension for each format.)

However, when I use SpeechResponseFormat::Aac or SpeechResponseFormat::Mp3 explicitly, it works fine. Leaving the format as None works too, since the default is mp3.

Code

	let request = CreateSpeechRequestArgs::default()
		.input("Today is a wonderful day to build something people love!")
		.voice(Voice::Alloy)
		.model(SpeechModel::Tts1)
		.response_format(SpeechResponseFormat::Flac) // Mp3, Aac will work fine
		.build()?;

	let response = client.audio().speech(request).await?;

	response.save("./data/audio.flac").await?;

Note: I checked via the OpenAI TypeScript API, and flac does work there.

Can support for proxy services be added?

Hi,

Can support for proxy services be added? Something like this:

async fn execute<O>(&self, request: reqwest::Request) -> Result<O, OpenAIError>
where
    O: DeserializeOwned,
{
    let client = match self.proxy {
        Some(p) => reqwest::Client::builder().proxy(reqwest::Proxy::all(p)?).build()?,
        None => reqwest::Client::new(),
    };
    // ...
}

Failed to deserialize

In a non-reproducible way, errors like the following occur:

Error: failed to deserialize api response: expected value at line 1 column 1

For more info, these occur when the responses are awaited.

Aggressive Cargo.toml dependencies cause problems

I noticed the latest release specifies, as minimum versions, the latest releases of all dependency crates. In some cases this crate requires a version of another crate released only a few days ago.

The problem is that some larger projects version-lock certain dependencies, and it can take them many months to migrate to the latest. This means it's impossible to integrate async-openai into a project along-side certain other frameworks.

Obviously, when functionality from the latest version of a dependency is needed, it's necessary to specify that dependency version; however, automatically advancing to the latest version of every crate can cause a lot of unnecessary problems.

Thank you.

How to change base_url?

The default base_url is https://api.openai.com. How can I point the client at a different one?
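
For reference, a minimal sketch, assuming OpenAIConfig's with_api_base builder method:

use async_openai::{config::OpenAIConfig, Client};

// Point the client at a different (OpenAI-compatible) endpoint.
let config = OpenAIConfig::new().with_api_base("https://my-proxy.example.com/v1");
let client = Client::with_config(config);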

.env file, is it supported?

Add `Deserialize` to `OpenAIConfig` and `AzureConfig`

Please add a Deserialize derive to OpenAIConfig and AzureConfig. I store my Azure configs in a JSON file for testing. Without Deserialize, I need to copy the same struct, add Deserialize to it, and convert it to AzureConfig, which is unnecessarily verbose.
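
The desired usage would be roughly the following; a hypothetical sketch, valid only once the derive exists:

// Hypothetical once `AzureConfig` derives `Deserialize`:
let json = std::fs::read_to_string("azure_config.json")?;
let config: AzureConfig = serde_json::from_str(&json)?;
let client = Client::with_config(config);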

Use official types for chat completion stream response from spec

async-openai's existing types for the chat stream were written from observation of OpenAI responses.

Recently (16th June 2023) the official types were updated, and async-openai needs to use them as well.
Ref: https://github.com/openai/openai-openapi/pull/48/files

async-openai has used the type names from the spec for the majority of types; hence the ChatCompletionStreamResponseDelta and CreateChatCompletionStreamResponse names should be used.

Azure OpenAI Errors are unhandled

Here's one example that only gives a JSONDeserialize error.

{
  "error": {
    "code": 429,
    "message":  "Requests to the Creates a completion for the chat message Operation under Azure OpenAI API version 2023-05-15 have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 60 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit."
  }
}

JSONDeserialize(Error("missing field type"))

codex example can't find the model

     Running `/Users/robert/src/rust/openai/async-openai/examples/target/debug/codex`
Error: ApiError(ApiError { message: "The model: `code-davinci-002` does not exist", type: "invalid_request_error", param: None, code: Some(String("model_not_found")) })

Wasm support

Thanks for making this crate, it seems very useful :)

Currently the crate always depends on e.g. tokio which means it can't be compiled to wasm for use in frontends (or serverless wasm workers like on AWS/Cloudflare) that want to make OpenAI API requests.
It would be great if this crate could also be compiled to wasm.

Streaming response chunk size is incorrect

Hi,

I've been experimenting with a few libraries in different languages. I've got this working correctly, but the streamed response, even with your own examples, seems to arrive by newlines rather than by token. This means the time until first output is very long, introducing a large delay and a much less responsive overall feel.

Your own example has this clearly shown: (see https://raw.githubusercontent.com/64bit/async-openai/assets/completions-stream/output.svg)

Correctly implemented, the streaming should function in the same way as ChatGPT's web API. If you would like a reference on a library that implements this correctly, you can find this here: https://github.com/OkGoDoIt/OpenAI-API-dotnet#streaming

I hope you can fix this, because I'd prefer to be using rust for this project than C#!

invalid_request_error: 'content' is a required property

Hi @64bit!

TL;DR: async-openai seems to silently create unexpected, invalid requests, because the OpenAI API expects the content field to be present, even if it is null.

The content field seems to be required in situations that, AFAIK, are only mentioned en passant in the OpenAI API reference documentation:

content (string or null) Required
The contents of the message. content is required for all messages, and may be null for assistant messages with function calls.

(Emphases mine.) As such, I don't know if this should be addressed in async-openai. Depending on how you interpret the documentation, it might be the case that async-openai is in fact violating the API contract. In any case, I had such a bad time debugging that I think it's worth creating an issue.

The issue

The following fails by using curl directly:

$ curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"What is the weather like in Boston?"},{"role":"assistant","function_call":{"name":"get_current_weather","arguments":"{\"location\":\"Boston, MA\"}"}},{"role":"function","content":"{\"forecast\":[\"sunny\",\"windy\"],\"location\":\"Boston, MA\",\"temperature\":\"72\",\"unit\":null}","name":"get_current_weather"}],"functions":[{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"properties":{"location":{"description":"The city and state, e.g. San Francisco, CA","type":"string"},"unit":{"enum":["celsius","fahrenheit"],"type":"string"}},"required":["location"],"type":"object"}}],"temperature":0.0,"max_tokens":null}'
{
  "error": {
    "message": "'content' is a required property - 'messages.1'",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

The second message {"role":"assistant","function_call":{"name":"get_current_weather","arguments":"{\"location\":\"Boston, MA\"}"}} does not contain a content field. The solution is to insert one. Both "content":null and "content":"" work. So there's a difference between omitting the content field and having "content":null.

Why

I created the failing example above by serializing a CreateChatCompletionRequest directly. The issue lies in skipping serializing the content field if it is None:

/// The contents of the message.
/// `content` is required for all messages except assistant messages with function calls.
#[serde(skip_serializing_if = "Option::is_none")]
pub content: Option<String>,

The current workaround is to set content to an empty string:

aot::ChatCompletionRequestMessageArgs::default()
    .role(aot::Role::Assistant)
    .function_call(aot::FunctionCall { name, arguments })
    .content("")
    .build()?

Again, depending on how you interpret the OpenAI API documentation, this behaviour might break the API contract. What do you think?

Example of multiple function calls

GPT-4 Turbo supports multiple function calls, but I couldn't find any examples of this in async-openai yet. It took me a little while to stitch together how to do this, since it's a bit divergent from single function calls, so maybe this minimal demonstration will be helpful to someone else:

let request = CreateChatCompletionRequestArgs::default()
    .model("gpt-4-1106-preview")
    .messages([
        ChatCompletionRequestUserMessageArgs::default()
            .content(prompt)
            .build()?
            .into(),
        ChatCompletionRequestSystemMessageArgs::default()
            .content(SYSTEM_PROMPT)
            .build()?
            .into(),
    ])
    .max_tokens(512u16)
    .stream(false)
    .tools([
        ChatCompletionToolArgs::default()
            .r#type(ChatCompletionToolType::Function)
            .function(
                // ... build ChatCompletionFunctions here ...
            )
            .build()?,
        ...
    ])
    .build()?;

if let Some(tool_calls) = response_message.tool_calls {
    tool_calls
        .into_iter()
        .map(|tool_call| ...)
        .collect()?;
}

Add support for the new fine-tuning API (`/fine_tuning/jobs`) for GPT-3.5-Turbo or later

Using async-openai's FineTunes with gpt-3.5-turbo as the base model, the following response is returned:

Some("invalid_request_error"): gpt-3.5-turbo can only be fine-tuned on the new fine-tuning API (/fine_tuning/jobs). This API (/fine-tunes) is being deprecated. Please refer to our documentation for more information: https://platform.openai.com/docs/api-reference/fine-tunin

It seems that the async-openai API implementation needs to be updated before it can be used. 🥺

Repro.

  1. CreateFineTuneRequestArgs::default()
  2. .model("gpt-3.5-turbo") (https://docs.rs/async-openai/0.14.3/async_openai/types/struct.CreateFineTuneRequest.html#structfield.model )
  3. .build() and then request with the client: client.fine_tunes().create(fine_tune_request).await
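
Presumably the fix is a new API group mirroring the /fine_tuning/jobs endpoint; a hypothetical sketch (the type and method names below are guesses modeled on the spec, not necessarily the current crate API):

// Hypothetical /fine_tuning/jobs usage, mirroring the spec:
let request = CreateFineTuningJobRequestArgs::default()
    .model("gpt-3.5-turbo")
    .training_file("file-abc123")
    .build()?;
let job = client.fine_tuning().create(request).await?;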

FileInput from memory

I am trying to understand how to upload a file with contents from memory and can't figure out how. It appears that file input can only come from a file on disk? Do I have to save my bytes to the file system and read them back?

My overall goal is to upload a file my program receives, from an incoming network request to an assistant thread.

Allow customization of internal HTTP client

Hello! I'm using your library and I noticed that there's no option to customize the internal HTTP client used for API requests when calling Client::new(). This makes it impossible to route all HTTP requests through a proxy.

I was wondering if it would be possible to add a customization option for the internal reqwest client, similar to what chatgpt-api offers. This would allow for greater flexibility and more use cases for this library.
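
A hypothetical sketch of the requested API (with_http_client is illustrative only, not a method the crate exposes today):

// Hypothetical: inject a custom reqwest client, e.g. one with a proxy.
let http_client = reqwest::Client::builder()
    .proxy(reqwest::Proxy::all("http://localhost:8080")?)
    .build()?;
let client = Client::new().with_http_client(http_client);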

insufficient_quota

This Python example works fine:

import openai
import os
import json

def get_completion(prompt, model="gpt-3.5-turbo"):
    completion = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "what is 1 + 1?"}
        ],
        temperature=0.6
    )

    return completion.choices[0].message



names = [obj["id"] for obj in openai.Model.list()["data"]]
json_names = json.dumps(names)
# print(openai.Model.list())

# print(json_names)
print(get_completion("What is 1 + 1?"))

But when I run examples/chat, I get this error:

❯ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.30s
     Running `/Users/davirain/rust/async-openai/examples/target/debug/chat`
Error: ApiError(ApiError { message: "You exceeded your current quota, please check your plan and billing details.", type: "insufficient_quota", param: None, code: None })

Unable to request JSON chat completion due to private field

ChatCompletionResponseFormat was introduced to support JSON output in the chat completions API, but I believe the r#type field should be public instead of private? Apologies if I'm missing something. Happy to open a PR if this is the case.

Boxed Future types (e.g. CompletionResponseStream, etc.) aren't Send

The following code won't compile because CompletionResponseStream isn't marked as Send:

    fn interpret_bool(token_stream: &mut CompletionResponseStream) -> BoxFuture<'_, bool> {
        async move {
            while let Some(response) = token_stream.next().await {
                match response {
                    Ok(response) => {
                        let token_str = &response.choices[0].text.trim();
                        if !token_str.is_empty() {
                            return token_str.contains("yes") || token_str.contains("Yes");
                        }
                    },
                    Err(_) => panic!()
                }
            }
            false
        }.boxed()
    }

This limits the ability to integrate this crate into a project using a multi-threaded runtime. The issue isn't fundamental however, because the underlying Stream trait is Send. I have prototyped a change to address this. I will submit a PR.

Thank you.

Does it support context?

Thank you for this excellent project!
I want to ask: does it support conversation context? From looking through the code, it seems this might not be supported, or maybe the API doesn't support it.
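
For what it's worth, the underlying chat API is stateless: "context" is carried by resending the prior messages in the messages array on every request. A sketch using the same chat types as elsewhere on this page:

// Maintain context by accumulating the conversation and resending it.
let mut history = vec![ChatCompletionRequestMessageArgs::default()
    .role(Role::System)
    .content("You are a helpful assistant.")
    .build()?];

// After each turn, push the user message (and later the assistant reply)
// into `history`, then send the whole history in the next request.
history.push(
    ChatCompletionRequestMessageArgs::default()
        .role(Role::User)
        .content("What is 1 + 1?")
        .build()?,
);

let request = CreateChatCompletionRequestArgs::default()
    .model("gpt-3.5-turbo")
    .messages(history.clone())
    .build()?;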

Tokenizer support

Whenever there are updates to this library, it causes compatibility issues with tiktoken-rs. It could be better to flip the script and have this library depend on tiktoken-rs instead, especially since this library often needs changes to keep up with OpenAI API updates.

zurawiki/tiktoken-rs#50

Trouble with teloxide after upgrading to 0.14.0 from 0.12.0

I recently tried to upgrade my application to the latest version in order to make use of some of the new stuff announced at dev day. I have a Telegram bot that uses teloxide and makes use of the image and transcription endpoints. However, I was running into this error when compiling:

the trait bound `fn(teloxide::Bot, teloxide::prelude::Message) -> impl futures_util::Future<Output = Result<(), RequestError>> {respond_to_voice}: Injectable<_, _, _>` is not satisfied
the following other types implement trait `Injectable<Input, Output, FnArgs>`:
  <Asyncify<Func> as Injectable<Input, Output, ()>>
  <Asyncify<Func> as Injectable<Input, Output, (A, B)>>
  <Asyncify<Func> as Injectable<Input, Output, (A, B, C)>>
  <Asyncify<Func> as Injectable<Input, Output, (A, B, C, D)>>
  <Asyncify<Func> as Injectable<Input, Output, (A, B, C, D, E)>>
  <Asyncify<Func> as Injectable<Input, Output, (A, B, C, D, E, F)>>
  <Asyncify<Func> as Injectable<Input, Output, (A, B, C, D, E, F, G)>>
  <Asyncify<Func> as Injectable<Input, Output, (A, B, C, D, E, F, G, H)>>
and 2 others

Instead of trying to upgrade directly to the latest version from 0.12, I tried upgrading incrementally until discovering that the issue appears with the upgrade to 0.14. It looks like it might be related to the changes around retrying on rate limits, but I'm at a bit of a loss as to how to fix it.

My implementation is pretty simple in terms of how I'm calling the library itself:

pub async fn get_ai_transcription(audio: &String) -> Result<String, CustomError> {
    let client = Client::new();

    let request = CreateTranscriptionRequestArgs::default()
        .file(audio)
        .model("whisper-1")
        .build()?;

    let response = client.audio().transcribe(request).await?;

    Ok(response.text)
}

but I was able to determine that it was occurring due to the let response = client.audio().transcribe(request).await?; line.

After trying a lot of debugging to even get the error message to change, I added this basic test

fn test() {
    fn assert_send(_: impl Send) {}
    assert_send(get_ai_transcription(&"".to_string()));
}

and finally got a more readable error that disappears in 0.13 and reappears in 0.14.

    |
309 |     assert_send(get_ai_transcription(&"".to_string()));
    |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ future returned by `get_ai_transcription` is not `Send`
    |
    = help: the trait `std::marker::Send` is not implemented for `dyn futures_util::Future<Output = Result<Form, OpenAIError>>`
note: required by a bound in `assert_send`
   --> src/ai.rs:308:28
    |
308 |     fn assert_send(_: impl Send) {}
    |                            ^^^^ required by this bound in `assert_send`

Unfortunately, now I'm a bit stuck. I'm pretty new to Rust, so debugging can still be a bit difficult. Do you have any suggestions or ideas for what I might be able to try?

Failed to deserialize api response: invalid type: null, expected a sequence at line 27763 column 23

Hey!

I'm currently using this library to generate embeddings, but I'm facing this issue when generating them in batches with a batch size of 25:

        let mut req_count = 0;
        for chunk in descriptions_for_embeddings.chunks(25) {
            req_count += 1;
            info!("Request: {}", req_count);
            let embeddings_res = OPENAI_CLIENT
                .embeddings()
                .create(CreateEmbeddingRequest {
                    model: "text-embedding-ada-002".to_string(),
                    input: EmbeddingInput::StringArray(Vec::from(chunk.clone())),
                    user: Some(ctx.task_owner.to_string()),
                })
                .await?;
            embeddings.extend(embeddings_res.data);
        }

But after a few requests (anywhere between 12 and 16) this just fails for some reason. I'm definitely not hitting any rate limits either, so I'm not sure what is causing this.
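
As an aside, the same request can likely be written with the crate's builder pattern; a sketch assuming CreateEmbeddingRequestArgs exists alongside the plain struct:

// Sketch: builder-style equivalent of the struct literal above.
let request = CreateEmbeddingRequestArgs::default()
    .model("text-embedding-ada-002")
    .input(chunk.to_vec())
    .build()?;
let embeddings_res = OPENAI_CLIENT.embeddings().create(request).await?;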

Update library to match OpenAI API release 2.0.0

OpenAI released a new version of their API here.

Major changes are removing deprecated endpoints as well as updating the CreateChatCompletionStreamResponse and creating ChatCompletionStreamResponseDelta. Those types are brought up in #75.
