

chatgpt_rs's People

Contributors

brendon-shf, dependabot[bot], iepathos, ikkerens, inszee, m1sk9, maxuss, nesso-pfl, philipp-m


chatgpt_rs's Issues

unwrap() is causing the main thread to panic, and SerdeError

Following your example, instead of calling unwrap() I pasted the session key directly into ChatGPT::new("SESSION_KEY_HERE") to avoid panicking.

The program ran without errors for about 2-5 seconds, then hit another error: Error: SerdeError(Error("invalid type: null, expected struct ConversationResponse", line: 1, column: 4))
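
For the panic itself, the usual remedy is to match on the Result instead of calling unwrap(), so a transport or deserialization failure is reported rather than crashing the main thread. A minimal sketch (method and accessor names follow the crate's README examples; adjust to your version):

// Sketch: handle the Result instead of unwrapping it.
match client.send_message("Hello!").await {
    Ok(response) => println!("{}", response.message().content),
    Err(e) => eprintln!("request failed: {e}"),
}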

support more roles

Great project! It would be perfect if more roles and models could be supported.

gpt_function macro can't figure out the Result type

In the following function:

pub async fn go_get_remote_response(prompt: &str) -> Result<String, Box<dyn Error>> {
    // send to remote api
    let openai_api_key =
        env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY must be set in .env file");
    let client = ChatGPT::new(openai_api_key)?;
    let mut conversation = client.new_conversation();
    conversation.add_function(read_file());
    conversation.add_function(write_file());

    if let Ok(response) = conversation.send_message_functions(prompt).await {
        // Walk the returned choices once and return when the model
        // signals completion.
        for message_choice in &response.message_choices {
            let message = message_choice.message.clone();
            println!("{:?}: {}", message.role, message.content);
            if message_choice.finish_reason == "stop" {
                return Ok(message.content);
            }
        }
    }
    Ok("test".to_string())
}

with the function read_file() defined as follows:

use tokio::fs::File;
use tokio::io::{self, AsyncReadExt};

/// Read the contents of a file and return them as a string
///
/// * filename - The name of the file to read
/// * Returns - The contents of the file as a string
#[gpt_function]
pub async fn read_file(filename: String) -> Result<String, io::Error> {
    let mut file = File::open(filename).await?;
    let mut contents = String::new();
    file.read_to_string(&mut contents).await?;
    Ok(contents)
}

the first doc-comment line of the macro, /// Read the contents of a file and return them as a string, displays the following error:

type annotations needed for `std::result::Result<std::string::String, E>`rustc ...
test.rs(50, 60): consider giving `result` an explicit type, where the type for type parameter `E` is specified: `: std::result::Result<std::string::String, E>`

This error doesn't make sense to me, as the Result definitely has explicit type parameters provided.

I suspect that the gpt_function macro is shadowing the stdlib Result type with chatgpt::Result, which wraps it?
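
If that suspicion is right, one workaround to try (an untested sketch; whether the macro accepts fully qualified paths is an assumption) is to avoid the bare Result name entirely:

#[gpt_function]
pub async fn read_file(filename: String) -> std::result::Result<String, std::io::Error> {
    // Fully qualified paths sidestep any `Result` alias the macro pulls in.
    let mut file = tokio::fs::File::open(filename).await?;
    let mut contents = String::new();
    tokio::io::AsyncReadExt::read_to_string(&mut file, &mut contents).await?;
    Ok(contents)
}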

No credentials are available in security package

I am trying to run my project by calling a Rust function from a C++ project in Visual Studio.

When I pass a custom string, I get the following error:

An error occurred when processing a request: error sending request for url (https://api.openai.com/v/chat/completions): error trying to connect: No credentials are available in the security package (os error -2146893042)

Caused by: error sending request for url (https://api.openai.com/v/chat/completions): error trying to connect: No credentials are available in the security package (os error -2146893042)

Caused by: error trying to connect: No credentials are available in the security package (os error -2146893042)

Caused by: No credentials are available in the security package (os error -2146893042)

Any help would be appreciated, @Maxuss.

The `send_message_streaming` function does not raise an error when the maximum context length is exceeded.

Hi!

If a message exceeds the maximum allowed context length for the conversation, the standard send_message function raises an error, e.g.:

Error: BackendError { message: "This model's maximum context length is 4097 tokens. However, your messages resulted in 5000 tokens. Please reduce the length of the messages.", error_type: "invalid_request_error" }

However, when the same message is sent using the send_message_streaming function, no error is raised. Instead, the conversation becomes permanently stuck.

Is there a way to work around this issue?
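
Until the streaming path surfaces the error, one workaround is to estimate the prompt size client-side and refuse (or fall back to the non-streaming call) when it looks too large. A rough sketch; the ~4 characters per token rule is only an approximation of the real tokenizer:

// Sketch: crude client-side guard against exceeding the context window.
fn roughly_fits_context(prompt: &str, max_context_tokens: usize) -> bool {
    // ~4 characters per token is a common heuristic, not an exact count.
    prompt.chars().count() / 4 <= max_context_tokens
}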

Thanks!

error[E0433]: failed to resolve: use of undeclared type `Url`

chatgpt_rs-1.1.1/./src/client.rs:158:17

    |
158 |                 Url::from_str(self.config.api_url)
    |                 ^^^ not found in this scope
    |
help: consider importing one of these items
    |
1   | use crate::prelude::Url;
    |
1   | use reqwest::Url;
    |
1   | use url::Url;

chatgpt_rs-1.1.1/./src/client.rs:246:17

    |
246 |                 Url::from_str(self.config.api_url)
    |                 ^^^ not found in this scope
    |
help: consider importing one of these items
    |
1   | use crate::prelude::Url;
    |
1   | use reqwest::Url;
    |
1   | use url::Url;
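
The fix is the one-line import the compiler suggests, added at the top of src/client.rs; any of the suggested paths should do, for example:

use url::Url;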

Conversation function call failing: missing 'name' error

Hi,
I'm trying to get the function feature running, and it works more or less.

When I send a new message after calling a function, I get this error:
BackendError { message: "Missing parameter 'name': messages with role 'function' must have a 'name'.", error_type: "invalid_request_error" }

I've already tested the fix-function branch.

Kind regards
Philipp

/// returns a list of links for a search query on google
///
/// * search_query - the search query
#[gpt_function]
async fn search_google(search_query: String) -> Result<Value, chatgpt::err::Error> {
    let google_api = crate::google_api::GoogleApi::new(
        "xxx".to_string(),
        "xxx".to_string(),
    );
    let links = google_api.search(search_query).await.unwrap();
    println!("Links: {:?}", links);
    Ok(format!("{:?}", links).into())
}
#[tokio::test]
async fn test_gpt_function_search_google() {
    let api_key = "sk-x".to_string(); 
    let gpt_api = GptApi::new(api_key).unwrap();
    let mut conversation = gpt_api.client.new_conversation();
    conversation.add_function(search_google()).unwrap();
    conversation.always_send_functions = true;
    let response = conversation
        .send_message(
            "search google to find the best links for changing a tire",
        )
        .await
        .unwrap();
    println!("Response: {:?}", response);
    let response = conversation
        .send_message("whats the result?")
        .await
        .unwrap();
    println!("Response: {:?}", response);
}

running 1 test
Links: [LinkResult { title: "How to Change a Flat Tire", link: "https://www.bridgestonetire.com/learn/maintenance/how-to-change-a-flat-tire/", snippet: "Apr 1, 2021 ... How to Change a Flat Tire · 1. FIND A SAFE LOCATION. As soon as you realize you have a flat tire, do not abruptly brake or turn. · 2. TURN ON\u{a0}..." }, LinkResult { title: "How to Change a Flat Tire | Driving-Tests.org", link: "https://driving-tests.org/beginner-drivers/how-to-change-tires/", snippet: "How To Change Tires · 1) Pull off the road as soon as possible · 2) Turn on your hazard lights · 3) Apply the parking brake · 4) Apply wheel wedges / large\u{a0}..." }, LinkResult { title: "How to Change a Tire: Swap Your Flat Out Like a Pro", link: "https://www.wikihow.com/Change-a-Tire", snippet: "Steps · Pull over and put your hazards on. · Remove your spare tire and the jack. · Assess the issue with your vehicle. · Elevate your vehicle with a jack." }, LinkResult { title: "Stuck with a flat tire? here's How to Change a Tire in 10 steps | Miller ...", link: "https://www.millerautoplaza.com/stuck-with-a-flat-tire-heres-how-to-change-a-tire-in-10-steps/", snippet: "Sep 22, 2017 ... Stuck with a flat tire? here's How to Change a Tire in 10 steps · 1. Find a Safe Place to Pull Over · 3. Check for Materials · 4. Loosen the Lug\u{a0}..." }, LinkResult { title: "How to Change a Tire - The Home Depot", link: "https://www.homedepot.com/c/ah/how-to-change-a-tire/9ba683603be9fa5395fab908e21cabb", snippet: "Tools to Change a Tire · A manual car jack designed to raise your vehicle high enough to remove the flat tire. · A spare tire. · A lug wrench or torque wrench." }, LinkResult { title: "How to Change a Car Tire | Flat Tire - Consumer Reports", link: "https://www.consumerreports.org/cars/tire-buying-maintenance/how-to-change-a-car-tire-a2760414554/", snippet: "Aug 11, 2023 ... How to Change a Car Tire · The car needs to be on level, solid ground in order for you to safely use the jack. · Turn on your hazard lights." }, LinkResult { title: "EMSK: How to change a flat tire!! : r/everymanshouldknow", link: "https://www.reddit.com/r/everymanshouldknow/comments/1icjyx/emsk_how_to_change_a_flat_tire/", snippet: "Jul 15, 2013 ... EMSK: How to change a flat tire!! · Apply parking brake · Loosen bolts on tire before jacking up · Locate jack and lift points. · Use jack to\u{a0}..." }, LinkResult { title: "How to Change a Tire | Change a flat car tire step by step - YouTube", link: "https://www.youtube.com/watch?v=joBmbh0AGSQ", snippet: "Jan 30, 2008 ... Nothing takes the joy out of a road trip like a flat tire. Do you know how to change it? We didn't, but we've learned from Allan Stanley of\u{a0}..." }, LinkResult { title: "Changing tires :: BeamNG.drive General Discussions", link: "https://steamcommunity.com/app/284160/discussions/0/2217311444328764498/", snippet: "Jul 14, 2017 ... In most cars those are located under the 'Suspension' group. Once you find, you will see that you can further expand the 'wheels' group. On the\u{a0}..." }, LinkResult { title: "60 Percent of People Can't Change a Flat Tire - But Most Can ...", link: "https://www.nbcnews.com/business/consumer/draft-60-percent-people-can-t-change-flat-tire-most-n655501", snippet: "Sep 27, 2016 ... 60 Percent of People Can't Change a Flat Tire - But Most Can Google It." }]
Response: CompletionResponse { message_id: Some("chatcmpl-8V47LSJIYREzPSkEpnxMcN5o9yjVe"), created_timestamp: Some(1702414687), model: "gpt-4-0613", usage: TokenUsage { prompt_tokens: 90, completion_tokens: 20, total_tokens: 110 }, message_choices: [MessageChoice { message: ChatMessage { role: Assistant, content: "", function_call: Some(FunctionCall { name: "search_google", arguments: "{\n  \"search_query\": \"how to change a tire\"\n}" }) }, finish_reason: "function_call", index: 0 }] }
thread 'gpt_api::test_gpt_function_search_google' panicked at src/gpt_api.rs:149:10:
called `Result::unwrap()` on an `Err` value: BackendError { message: "Missing parameter 'name': messages with role 'function' must have a 'name'.", error_type: "invalid_request_error" }
stack backtrace:
   0: rust_begin_unwind
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/std/src/panicking.rs:645:5
   1: core::panicking::panic_fmt
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/core/src/panicking.rs:72:14
   2: core::result::unwrap_failed
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/core/src/result.rs:1649:5
   3: core::result::Result<T,E>::unwrap
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/core/src/result.rs:1073:23
   4: cc_blog_gpt::gpt_api::test_gpt_function_search_google::{{closure}}
             at ./src/gpt_api.rs:146:20
   5: <core::pin::Pin<P> as core::future::future::Future>::poll
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/core/src/future/future.rs:125:9
   6: <core::pin::Pin<P> as core::future::future::Future>::poll
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/core/src/future/future.rs:125:9
   7: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}}
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/current_thread/mod.rs:665:57
   8: tokio::runtime::coop::with_budget
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/coop.rs:107:5
   9: tokio::runtime::coop::budget
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/coop.rs:73:5
  10: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/current_thread/mod.rs:665:25
  11: tokio::runtime::scheduler::current_thread::Context::enter
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/current_thread/mod.rs:410:19
  12: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/current_thread/mod.rs:664:36
  13: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}}
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/current_thread/mod.rs:743:68
  14: tokio::runtime::context::scoped::Scoped<T>::set
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/context/scoped.rs:40:9
  15: tokio::runtime::context::set_scheduler::{{closure}}
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/context.rs:176:26
  16: std::thread::local::LocalKey<T>::try_with
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/std/src/thread/local.rs:270:16
  17: std::thread::local::LocalKey<T>::with
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/std/src/thread/local.rs:246:9
  18: tokio::runtime::context::set_scheduler
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/context.rs:176:9
  19: tokio::runtime::scheduler::current_thread::CoreGuard::enter
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/current_thread/mod.rs:743:27
  20: tokio::runtime::scheduler::current_thread::CoreGuard::block_on
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/current_thread/mod.rs:652:19
  21: tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/current_thread/mod.rs:175:28
  22: tokio::runtime::context::runtime::enter_runtime
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/context/runtime.rs:65:16
  23: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/current_thread/mod.rs:167:9
  24: tokio::runtime::runtime::Runtime::block_on
             at /Users/philipp/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/runtime.rs:348:47
  25: cc_blog_gpt::gpt_api::test_gpt_function_search_google
             at ./src/gpt_api.rs:150:5
  26: cc_blog_gpt::gpt_api::test_gpt_function_search_google::{{closure}}
             at ./src/gpt_api.rs:133:43
  27: core::ops::function::FnOnce::call_once
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/core/src/ops/function.rs:250:5
  28: core::ops::function::FnOnce::call_once
             at /rustc/0e2dac8375950a12812ec65868e42b43ed214ef9/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
test gpt_api::test_gpt_function_search_google ... FAILED
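
For context on the error above: the OpenAI chat API requires that a message with role "function" carries a name field identifying which function produced the result. A minimal sketch of a compliant message built with serde_json (illustrative of the wire format, not of this crate's internals):

use serde_json::json;

fn main() {
    // The "name" field is mandatory for role = "function" messages.
    let function_result = json!({
        "role": "function",
        "name": "search_google",
        "content": "[...search results...]"
    });
    println!("{function_result}");
}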

No response in function calling example

% cargo run --example function_calls sk-xxx
    Finished dev [unoptimized + debuginfo] target(s) in 0.05s
     Running `target/debug/examples/function_calls sk-xxx`
Incoming message for maxus: This is a test message.
Response:

Is this the intended behavior?

edit: Great, I leaked my API key 😁, didn't notice it repeats in the output.

1.2.3 introduced a regression on streams.

When processing streams asynchronously with 1.2.3 I often end up in a panic, whereas with 1.1.3 everything works fine:

use std::io::{stdout, Write};
use std::sync::{Arc, Mutex};

use chatgpt::prelude::*;
use futures::StreamExt;

async fn process_message_stream(client: ChatGPT, prompt: &str) -> chatgpt::Result<String> {
	let stream = client.send_message_streaming(prompt).await?;

	// Wrapping the buffer in an Arc and Mutex
	let buffer = Arc::new(Mutex::new(Vec::<String>::new()));

	// Iterating over stream contents
	stream
		.for_each({
			// Cloning the Arc to be moved into the outer move closure
			let buffer = Arc::clone(&buffer);
			move |each| {
				// Cloning the Arc again to be moved into the async block
				let buffer_clone = Arc::clone(&buffer);
				async move {
					match each {
						ResponseChunk::Content { delta, response_index: _ } => {
							// Printing part of response without the newline
							print!("{delta}");
							// print!(".");
							// Manually flushing the standard output, as `print` macro does not do
							// that
							stdout().lock().flush().unwrap();
							// Appending delta to buffer
							let mut locked_buffer = buffer_clone.lock().unwrap();
							locked_buffer.push(delta);
						},
						_ => {},
					}
				}
			}
		})
		.await;

	// Use buffer outside of for_each, by locking and dereferencing
	let final_buffer = buffer.lock().unwrap();

	Ok(final_buffer.join(""))
}
Stream closed abruptly!: Transport(reqwest::Error { kind: Body, source: TimedOut })
stack backtrace:
   0: rust_begin_unwind
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panicking.rs:645:5
   1: core::panicking::panic_fmt
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/panicking.rs:72:14
   2: core::result::unwrap_failed
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/result.rs:1653:5
   3: core::result::Result<T,E>::expect
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/result.rs:1034:23
   4: chatgpt::client::ChatGPT::process_streaming_response::{{closure}}::{{closure}}
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/chatgpt_rs-1.2.3/./src/client.rs:301:34
   5: <T as futures_util::fns::FnMut1<A>>::call_mut
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-util-0.3.28/src/fns.rs:28:9
   6: <futures_util::stream::stream::map::Map<St,F> as futures_core::stream::Stream>::poll_next::{{closure}}
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-util-0.3.28/src/stream/stream/map.rs:59:33
   7: core::option::Option<T>::map
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/option.rs:1072:29
   8: <futures_util::stream::stream::map::Map<St,F> as futures_core::stream::Stream>::poll_next
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-util-0.3.28/src/stream/stream/map.rs:59:21
   9: <futures_util::stream::stream::for_each::ForEach<St,Fut,F> as core::future::future::Future>::poll
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-util-0.3.28/src/stream/stream/for_each.rs:70:47
  10: tldw::summarizer::process_message_stream::{{closure}}
             at ./src/summarizer.rs:67:4
  11: tldw::summarizer::process_short_input::{{closure}}
             at ./src/summarizer.rs:109:59
  12: tldw::main::{{closure}}
             at ./src/main.rs:76:69
  13: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/park.rs:282:63
  14: tokio::runtime::coop::with_budget
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/coop.rs:107:5
  15: tokio::runtime::coop::budget
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/coop.rs:73:5
  16: tokio::runtime::park::CachedParkThread::block_on
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/park.rs:282:31
  17: tokio::runtime::context::blocking::BlockingRegionGuard::block_on
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context/blocking.rs:66:9
  18: tokio::runtime::scheduler::multi_thread::MultiThread::block_on::{{closure}}
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/mod.rs:87:13
  19: tokio::runtime::context::runtime::enter_runtime
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/context/runtime.rs:65:16
  20: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/scheduler/multi_thread/mod.rs:86:9
  21: tokio::runtime::runtime::Runtime::block_on
             at /home/fm/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/runtime.rs:350:45
  22: tldw::main
             at ./src/main.rs:88:2
  23: core::ops::function::FnOnce::call_once
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
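Until the regression is fixed, one way to contain the damage (a sketch, not a fix for the expect inside the library) is to run the stream processing in its own tokio task, so a panic surfaces as a JoinError instead of taking down the caller:

// Sketch: isolate the panic-prone stream work in a spawned task.
let prompt = prompt.to_string();
let handle = tokio::spawn(async move { process_message_stream(client, &prompt).await });
match handle.await {
    Ok(Ok(text)) => println!("{text}"),
    Ok(Err(e)) => eprintln!("chatgpt error: {e}"),
    Err(join_error) => eprintln!("stream task panicked: {join_error}"),
}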

`?` couldn't convert the error to `RequestError`

When I try the basic example from the README, I get the following error:

error[E0277]: `?` couldn't convert the error to `RequestError`
  --> src\main.rs:45:58
   |
45 |                     chatgpt_call::ask_chatgpt(text).await?;
   |                                                          ^ the trait `std::convert::From<chatgpt::err::Error>` is not implemented for `RequestError`
   |
   = note: the question mark operation (`?`) implicitly performs a conversion on the error value using the `From` trait
   = help: the following other types implement trait `std::convert::From<T>`:
             <RequestError as std::convert::From<ApiError>>
             <RequestError as std::convert::From<DownloadError>>
             <RequestError as std::convert::From<reqwest::error::Error>>
             <RequestError as std::convert::From<std::io::Error>>
             <RequestError as std::convert::From<teloxide_core::serde_multipart::error::Error>>
   = note: required for `std::result::Result<(), RequestError>` to implement `FromResidual<std::result::Result<Infallible, chatgpt::err::Error>>`

Do you know what's wrong?
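
A sketch of one way around the mismatch: convert the chatgpt error into a std::io::Error, for which teloxide's RequestError does have a From impl (see the compiler note above), before applying ?:

// Sketch: route chatgpt's error through io::Error so `?` can convert it.
chatgpt_call::ask_chatgpt(text)
    .await
    .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))?;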

Function API support

So functions were recently announced, along with some other new API stuff. I'll see how this can be implemented into the current API.

When the streams feature of chatgpt_rs is enabled, the following error is reported: Url not declared

I fixed this bug (PR #12); my colleague had enabled the streaming call. Before using the streams feature, you must first add use futures::stream::{self, StreamExt}; to the code. The updated demo is as follows:

use chatgpt::prelude::*;
use chatgpt::types::ResponseChunk;
use std::env::args;
use std::io::{stdout, Write};

use futures::stream::{self, StreamExt};

#[tokio::main]
async fn main() -> Result<()> {
    // Getting the API key here
    let key = args().nth(1).unwrap();

    // Creating a new ChatGPT client.
    // Note that it requires an API key, and uses
    // tokens from your OpenAI API account balance.
    let client = ChatGPT::new(key)?;

    // Sending a message and getting the completion
    let mut stream = client
        .send_message_streaming("Could you name me a few popular Rust backend server frameworks?")
        .await?;

    while let Some(each) = stream.next().await {
        match each {
            ResponseChunk::Content { delta, response_index: _ } => {
                print!("{}", delta);
                stdout().lock().flush().unwrap();
            }
            _ => {}
        }
    }

    Ok(())
}

[screenshots of the run and the streamed results omitted]

An error (e.g. an invalid API key) is not returned when using streams

Hi Maksim, thanks for a wonderful package.

I tried testing some unhappy paths, e.g. entering an invalid API key.

On the basic example, it returns an error saying the key is invalid.
But on the stream example, it doesn't return anything.

Probably this is due to the nature of stream functions in Rust.

My understanding of Rust is still limited, so I have no idea how to fix the issue myself.

Could you give some pointers on how to resolve this?
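
Until the stream reports the failure itself, a workaround sketch: validate the key with a regular, non-streaming call first, so an invalid key surfaces as an ordinary Err before any streaming starts. It costs one tiny completion, but turns the silent failure into a normal error:

// Sketch: fail fast on a bad key before using the streaming API.
let client = ChatGPT::new(key)?;
if let Err(e) = client.send_message("ping").await {
    eprintln!("API key check failed: {e}");
    return Ok(());
}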

Add support for old models

EDIT: This was a copy-paste suggestion for how the old models could be implemented quickly, but it was not user friendly. I opened a pull request with a better attempt: #9

How to build?

Hi, I'd like to create a PR, but I don't know how to build locally, and there is no CI I can look at. I tried:

❯ cargo build
error: Package `chatgpt_rs v1.2.3 (/home/felix/projects/chatgpt_rs)` does not have feature `schemars`.
It has an optional dependency with that name, but that dependency uses the "dep:" syntax in the features table,
so it does not have an implicit feature with that name.

What am I doing wrong?
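
For what it's worth, that Cargo error usually means an entry in the [features] table names the optional dependency directly instead of using the dep: prefix. A sketch of the fix, assuming the crate's features table wires schemars into a functions feature (the actual wiring may differ):

[features]
# was: functions = ["schemars"]
functions = ["dep:schemars"]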

Support for Dall-E-2?

Hi, this is off topic, but are there any plans to support the DALL-E 2 API? It would be great if I could also stream image answers using this one package.

Are the new models supported?

I am looking at using this crate to build a ChatGPT app with custom login, to share ChatGPT Plus with friends with authorization. Can I chat with GPT-4 or any other models, or is this planned to be supported?
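
If your account has access to GPT-4, model selection is a configuration concern. A sketch assuming this crate's builder-style model configuration (the builder and engine names are assumptions, not verified against your version):

use chatgpt::prelude::*;

// Sketch: select GPT-4 via the model configuration.
let config = ModelConfigurationBuilder::default()
    .engine(ChatGPTEngine::Gpt4)
    .build()
    .unwrap();
let client = ChatGPT::new_with_config(key, config)?;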

How to use the functions feature within a continuous conversation

Thank you for developing this powerful lib!
It always works well, but when I try to use the functions feature, I'm in trouble.
I want to implement a continuous conversation with the functions feature, for instance:

  • ask ChatGPT what's the time
  • ChatGPT calls my function to get the current time
  • then ChatGPT replies to me with the time that my function returns

Just like the example: https://openai.com/blog/function-calling-and-other-api-updates

Here is my test code:

  // build conversation and send to openAI
  conversation.add_function(get_current_time()).unwrap();
  let res = conversation
      .send_message_functions(ask)  // ask is "what's the time now?"
      .await
      .unwrap()
      .message_choices;

And get_current_time:

/// get current time
///
/// * time - the time that user asks
#[gpt_function]
async fn get_current_time(time: String) -> Value {
    println!("AI uses param: {time}");
    // Return the Value directly; wrapping it in Ok(...) would not
    // type-check against the declared `Value` return type.
    json!({
        "time": "10:30"
    })
}

My function (get_current_time) was called successfully, but I couldn't get the expected reply about the current time.

The returned message_choices has length 1 and its content is an empty string; calling message().content also returns an empty string.

The history looks like this: [screenshot omitted]

Maybe it needs this? (I guess...) [screenshot omitted]

And when I called send_message_functions again, an error occurred: [screenshot omitted]

I tried again and again but wasn't able to solve it; I hope to get some help.

Retention of questions and answers across the lifecycle

Question.

I'm very grateful for the library you provided.

We are currently trying to create a generic WebAPI server.
We envision an endpoint that receives API requests, and we would like to call https://api.openai.com/v1/chat/completions once per HTTP POST.

I checked the examples on GitHub and found that they ask two questions within one conversation, and the conversation is terminated at the end of the lifecycle.
However, a typical WebAPI server releases its state after each request, so the conversation does not continue.

Could you please advise how to ask a new question while continuing from the previous one?
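
One generic pattern, independent of any persistence helpers this crate may offer (a sketch under that assumption): keep each Conversation alive in shared server state between requests, keyed by a session id, and look it up per POST.

use std::collections::HashMap;
use std::sync::Arc;

use chatgpt::prelude::*;
use tokio::sync::Mutex;

// Request-spanning session store: one Conversation per session id.
type Sessions = Arc<Mutex<HashMap<String, Conversation>>>;

async fn handle_post(
    sessions: Sessions,
    client: &ChatGPT,
    session_id: String,
    question: String,
) -> chatgpt::Result<String> {
    let mut map = sessions.lock().await;
    // Reuse the existing conversation, or start a new one for this session.
    let conversation = map
        .entry(session_id)
        .or_insert_with(|| client.new_conversation());
    let response = conversation.send_message(question).await?;
    Ok(response.message().content.clone())
}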

function_calls example fails with default `max_tokens` configuration

👋🏻 Thanks for this great crate. Just a heads up, the function_calls example fails with:

 Error("data did not match any variant of untagged enum ServerResponse", line: 0, column: 0) })

I traced it down to this line: https://github.com/Maxuss/chatgpt_rs/blob/master/src/client.rs#L418. The debugged response was:

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1694357978,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "send_message",
          "arguments": "{\n  \"message\": \"This is a test"
        }
      },
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 108,
    "completion_tokens": 16,
    "total_tokens": 124
  }
}

I just needed to increase my max_tokens from 16 in the default ModelConfiguration.
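
For anyone hitting the same thing, the remedy is a configuration change. A sketch assuming the crate's builder-style ModelConfiguration with a max_tokens field (names taken from this issue, not verified):

use chatgpt::prelude::*;

// Sketch: raise max_tokens so the function-call arguments are not cut off
// (the truncated reply with finish_reason "length" is what broke parsing).
let config = ModelConfigurationBuilder::default()
    .max_tokens(512u32)
    .build()
    .unwrap();
let client = ChatGPT::new_with_config(key, config)?;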

Constructor "new_with_config" does not configure timeout like "new_with_config_proxy".

By leaving out the timeout configuration for reqwest, the module can hang indefinitely. This is undesirable and a risk for production usage. The timeout pattern was already implemented for new_with_config_proxy.

See: https://github.com/Maxuss/chatgpt_rs/blob/master/src/client.rs#L55

Compare to: https://github.com/Maxuss/chatgpt_rs/blob/master/src/client.rs#L74

A pull request is pending. :)

Thanks for the library.
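
The gist of the fix, as a sketch (the concrete duration is an assumption; the pending pull request is authoritative): build the inner reqwest client with an explicit timeout, mirroring new_with_config_proxy.

use std::time::Duration;

// Sketch: give the underlying reqwest client a request timeout so a
// stalled connection errors out instead of hanging forever.
let http_client = reqwest::ClientBuilder::new()
    .timeout(Duration::from_secs(30))
    .build()?;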
