spellcraftai / openai-streams
Tools for working with OpenAI streams in Node.js and TypeScript.
Home Page: https://openai-streams.vercel.app
License: MIT License
Just wanted to ask how I can call the GPT-4 API using this; if it's not supported, are there any future plans to add that feature?
Excuse me, can this SDK also handle the conversation-context logic? Thank you.
How should the TokenParser function be used when consuming streams in Next.js Edge Functions? Can you give an example?
Thanks
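Since this comes up repeatedly, here is a minimal, self-contained sketch of consuming a token-mode stream. The mock stream below stands in for the ReadableStream of Uint8Array chunks the library returns, and `readAll` is a hypothetical helper name, not a library export:

```typescript
// Hedged sketch: consuming a token-mode stream. `mockTokenStream` stands in
// for the stream returned by OpenAI("chat", ...); `readAll` is a hypothetical
// helper, not a library export.
const encoder = new TextEncoder();
const decoder = new TextDecoder();

// Stand-in for the library's stream: a ReadableStream of Uint8Array chunks.
const mockTokenStream = new ReadableStream<Uint8Array>({
  start(controller) {
    for (const token of ["Hello", ", ", "world", "!"]) {
      controller.enqueue(encoder.encode(token));
    }
    controller.close();
  },
});

// Drain the stream with a reader, decoding each chunk as it arrives.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

readAll(mockTokenStream).then((text) => console.log(text)); // "Hello, world!"
```

In an Edge handler you would typically skip manual reading and pass the stream straight through with `return new Response(stream)`, as the route-handler snippets elsewhere in this thread do.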
I want to abort requests in some cases (e.g. when the user closes the dialog). Currently stream.cancel() does nothing.
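A hedged workaround sketch, assuming the stream is a standard WHATWG ReadableStream: once a reader has been acquired the stream is locked and stream.cancel() rejects, so cancellation has to go through the reader instead. The infinite mock stream below stands in for a live completion stream:

```typescript
// Hedged sketch: cancelling consumption of a locked WHATWG ReadableStream
// via its reader. `infinite` is a mock that enqueues a chunk on every pull.
const infinite = new ReadableStream<Uint8Array>({
  pull(controller) {
    controller.enqueue(new TextEncoder().encode("token"));
  },
});

async function stopEarly(stream: ReadableStream<Uint8Array>) {
  const reader = stream.getReader();
  const { value } = await reader.read();     // consume one chunk
  await reader.cancel("user closed dialog"); // tears down the underlying source
  const { done } = await reader.read();      // subsequent reads report done
  return { first: new TextDecoder().decode(value), done };
}

stopEarly(infinite).then((r) => console.log(r)); // { first: 'token', done: true }
```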
I believe "max_tokens" is missing for CreateChatCompletionRequest in https://github.com/gptlabs/openai-streams/blob/0892cb4e86419b55463d5fb36dbd002ce0e32832/src/lib/pinned.ts#L25
I get an error that doesn't allow me to set max_tokens when using a chat completion (great work on this library, by the way!)
Hi, I am wondering if you could add support for function calling, which was added in gpt-3.5-turbo-0613. I've been able to use raw mode to implement my own using this package, but it would be nice to have it like token mode, just for the arguments field in function_call.
https://platform.openai.com/docs/guides/gpt/function-calling
I'm having an issue where it seems some tokens are being dropped. Most frequently, it's the first token but in the attached screenshot below it looks like it might have dropped one in the middle too—the missing space between "I" and "assist".
In order to confirm it wasn't a client-side-only issue with my code, I logged each token as it was decoded via the new onParse callback. Even there the token was dropped.
In my initial search, I found this thread on the OpenAI forums describing a similar issue. If we're running into the same issue here, this may not be a problem with openai-streams at all, but instead with eventsource-parser.
I'll post more findings—and hopefully a fix too—once I look into it. I plan on investigating in the next couple days, but if anyone has an idea of what the issue is I'd love to hear any thoughts.
Hello, I created a stream with the openai-streams package, but the stream stays open even after I close the page. How can I stop the streaming before closing the webpage?
Hi,
Thanks for building this great project.
When I deployed the changes using the CJS implementation to my machine, it surfaced an error:
Error: No fetch implementation found.
After doing some reading, it seems I have to install the node-fetch library and somehow have openai-streams use it for its fetch call.
Is there a clean way of doing this?
Running into this issue when invoking an instance of OpenAIEdgeClient in the latest versions (5.20, etc.). This is on Vercel's Edge Functions.
node:fs
Module build failed: UnhandledSchemeError: Reading from "node:fs" is not handled by plugins (Unhandled scheme).
Webpack supports "data:" and "file:" URIs by default.
You may need an additional plugin to handle "node:" URIs.
Import trace for requested module:
node:fs
./node_modules/fetch-blob/from.js
./node_modules/node-fetch/src/index.js
./node_modules/openai-streams/dist/lib/backoff.js
./node_modules/openai-streams/dist/lib/openai/edge.js
./node_modules/openai-streams/dist/lib/openai/index.js
./node_modules/openai-streams/dist/lib/index.js
./node_modules/openai-streams/dist/index.js
Reverting to 5.15 fixed it.
When I use the lib, I get this error:
Error [ERR_REQUIRE_ESM]: require() of ES Module /Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/openai-streams/dist/lib/openai/node.js from /Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/openai-streams/node.cjs not supported.
Instead change the require of node.js in /Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/openai-streams/node.cjs to a dynamic import() which is available in all CommonJS modules.
at Object.<anonymous> (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/openai-streams/node.cjs:1:18)
at openai-streams/node (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:62:18)
at __webpack_require__ (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/webpack-api-runtime.js:33:42)
at eval (webpack-internal:///(api)/./src/pages/api/chat/index.ts:7:77)
at __webpack_require__.a (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/webpack-api-runtime.js:97:13)
at eval (webpack-internal:///(api)/./src/pages/api/chat/index.ts:1:21)
at (api)/./src/pages/api/chat/index.ts (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:122:1)
at __webpack_require__ (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/webpack-api-runtime.js:33:42)
at __webpack_exec__ (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:172:39)
at /Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:173:28
at Object.<anonymous> (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:176:3)
at DevServer.runApi (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/next-server.js:650:34)
at DevServer.handleApiRequest (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/next-server.js:1181:21)
at Object.fn (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/next-server.js:1124:46)
at async Router.execute (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/router.js:315:32)
at async DevServer.runImpl (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/base-server.js:601:29)
at async DevServer.run (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/dev/next-dev-server.js:922:20)
at async DevServer.handleRequestImpl (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/base-server.js:533:20) {
digest: undefined
}
Version: 6.1.0
Hello, for the chat endpoint I seem to be getting a Uint8Array instead of delta objects as shown in the documentation. Do I just add `mode: 'tokens'` as an option? It's not shown in the README.
Hello,
I am not quite sure how to use this library in Node. When I try to read data from the stream it says stream.on is not a function. Here is my example code:
import { OpenAI } from "openai-streams";
const stream = OpenAI(
"chat",
{ model: "gpt-3.5-turbo", messages: [{ role: "user", content: "my question ..." }] },
{ apiKey: "" }
);
stream.on("data", (data) => {
console.log("DATA:", data);
});
stream.on("end", () => {
console.log("STREAM END");
});
Am I on the wrong track?
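A likely explanation, offered as a hedged sketch: the default "openai-streams" entry returns a WHATWG ReadableStream, which has no .on() method, while event-style consumption needs a Node.js Readable. If that is the cause, one option is converting the web stream with Node's Readable.fromWeb; the mock stream below stands in for what OpenAI(...) returns:

```typescript
// Hedged sketch: converting a WHATWG ReadableStream (which lacks .on()) into
// a Node.js Readable that supports "data"/"end" events. The mock web stream
// stands in for what OpenAI(...) returns from the default entry point.
import { Readable } from "node:stream";

const webStream = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("hello"));
    controller.close();
  },
});

// Readable.fromWeb is available in Node >= 17; the cast papers over the type
// mismatch between the global and node:stream/web ReadableStream types.
const nodeStream = Readable.fromWeb(webStream as any);

nodeStream.on("data", (chunk: Buffer) => console.log("DATA:", chunk.toString()));
nodeStream.on("end", () => console.log("STREAM END"));
```

Alternatively, importing from "openai-streams/node" may already give you an event-capable Node stream; checking which entry point you are actually on is the first thing to verify.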
❯ node --loader tsx
(node:64749) ExperimentalWarning: Custom ESM Loaders is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
Welcome to Node.js v19.4.0.
Type ".help" for more information.
> import('openai-streams/node')
Promise {
<pending>,
[Symbol(async_id_symbol)]: 109,
[Symbol(trigger_async_id_symbol)]: 89
}
> Uncaught:
Error: Cannot find module '/Users/alec/my-project/node_modules/.pnpm/[email protected]/node_modules/yield-stream/dist/index.cjs'
at createEsmNotFoundErr (node:internal/modules/cjs/loader:1075:15)
at finalizeEsmResolution (node:internal/modules/cjs/loader:1068:15)
at resolveExports (node:internal/modules/cjs/loader:551:14)
at Module._findPath (node:internal/modules/cjs/loader:620:31)
at Module._resolveFilename (node:internal/modules/cjs/loader:1039:27)
at u.default._resolveFilename (/Users/alec/my-project/node_modules/.pnpm/@[email protected]/node_modules/@esbuild-kit/cjs-loader/dist/index.js:1:1519)
at Module._load (node:internal/modules/cjs/loader:898:27)
at Module.require (node:internal/modules/cjs/loader:1120:19)
at require (node:internal/modules/helpers:112:18) {
code: 'MODULE_NOT_FOUND',
path: '/Users/alec/my-project/node_modules/.pnpm/[email protected]/node_modules/yield-stream/package.json'
}
Adding "type": "module" to the package.json of openai-streams fixes it.
Thanks for this awesome library!
It's been working well, but today I started seeing the following error when I return new Response(stream) from a Next 13 route handler:
TypeError: Response body object should not be disturbed or locked
It will always return successfully the first time, but on any subsequent attempt to invoke the route, I get the error about the response body.
I'm unfortunately not very experienced with returning a ReadableStream via Next.js, so I was hoping someone might see an obvious flaw in my Next 13 Beta (/app directory) route handler. However, I'm not doing anything different from what your docs show:
import { type NextRequest } from 'next/server'
import { OpenAI } from "openai-streams";

export const runtime = 'experimental-edge'; // Run on Edge Functions

export async function POST(request: NextRequest) {
  try {
    const stream = await OpenAI(...)
    return new Response(stream)
  } catch (error) {
    console.error(error)
  }
}
I'm invoking this route handler on the front end by using the useTextBuffer hook from your nextjs-openai library.
Check package errors: https://publint.dev/openai-streams
Because I just received a large number of release notifications, it seems that they are all for CJS compatibility.
You can use tsup or unbuild to be compatible with both CJS and ESM.
You can refer to the following articles for more details:
https://antfu.me/posts/publish-esm-and-cjs
https://antfu.me/posts/types-for-sub-modules
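Following the linked articles, a dual-format build can be sketched with a tsup config like the one below; the entry path "src/index.ts" is an assumption about the repo's layout, not something confirmed here:

```typescript
// tsup.config.ts — hedged sketch of a dual CJS/ESM build, per the articles
// linked above. The entry path "src/index.ts" is an assumption.
import { defineConfig } from "tsup";

export default defineConfig({
  entry: ["src/index.ts"], // assumed entry point
  format: ["cjs", "esm"],  // emit both module flavors
  dts: true,               // also emit type declarations
});
```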
The env var name OPENAI_API_KEY is not the one used for our project, so adding it and duplicating the OpenAI key just for this one library is very silly.
In my opinion, automatically grabbing anything from the user's env increases complexity by transferring it from the code directly into the user's mind, because it forces the user to remember this sneaky rule.
Please consider changing this. I understand not wanting to introduce a backwards-incompatible change, but perhaps you could add another parameter to the function to allow custom options, including a custom env var.
Hello, nice library!
How should the ChatParser and ChatStream functions be used?
import { OpenAI } from "openai-streams";

export default async function handler() {
  const stream = await OpenAI(
    "chat",
    {
      model: "gpt-3.5-turbo",
      messages: [
        { "role": "system", "content": "You are a helpful assistant that translates English to French." },
        { "role": "user", "content": "Translate the following English text to French: \"Hello world!\"" }
      ],
    },
  );
  ChatStream(stream, { mode: "tokens" });
}

export const config = {
  runtime: "edge"
};
Also, I don't know how to handle the case where the stream is not streaming. Can you help with a use-case example?
Sorry for the dumb question, I'm just a noob.
For consuming streams in a Next.js API Route (Node), this is what I wrote. The problem is that if there's no response from OpenAI, or if I specify a wrong model, it never seems to catch any error:
const stream = await OpenAI("chat", {
  model: "gpt-4",
  messages: [
    { "role": "system", "content": "You are a helpful assistant that translates English to French." },
    { "role": "user", "content": "Translate the following English text to French: \"Hello world!\"" }
  ],
});

stream.on("data", (chunk) => {
  sendFormattedChunk(chunk, res);
});
stream.on("end", () => {
  res.end();
});
stream.on("error", (error) => {
  res.status(500).send("Error: " + error.message);
});
const DECODER = new TextDecoder();

export function sendFormattedChunk(chunk: Uint8Array, res: NextApiResponse) {
  const decoded = DECODER.decode(chunk);
  const jsonFragments = decoded.split(/}(?=\{)/);
  for (const jsonFragment of jsonFragments) {
    try {
      const parsedJson = JSON.parse(
        jsonFragment + (jsonFragment.endsWith("}") ? "" : "}")
      );
      if (parsedJson.content) {
        res.write(parsedJson.content);
      }
    } catch (error) {
      console.log(error);
      res.end();
    }
  }
}
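A hedged alternative to the regex split above: concatenated JSON objects can be separated with a small brace-depth scanner, so nested objects and brace characters inside strings don't break the split. `extractJsonObjects` is a hypothetical helper for illustration, not a library API:

```typescript
// Hedged sketch: split a string of concatenated top-level JSON objects by
// tracking brace depth, skipping braces that appear inside string literals.
// `extractJsonObjects` is a hypothetical helper name.
function extractJsonObjects(text: string): unknown[] {
  const objects: unknown[] = [];
  let depth = 0, start = -1, inString = false, escaped = false;
  for (let i = 0; i < text.length; i++) {
    const ch = text[i];
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === "\\") escaped = true;
      else if (ch === '"') inString = false;
      continue;
    }
    if (ch === '"') inString = true;
    else if (ch === "{") { if (depth++ === 0) start = i; }
    else if (ch === "}" && --depth === 0) {
      objects.push(JSON.parse(text.slice(start, i + 1)));
    }
  }
  return objects;
}

console.log(extractJsonObjects('{"content":"Hi"}{"content":" there"}'));
// → [ { content: 'Hi' }, { content: ' there' } ]
```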
Thanks
Only named exports may use 'export type'.
export type * from "openai";
The requested module 'openai-streams/node' does not provide an export named 'OpenAI'
Stuck with this error.
Node.js 18, macOS
Trying to use in firebase functions with typescript, getting:
⬢ functions: Failed to load function definition from source: FirebaseError: Failed to load function definition from source: Failed to generate manifest from function source: Error [ERR_REQUIRE_ESM]: require() of ES Module jo_api/functions/node_modules/openai-streams/dist/index.js from jo_api/functions/lib/index.js not supported.
Instead change the require of jo_api/functions/node_modules/openai-streams/dist/index.js in jo_api/functions/lib/index.js to a dynamic import() which is available in all CommonJS modules.
This wasn't as simple as just overriding globalThis, because node-fetch returns a Node.js ReadableStream where Node would return a WHATWG ReadableStream.
See: https://stackoverflow.com/questions/57664058/response-body-getreader-is-not-a-function
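A related hedged sketch: Node itself can bridge the two stream flavors. Readable.toWeb (Node >= 17) turns a Node.js Readable, like the body node-fetch produces, into a WHATWG ReadableStream that exposes getReader(); the mock Readable below stands in for an actual fetch body:

```typescript
// Hedged sketch: bridging a Node.js Readable to a WHATWG ReadableStream so
// that .getReader() works. The mock Readable stands in for a node-fetch body.
import { Readable } from "node:stream";

const nodeBody = Readable.from([Buffer.from("chunk")]);

// Readable.toWeb is available in Node >= 17; the cast papers over the type
// mismatch between node:stream/web and global ReadableStream types.
const webBody = Readable.toWeb(nodeBody) as unknown as ReadableStream<Uint8Array>;

const reader = webBody.getReader(); // getReader is a function here
reader.read().then(({ value }) => {
  console.log(new TextDecoder().decode(value)); // "chunk"
});
```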
We should either add "type": "module" to the package.json file, or rename node.js to node.mjs.
import { CreateCompletionResponse, OpenAI } from "openai-streams";
import { yieldStream } from "yield-stream";

async function repro() {
  const stream = await OpenAI(
    "completions",
    {
      model: "text-davinci-003",
      prompt: "Repeat 'The quick brown fox jumps over the lazy dog' back to me.",
      max_tokens: 10,
      n: 5,
    },
    {
      mode: "raw",
    }
  );

  const DECODER = new TextDecoder();
  const responses: { [key: number]: string } = {};

  try {
    for await (const serialized of yieldStream(stream)) {
      const resp = (JSON.parse(DECODER.decode(serialized)) as CreateCompletionResponse)
        .choices[0];
      responses[resp.index!] = (responses[resp.index!] ?? "") + resp.text!;
      console.log(responses);
    }
  } catch (e) {
    console.error(e);
  }
}

repro();
As response index 1 populates and reaches its max, the stream throws and the rest of the completions are blocked because the stream is closed.
I am using the openai-streams library to build a chat application. We need to support a "continue generating" feature like the one in ChatGPT. Looking at the code of the library, I understand that OpenAIError is thrown when finish_reason is equal to length: https://github.com/SpellcraftAI/openai-streams/blob/canary/src/lib/streaming/streams.ts#L80.
In my code this error never reaches my catch block; the console.error line is never called. The code looks like this:
try {
  const stream = await OpenAI("chat", {
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content: DEFAULT_SYSTEM_PROMPT,
      },
      ...messagesToSend,
    ],
    max_tokens: 1000,
    temperature: DEFAULT_TEMPERATURE,
  });

  return new Response(stream);
} catch (e) {
  console.error("Error: ", e);
}
Can you guide me how to handle errors?
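To illustrate why a catch block around stream creation can stay silent, here is a self-contained sketch (not library code): an error raised while the stream is being produced only surfaces to whoever reads the stream, after OpenAI() has already resolved and the Response has been returned, so handling belongs with the stream's consumer. The "MAX_TOKENS" error below is a simulated, made-up failure:

```typescript
// Hedged sketch: a stream that errors mid-flight. Creating it throws nothing;
// the error only appears when a reader pulls from it. "MAX_TOKENS" is a
// simulated error name, not a real library value.
const failing = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("partial"));
    controller.error(new Error("MAX_TOKENS")); // simulated mid-stream failure
  },
});

async function consume(stream: ReadableStream<Uint8Array>) {
  const reader = stream.getReader();
  try {
    for (;;) {
      const { done } = await reader.read();
      if (done) break;
    }
  } catch (e) {
    // This is where a mid-stream error actually lands.
    return (e as Error).message;
  }
  return "no error";
}

consume(failing).then((msg) => console.log(msg)); // "MAX_TOKENS"
```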
The docs ought to:
Is anyone else having this issue?
I'm using openai-streams/node, and in my local environment the response streams correctly (piece by piece), but when I push to Vercel the response sends the entire stream back at once (rather than in chunks). Very odd.
I was poking around the internet to cargo-cult a direct HTTP stream (because corp-vended libraries, lmao), and guess who I see.
Error: TypeError: fetch failed
[0] at Object.fetch (node:internal/deps/undici/undici:11457:11)
[0] at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
[0] at async s (file:///C:/Users/David/OneDrive/baikeAISomeTest/beikeadminui/node_modules/openai-streams/dist/lib/backoff.js:1:103)
[0] at async Module.$ (file:///C:/Users/David/OneDrive/baikeAISomeTest/beikeadminui/node_modules/openai-streams/dist/lib/openai/edge.js:1:588)
[0] at async C:\Users\David\OneDrive\baikeAISomeTest\beikeadminui\server.js:31:22 {
[0] cause: ConnectTimeoutError: Connect Timeout Error
[0] at onConnectTimeout (node:internal/deps/undici/undici:8422:28)
[0] at node:internal/deps/undici/undici:8380:50
[0] at Immediate._onImmediate (node:internal/deps/undici/undici:8409:37)
[0] at process.processImmediate (node:internal/timers:476:21) {
[0] code: 'UND_ERR_CONNECT_TIMEOUT'
[0] }
[0] }
Thanks for your work on this awesome lib.
I'm not that familiar with the streaming API; I just tested a few cases, and in one case the onDone function randomly stopped. I guess controller.close() will terminate the whole process, even while onDone is still running?
https://github.com/SpellcraftAI/openai-streams/blob/07182c7313/src/lib/streaming/streams.ts#L57
So, should we invoke onDone before the controller.close()?