
spellcraftai / openai-streams

287 stars · 25 forks · 351 KB

Tools for working with OpenAI streams in Node.js and TypeScript.

Home Page: https://openai-streams.vercel.app

License: MIT License

Languages: TypeScript 78.94%, CSS 19.81%, JavaScript 1.25%

Topics: openai, streams

openai-streams's People

Contributors

calum-bird, ctjlewis, glips, mrrio, neo773, notunderctrl, transmissions11


openai-streams's Issues

gpt4 access

Just wanted to ask how I can call the GPT-4 API using this library. If it's not supported yet, are there any plans to add that feature?
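
For reference, a minimal sketch of what the call might look like, assuming your account has GPT-4 API access and that the library forwards the model name to the OpenAI API unchanged (the call shape matches the chat examples in other issues here):

import { OpenAI } from "openai-streams";

// Assumption: "gpt-4" is forwarded as-is; requires GPT-4 API access.
const stream = await OpenAI("chat", {
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
});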

Function TokenParser

How should the TokenParser function be used when consuming streams in Next.js Edge Functions? Can you give an example? Thanks.
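
A minimal sketch of an Edge function that streams the response straight through, based on the route-handler pattern shown in other issues in this list; TokenParser's exact signature isn't documented in this thread, but in the default token mode the stream can simply be returned as the Response body:

import { OpenAI } from "openai-streams";

export default async function handler() {
  const stream = await OpenAI("chat", {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Say hello." }],
  });

  // The ReadableStream is sent to the client chunk-by-chunk.
  return new Response(stream);
}

export const config = {
  runtime: "edge"
};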

Abort streams

I want to abort requests in some cases (e.g. when the user closes a dialog). Currently, stream.cancel() does nothing.
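
Until cancel() is fixed, one generic client-side workaround sketch, assuming you read the response body yourself through a WHATWG reader (consume() and abort() are hypothetical helpers, not part of this library):

const decoder = new TextDecoder();
let aborted = false;
let reader: ReadableStreamDefaultReader<Uint8Array> | undefined;

// Hypothetical consumer; `res` is the fetch() Response carrying the stream.
async function consume(res: Response) {
  reader = res.body!.getReader();
  while (!aborted) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value)); // replace with your rendering code
  }
}

// Call when the user closes the dialog: stop reading and release the stream.
function abort() {
  aborted = true;
  void reader?.cancel();
}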

Some tokens are being dropped

I'm having an issue where it seems some tokens are being dropped. Most frequently it's the first token, but in the attached screenshot below it looks like one may have been dropped in the middle too: note the missing space between "I" and "assist".

[screenshot: streamed chat response with dropped tokens, 2023-05-22]

In order to confirm it wasn't a client-side-only issue with my code, I logged each token as it was decoded via the new onParse callback. Even there, the token was dropped.

In my initial search, I found this thread on the OpenAI forums describing a similar issue. If we're running into the same issue here, this may not be a problem with openai-streams at all, but instead with eventsource-parser.

I'll post more findings—and hopefully a fix too—once I look into it. I plan on investigating in the next couple days, but if anyone has an idea of what the issue is I'd love to hear any thoughts.

Is it possible to cancel a stream?

Hello, I created a stream with the openai-streams package, but the stream stays open even after I close the page. How can I stop the streaming before closing the webpage?

No fetch implementation found

Hi,

Thanks for building this great project.

When I deployed the changes using the CJS implementation to my machine, it surfaced an error

Error: No fetch implementation found.

After doing some reading, it seems like I have to install the node-fetch library and somehow wire it into openai-streams' fetch call.

Is there a clean way for doing this?
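
One common approach sketch, assuming openai-streams picks up globalThis.fetch when no bundled implementation is found: install node-fetch and assign it to the global before importing the library.

// polyfill-fetch.ts - import this before openai-streams.
import fetch, { Headers, Request, Response } from "node-fetch";

if (!globalThis.fetch) {
  // node-fetch's types differ slightly from the DOM's, hence the casts.
  globalThis.fetch = fetch as unknown as typeof globalThis.fetch;
  globalThis.Headers = Headers as unknown as typeof globalThis.Headers;
  globalThis.Request = Request as unknown as typeof globalThis.Request;
  globalThis.Response = Response as unknown as typeof globalThis.Response;
}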

Module build failed: UnhandledSchemeError: Reading from "node:fs" is not handled by plugins (Unhandled scheme).

Running into this issue when invoking an instance of OpenAIEdgeClient in the latest versions (5.20, etc.). This is on Vercel's Edge Functions.

node:fs
Module build failed: UnhandledSchemeError: Reading from "node:fs" is not handled by plugins (Unhandled scheme).
Webpack supports "data:" and "file:" URIs by default.
You may need an additional plugin to handle "node:" URIs.
Import trace for requested module:
node:fs
./node_modules/fetch-blob/from.js
./node_modules/node-fetch/src/index.js
./node_modules/openai-streams/dist/lib/backoff.js
./node_modules/openai-streams/dist/lib/openai/edge.js
./node_modules/openai-streams/dist/lib/openai/index.js
./node_modules/openai-streams/dist/lib/index.js
./node_modules/openai-streams/dist/index.js

Reverting to 5.15 fixed it.
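
Until the regression is fixed, pinning the last working release exactly in package.json is one stopgap (assuming 5.15.0 is the published patch version of the "5.15" mentioned above):

{
  "dependencies": {
    "openai-streams": "5.15.0"
  }
}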

Error [ERR_REQUIRE_ESM]: require() of ES Module

When I use the lib, I get this error:

Error [ERR_REQUIRE_ESM]: require() of ES Module /Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/openai-streams/dist/lib/openai/node.js from /Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/openai-streams/node.cjs not supported.
Instead change the require of node.js in /Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/openai-streams/node.cjs to a dynamic import() which is available in all CommonJS modules.
    at Object.<anonymous> (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/openai-streams/node.cjs:1:18)
    at openai-streams/node (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:62:18)
    at __webpack_require__ (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/webpack-api-runtime.js:33:42)
    at eval (webpack-internal:///(api)/./src/pages/api/chat/index.ts:7:77)
    at __webpack_require__.a (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/webpack-api-runtime.js:97:13)
    at eval (webpack-internal:///(api)/./src/pages/api/chat/index.ts:1:21)
    at (api)/./src/pages/api/chat/index.ts (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:122:1)
    at __webpack_require__ (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/webpack-api-runtime.js:33:42)
    at __webpack_exec__ (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:172:39)
    at /Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:173:28
    at Object.<anonymous> (/Users/anonyme-user/Documents/Anonyme/anonyme-site/.next/server/pages/api/chat.js:176:3)
    at DevServer.runApi (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/next-server.js:650:34)
    at DevServer.handleApiRequest (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/next-server.js:1181:21)
    at Object.fn (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/next-server.js:1124:46)
    at async Router.execute (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/router.js:315:32)
    at async DevServer.runImpl (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/base-server.js:601:29)
    at async DevServer.run (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/dev/next-dev-server.js:922:20)
    at async DevServer.handleRequestImpl (/Users/anonyme-user/Documents/Anonyme/anonyme-site/node_modules/next/dist/server/base-server.js:533:20) {
  digest: undefined
}

Version: 6.1.0
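
The error message itself points at the fix: in a CommonJS module, load the package with a dynamic import() instead of require(). A minimal sketch, assuming the node entry point exports the same OpenAI function used in other examples here:

// CommonJS context (e.g. an API route compiled to CJS).
async function createStream() {
  // Dynamic import() is allowed in CJS and avoids ERR_REQUIRE_ESM.
  const { OpenAI } = await import("openai-streams/node");
  return OpenAI("chat", {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello!" }],
  });
}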

How to set mode = 'tokens'?

Hello, for the chat endpoint I seem to be getting a Uint8Array instead of the delta objects shown in the documentation. Do I just add `mode: 'tokens'` as an option? It's not shown in the README.
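
Judging by the raw-mode example in another issue below, mode belongs in the third options argument rather than the request body; a sketch, assuming "tokens" is accepted the same way "raw" is:

import { OpenAI } from "openai-streams";

const stream = await OpenAI(
  "chat",
  {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello!" }],
  },
  // Options (apiKey, mode, ...) go in the third argument.
  { mode: "tokens" }
);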

How to use in Node?

Hello,

I am not quite sure how to use this library in Node. When I try to read data from the stream it says stream.on is not a function. Here is my example code:

import { OpenAI } from "openai-streams";

const stream = OpenAI(
  "chat",
  { model: "gpt-3.5-turbo", messages: [{ role: "user", content: "my question ..." }] },
  { apiKey: "" }
);

stream.on("data", (data) => {
  console.log("DATA:", data);
});

stream.on("end", () => {
  console.log("STREAM END");
});

Am I on the wrong track?
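
Two things stand out. First, OpenAI() returns a promise, so it needs to be awaited. Second, the resolved stream is a WHATWG ReadableStream, not a Node EventEmitter, so it has no .on() method. A sketch of consuming it with the yield-stream helper that appears in other issues here, using async iteration instead of event listeners:

import { OpenAI } from "openai-streams";
import { yieldStream } from "yield-stream";

const stream = await OpenAI(
  "chat",
  { model: "gpt-3.5-turbo", messages: [{ role: "user", content: "my question ..." }] },
  { apiKey: "" }
);

const decoder = new TextDecoder();

// Each chunk is a Uint8Array; decode it to text as it arrives.
for await (const chunk of yieldStream(stream)) {
  console.log("DATA:", decoder.decode(chunk));
}

console.log("STREAM END");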

This package fails with `node --loader tsx`

❯ node --loader tsx
(node:64749) ExperimentalWarning: Custom ESM Loaders is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
Welcome to Node.js v19.4.0.
Type ".help" for more information.
> import('openai-streams/node')
Promise {
  <pending>,
  [Symbol(async_id_symbol)]: 109,
  [Symbol(trigger_async_id_symbol)]: 89
}
> Uncaught:
Error: Cannot find module '/Users/alec/my-project/node_modules/.pnpm/[email protected]/node_modules/yield-stream/dist/index.cjs'
    at createEsmNotFoundErr (node:internal/modules/cjs/loader:1075:15)
    at finalizeEsmResolution (node:internal/modules/cjs/loader:1068:15)
    at resolveExports (node:internal/modules/cjs/loader:551:14)
    at Module._findPath (node:internal/modules/cjs/loader:620:31)
    at Module._resolveFilename (node:internal/modules/cjs/loader:1039:27)
    at u.default._resolveFilename (/Users/alec/my-project/node_modules/.pnpm/@[email protected]/node_modules/@esbuild-kit/cjs-loader/dist/index.js:1:1519)
    at Module._load (node:internal/modules/cjs/loader:898:27)
    at Module.require (node:internal/modules/cjs/loader:1120:19)
    at require (node:internal/modules/helpers:112:18) {
  code: 'MODULE_NOT_FOUND',
  path: '/Users/alec/my-project/node_modules/.pnpm/[email protected]/node_modules/yield-stream/package.json'
}

Adding "type": "module" to the package.json of openai-streams fixes it.

TypeError: Response body object should not be disturbed or locked

Thanks for this awesome library!

It's been working well, but today I started seeing the following error when I return new Response(stream) from a Next 13 route handler:

TypeError: Response body object should not be disturbed or locked

It will always return successfully the first time, but on any subsequent attempt to invoke the route, I get the error about the response body.

I'm unfortunately not very experienced with returning a ReadableStream from Next.js, so I was hoping someone might spot an obvious flaw in my Next 13 Beta (/app directory) route handler. That said, I'm not doing anything different from what your docs show:

import { type NextRequest } from 'next/server'
import { OpenAI } from "openai-streams";

export const runtime = 'experimental-edge'; // Run on Edge Functions

export async function POST(request: NextRequest) {
  try {
    const stream = await OpenAI(...)
    return new Response(stream)
  } catch (error) {
    console.error(error)
  }
}

I'm invoking this route handler on the front-end by using the useTextBuffer hook from your nextjs-openai library.

use of custom env var name

The env var name OPENAI_API_KEY is not the one our project uses for the API key, so duplicating the OpenAI key under that name just for this one library is very silly.

In my opinion, automatically grabbing values from the user's env increases complexity by moving it from the code directly into the user's mind, because it forces the user to remember this sneaky rule.

Please consider changing this. I understand not wanting to introduce a backwards-incompatible change, but perhaps you could add another param to the function for custom options, including a custom env var name.
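
In the meantime, a sketch of sidestepping the env lookup entirely by passing the key through the existing apiKey option, read from whatever variable your project already uses (MY_PROJECT_OPENAI_KEY is a placeholder name):

import { OpenAI } from "openai-streams";

const stream = await OpenAI(
  "chat",
  {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello!" }],
  },
  // An explicit apiKey avoids the implicit OPENAI_API_KEY env lookup.
  { apiKey: process.env.MY_PROJECT_OPENAI_KEY! }
);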

ChatParser and ChatStream

Hello, nice library!
How should the ChatParser and ChatStream functions be used?

import { OpenAI } from "openai-streams";

export default async function handler() {
  const stream = await OpenAI(
    "chat",
    {
      model: "gpt-3.5-turbo",
      messages: [
        { "role": "system", "content": "You are a helpful assistant that translates English to French." },
        { "role": "user", "content": "Translate the following English text to French: \"Hello world!\"" }
      ],
    }
  );

  ChatStream(stream, {mode: 'tokens'})
}

export const config = {
  runtime: "edge"
};

Also, I don't know how to handle the case where the stream is not streaming. Can you help with a use-case example?
Sorry for the dumb question, but I'm a noob.

For consuming streams in a Next.js API Route (Node), this is what I wrote.
The problem is that if there's no response from OpenAI, or if I specify a wrong model, it never seems to catch any error:

 const stream = await OpenAI("chat", {
    model: "gpt-4",
     messages: [
      { "role": "system", "content": "You are a helpful assistant that translates English to French." },
      { "role": "user", "content": "Translate the following English text to French: \"Hello world!\"" }
    ],
    });

  stream.on("data", (chunk) => {
    sendFormattedChunk(chunk, res);
  });
  stream.on("end", () => {
    res.end();
  });

  stream.on("error", (error) => {
    res.status(500).send("Error: " + error.message);
  });


export function sendFormattedChunk(chunk: Uint8Array, res: NextApiResponse) {
  const decoded = DECODER.decode(chunk);
  // A chunk may contain several concatenated JSON objects; split on "}{" boundaries.
  const jsonFragments = decoded.split(/}(?=\{)/);
  for (const jsonFragment of jsonFragments) {
    try {
      // Restore the closing brace consumed by the split.
      const parsedJson = JSON.parse(
        jsonFragment + (jsonFragment.endsWith("}") ? "" : "}")
      );
      if (parsedJson.content) {
        res.write(parsedJson.content);
      }
    } catch (error) {
      console.log(error);
      res.end();
    }
  }
}

Thanks

Error [ERR_REQUIRE_ESM]: require() of ES Module

Trying to use this in Firebase Functions with TypeScript, getting:

⬢ functions: Failed to load function definition from source: FirebaseError: Failed to load function definition from source: Failed to generate manifest from function source: Error [ERR_REQUIRE_ESM]: require() of ES Module jo_api/functions/node_modules/openai-streams/dist/index.js from jo_api/functions/lib/index.js not supported.
Instead change the require of jo_api/functions/node_modules/openai-streams/dist/index.js in jo_api/functions/lib/index.js to a dynamic import() which is available in all CommonJS modules.

"Node globals cannot be used in browser"

The call stack traces back to this library.
The error occurred when running yarn dev and opening localhost:3000.
Node version: 18
OS: macOS (M1 chip)
Framework: Next.js (should be obvious, but mentioning just in case)

[screenshot: "Node globals cannot be used in browser" error overlay]

ESM Support

We should either:

  • Add "type": "module" to the package.json file, or
  • Rename node.js to node.mjs
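
For the first option, the change would be a one-line addition to the library's package.json:

{
  "type": "module"
}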

stream forcibly exits early with n > 1 and low max tokens

import { CreateCompletionResponse, OpenAI } from "openai-streams";
import { yieldStream } from "yield-stream";

async function repro() {
  const stream = await OpenAI(
    "completions",
    {
      model: "text-davinci-003",
      prompt: "Repeat 'The quick brown fox jumps over the lazy dog' back to me.",
      max_tokens: 10,
      n: 5,
    },
    {
      mode: "raw",
    }
  );

  const DECODER = new TextDecoder();

  let responses: { [key: number]: string } = {};

  try {
    for await (const serialized of yieldStream(stream)) {
      const resp = (JSON.parse(DECODER.decode(serialized)) as CreateCompletionResponse)
        .choices[0];

      responses[resp.index!] = (responses[resp.index!] ?? "") + resp.text!;

      console.log(responses);
    }
  } catch (e) {
    console.error(e);
  }
}

repro();

[screenshot: console output showing the stream throwing once completion index 1 reaches max_tokens]

As response index 1 populates and reaches its max, the stream throws and the rest of the completions are blocked because the stream is closed.

How to support `continue generating` response

I am using the openai-streams library to build a chat application. We need to support a "continue generating" feature like the one in ChatGPT. Looking at the library code, I understand that an OpenAIError is thrown when finish_reason equals "length": https://github.com/SpellcraftAI/openai-streams/blob/canary/src/lib/streaming/streams.ts#L80.

In my code, I never get a handle on this error in the catch block; the console.error line is never called. The code looks like this:

try {
  const stream = await OpenAI("chat", {
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content: DEFAULT_SYSTEM_PROMPT,
      },
      ...messagesToSend,
    ],
    max_tokens: 1000,
    temperature: DEFAULT_TEMPERATURE,
  });

  return new Response(stream);

} catch (e) {
  console.error("Error: ", e)
}

Can you guide me on how to handle these errors?
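
One caveat worth noting: new Response(stream) hands the stream to the client before it has been fully read, so an error thrown mid-stream occurs after the try/catch has already exited. A hedged workaround sketch, re-wrapping the stream with the yield-stream helper used in other issues here so mid-read errors can be caught server-side:

import { OpenAI } from "openai-streams";
import { yieldStream } from "yield-stream";

export default async function handler() {
  const stream = await OpenAI("chat", {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Hello!" }],
    max_tokens: 1000,
  });

  // Read the upstream stream inside this wrapper so errors thrown
  // mid-stream (e.g. when finish_reason === "length") are catchable here.
  const guarded = new ReadableStream<Uint8Array>({
    async start(controller) {
      try {
        for await (const chunk of yieldStream(stream)) {
          controller.enqueue(chunk);
        }
      } catch (e) {
        console.error("Stream error:", e);
      } finally {
        controller.close();
      }
    },
  });

  return new Response(guarded);
}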

incomplete documentation

The docs ought to:

  • mention how to import the library (see the sketch below)
  • clarify that the OpenAI func being used in the docs is something that was imported from the library
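
For reference, the imports as they appear in working examples elsewhere in these issues:

// Edge / browser entry point:
import { OpenAI } from "openai-streams";

// Node-specific entry point (referenced in other issues here):
// import { OpenAI } from "openai-streams/node";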

openai-streams/node not working on Vercel

Is anyone else having this issue?

I'm using openai-streams/node, and in my local environment the response streams correctly (piece by piece), but when I push to Vercel the response sends the entire stream back at once rather than in chunks. Very odd.
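
Not a confirmed fix, but one thing worth checking: Vercel's Node serverless runtime may buffer the whole response body, while the Edge runtime streams it chunk-by-chunk. A sketch of the Edge config used in other issues here (switching runtimes would also mean using the Edge entry point rather than openai-streams/node):

export const config = {
  runtime: "edge"
};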

lol

I was poking around the internet trying to piece together a direct HTTP stream myself (to avoid corp-vendored libraries), and guess whose library I ran into. lmao

Error: TypeError: fetch failed

Error: TypeError: fetch failed
[0] at Object.fetch (node:internal/deps/undici/undici:11457:11)
[0] at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
[0] at async s (file:///C:/Users/David/OneDrive/baikeAISomeTest/beikeadminui/node_modules/openai-streams/dist/lib/backoff.js:1:103)
[0] at async Module.$ (file:///C:/Users/David/OneDrive/baikeAISomeTest/beikeadminui/node_modules/openai-streams/dist/lib/openai/edge.js:1:588)
[0] at async C:\Users\David\OneDrive\baikeAISomeTest\beikeadminui\server.js:31:22 {
[0] cause: ConnectTimeoutError: Connect Timeout Error
[0] at onConnectTimeout (node:internal/deps/undici/undici:8422:28)
[0] at node:internal/deps/undici/undici:8380:50
[0] at Immediate._onImmediate (node:internal/deps/undici/undici:8409:37)
[0] at process.processImmediate (node:internal/timers:476:21) {
[0] code: 'UND_ERR_CONNECT_TIMEOUT'
[0] }
[0] }
