openai-edge's Introduction

OpenAI Edge

A TypeScript module for querying OpenAI's API using fetch (a standard Web API) instead of axios. This is a drop-in replacement for the official openai module (which has axios as a dependency).

As well as reducing the bundle size, removing the dependency means we can query OpenAI from edge environments. Edge functions such as Next.js Edge API Routes are very fast and, unlike lambda functions, allow streaming data to the client.

The latest version of this module has feature parity with the official v3.3.0.

Update July 2023: The official openai library will use fetch in v4, hopefully making openai-edge redundant. You can try it in beta now, more info here: openai/openai-node#182

Installation

yarn add openai-edge

or

npm install openai-edge

Responses

Every method returns a promise resolving to the standard fetch response i.e. Promise<Response>. Since fetch doesn't have built-in support for types in its response data, openai-edge includes an export ResponseTypes which you can use to assert the correct type on the JSON response:

import { Configuration, OpenAIApi, ResponseTypes } from "openai-edge"

const configuration = new Configuration({
  apiKey: "YOUR-API-KEY",
})
const openai = new OpenAIApi(configuration)

const response = await openai.createImage({
  prompt: "A cute baby sea otter",
  size: "512x512",
  response_format: "url",
})

const data = (await response.json()) as ResponseTypes["createImage"]

const url = data.data?.[0]?.url

console.log({ url })

With Azure

To use with Azure OpenAI Service you'll need to include an api-key header and an api-version query parameter:

const config = new Configuration({
  apiKey: AZURE_OPENAI_API_KEY,
  baseOptions: {
    headers: {
      "api-key": AZURE_OPENAI_API_KEY,
    },
  },
  basePath: `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME`,
  defaultQueryParams: new URLSearchParams({
    "api-version": AZURE_OPENAI_API_VERSION,
  }),
})

Without global fetch

This module has zero dependencies and it expects fetch to be in the global namespace (as it is in web, edge and modern Node environments). If you're running in an environment without a global fetch defined e.g. an older version of Node.js, please pass fetch when creating your instance:

import fetch from "node-fetch"

const openai = new OpenAIApi(configuration, undefined, fetch)

Without global FormData

This module also expects to be in an environment where FormData is defined. If you're running in Node.js, that means using v18 or later.
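
If you need to support an older Node.js runtime, one possible workaround (a sketch, assuming the third-party formdata-node package; any spec-compliant FormData implementation should work) is to polyfill the global before importing openai-edge:

import { FormData } from "formdata-node"

// Hypothetical polyfill: assign a spec-compliant FormData implementation
// to the global scope, where openai-edge expects to find it. Must run
// before openai-edge is imported.
const g = globalThis as any
g.FormData = FormData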

Available methods

  • cancelFineTune
  • createAnswer
  • createChatCompletion (including support for functions; see the sketch after this list)
  • createClassification
  • createCompletion
  • createEdit
  • createEmbedding
  • createFile
  • createFineTune
  • createImage
  • createImageEdit
  • createImageVariation
  • createModeration
  • createSearch
  • createTranscription
  • createTranslation
  • deleteFile
  • deleteModel
  • downloadFile
  • listEngines
  • listFiles
  • listFineTuneEvents
  • listFineTunes
  • listModels
  • retrieveEngine
  • retrieveFile
  • retrieveFineTune
  • retrieveModel

Edge route handler examples

Here are some sample Next.js Edge API Routes using openai-edge.

1. Streaming chat with gpt-3.5-turbo

Note that when using the stream: true option, OpenAI responds with server-sent events. Here's an example React hook to consume SSEs, here's a full Next.js example, and a minimal client-side sketch follows the route handler below.

import type { NextRequest } from "next/server"
import { Configuration, OpenAIApi } from "openai-edge"

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
})
const openai = new OpenAIApi(configuration)

const handler = async (req: NextRequest) => {
  const { searchParams } = new URL(req.url)

  try {
    const completion = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Who won the world series in 2020?" },
        {
          role: "assistant",
          content: "The Los Angeles Dodgers won the World Series in 2020.",
        },
        { role: "user", content: "Where was it played?" },
      ],
      max_tokens: 7,
      temperature: 0,
      stream: true,
    })

    return new Response(completion.body, {
      headers: {
        "Access-Control-Allow-Origin": "*",
        "Content-Type": "text/event-stream;charset=utf-8",
        "Cache-Control": "no-cache, no-transform",
        "X-Accel-Buffering": "no",
      },
    })
  } catch (error: any) {
    console.error(error)

    return new Response(JSON.stringify(error), {
      status: 400,
      headers: {
        "content-type": "application/json",
      },
    })
  }
}

export const config = {
  runtime: "edge",
}

export default handler
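
As mentioned above, here is a minimal client-side sketch for reading the raw SSE stream from this route (assuming the handler is mounted at /api/chat; parsing of the data: lines and error handling are omitted):

const res = await fetch("/api/chat")
const reader = res.body!.getReader()
const decoder = new TextDecoder()

while (true) {
  const { done, value } = await reader.read()
  if (done) break
  // Each chunk holds one or more `data: {...}` SSE lines from OpenAI
  console.log(decoder.decode(value))
}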

2. Text completion with Davinci

import type { NextRequest } from "next/server"
import { Configuration, OpenAIApi, ResponseTypes } from "openai-edge"

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
})
const openai = new OpenAIApi(configuration)

const handler = async (req: NextRequest) => {
  const { searchParams } = new URL(req.url)

  try {
    const completion = await openai.createCompletion({
      model: "text-davinci-003",
      prompt: searchParams.get("prompt") ?? "Say this is a test",
      max_tokens: 7,
      temperature: 0,
      stream: false,
    })

    const data = (await completion.json()) as ResponseTypes["createCompletion"]

    return new Response(JSON.stringify(data.choices), {
      status: 200,
      headers: {
        "content-type": "application/json",
      },
    })
  } catch (error: any) {
    console.error(error)

    return new Response(JSON.stringify(error), {
      status: 400,
      headers: {
        "content-type": "application/json",
      },
    })
  }
}

export const config = {
  runtime: "edge",
}

export default handler

3. Creating an Image with DALL·E

import type { NextRequest } from "next/server"
import { Configuration, OpenAIApi, ResponseTypes } from "openai-edge"

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
})
const openai = new OpenAIApi(configuration)

const handler = async (req: NextRequest) => {
  const { searchParams } = new URL(req.url)

  try {
    const image = await openai.createImage({
      prompt: searchParams.get("prompt") ?? "A cute baby sea otter",
      n: 1,
      size: "512x512",
      response_format: "url",
    })

    const data = (await image.json()) as ResponseTypes["createImage"]

    const url = data.data?.[0]?.url

    return new Response(JSON.stringify({ url }), {
      status: 200,
      headers: {
        "content-type": "application/json",
      },
    })
  } catch (error: any) {
    console.error(error)

    return new Response(JSON.stringify(error), {
      status: 400,
      headers: {
        "content-type": "application/json",
      },
    })
  }
}

export const config = {
  runtime: "edge",
}

export default handler

openai-edge's People

Contributors

bhbs · dan-kwiat · robswei · stephenasunciondev


openai-edge's Issues

Performance Issue with Next.js 13.4.7 Edge Functions in AI Chatbot Application

Description

I am encountering significant performance issues with my Next.js 13 application that uses edge functions and edge runtime for integrating an AI chatbot using OpenAI's API. Despite using the same code as in the vercel-labs/ai-chatbot example repository, my application's response times are considerably slower, both when running locally and when deployed on Vercel Pro.

Expected Behavior

Given that the codebase is identical to the vercel-labs/ai-chatbot repository, I expect similar performance in terms of response time and streaming efficiency.

Actual Behavior

The chatbot responses in my application are substantially slower compared to the vercel-labs/ai-chatbot example, despite being run under similar conditions. This slow performance persists both on a local setup and when the application is deployed on Vercel Pro.

Steps to Reproduce

  1. Clone the vercel-labs/ai-chatbot repository from GitHub.
  2. Set up the application to run with Next.js 13 using edge functions and edge runtime.
  3. Implement the chatbot functionality using the OpenAI API as per the example in the repository.
  4. Compare the response time of the chatbot in this setup with the original repository's deployment on chat.vercel.ai.

Additional Context

  • The primary function in question is the edge runtime route responsible for handling chatbot interactions. The code is as follows:
    import { kv } from '@vercel/kv'
    import { OpenAIStream, StreamingTextResponse } from 'ai'
    import { Configuration, OpenAIApi } from 'openai-edge'
    
    import { auth } from '@/auth'
    import { nanoid } from '@/lib/utils'
    
    export const runtime = 'edge'
    
    const configuration = new Configuration({
      apiKey: process.env.OPENAI_API_KEY
    })
    
    const openai = new OpenAIApi(configuration)
    
    export async function POST(req: Request) {
      const json = await req.json()
      const { messages, previewToken } = json
      const userId = (await auth())?.user.id
    
      if (!userId) {
        return new Response('Unauthorized', {
          status: 401
        })
      }
    
      if (previewToken) {
        configuration.apiKey = previewToken
      }
    
    
      const res = await openai.createChatCompletion({
        model: 'gpt-3.5-turbo-16k',
        messages,
        temperature: 0.7,
        stream: true
      })
    
      const stream = OpenAIStream(res, {
        async onCompletion(completion) {
          const title = json.messages[0].content.substring(0, 100)
          const id = json.id ?? nanoid()
          const createdAt = Date.now()
          const path = `/chat/${id}`
          const payload = {
            id,
            title,
            userId,
            createdAt,
            path,
            messages: [
              ...messages,
              {
                content: completion,
                role: 'assistant'
              }
            ]
          }
          await kv.hmset(`chat:${id}`, payload)
          await kv.zadd(`user:chat:${userId}`, {
            score: createdAt,
            member: `chat:${id}`
          })
        }
      })
    
      return new StreamingTextResponse(stream)
    }
  • There are no apparent differences in the code or setup that could account for this performance discrepancy.
  • The issue occurs regardless of whether the application is running locally or is deployed on Vercel Pro.

Environment

  • Next.js version: 13
  • Deployment platform: Vercel Pro
  • Local environment OS and version: [Insert your local OS and version]
  • Node.js version: [Insert your Node.js version]

Possible Causes

  • Network latency or configuration issues specific to my environment.
  • Potential differences in resource allocation between my setup and the original example.
  • Unidentified bottlenecks in the edge runtime or edge functions implementation.

I am seeking guidance or suggestions on how to diagnose and resolve this performance issue. Any insights or recommendations would be greatly appreciated.

does openai-edge support agents

Similar to these settings in the official openai library:

import http from 'http';
import HttpsProxyAgent from 'https-proxy-agent';

// Configure the default for all requests:
const openai = new OpenAI({
  httpAgent: new HttpsProxyAgent(process.env.PROXY_URL),
});

// Override per-request:
await openai.models.list({
  baseURL: 'http://localhost:8080/test-api',
  httpAgent: new http.Agent({ keepAlive: false }),
})

Allow query parameters (Azure OpenAI support)

When using OpenAI through Azure, a query parameter api-version is required to call any of the OpenAI endpoints. (https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference).

The options object of the fetch API does not support adding query params (unlike e.g. axios); they must be set directly on the URL given to fetch.

This can easily be done with the official openai package. Since openai-edge is a drop-in replacement, I think adding support would be a viable option.
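
For reference, the defaultQueryParams option shown in the README's Azure section above covers this use case; a minimal sketch (the api-version value here is illustrative, not a recommendation):

import { Configuration } from "openai-edge"

const config = new Configuration({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  // Query parameters appended to every request URL; the version
  // string is illustrative
  defaultQueryParams: new URLSearchParams({
    "api-version": "2023-05-15",
  }),
})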

TS can't find the types with `"moduleResolution": "nodenext",`

My project has this TSConfig set:

"moduleResolution": "nodenext",

When I try to load openai-edge, I get this error: microsoft/TypeScript#52363.

I was able to resolve it by publishing my own fork of this repo with these changes. The main change is to make dist look like:

find dist
dist
dist/api.d.ts
dist/common.d.ts
dist/configuration.js
dist/index.js
dist/response-types.d.ts
dist/configuration.d.ts
dist/base.js
dist/common.js
dist/base.d.ts
dist/form-data.js
dist/api.js
dist/form-data.d.ts
dist/index.d.ts
dist/response-types.js

I don't think what I did is necessarily the best way to fix it, but it did work for me, so I'm sharing it here in case it helps you or anyone else.

ReferenceError: FormData is not defined (fails Next.js build)

Calling openai.createChatCompletion() from a Next.js route handler causes the build to break with the following missing dependency:

- info Collecting page data ..ReferenceError: FormData is not defined
    at Object.19445 (/home/<username>/repos/<repo>/.next/server/chunks/901.js:2202:30)
    at __webpack_require__ (/home/<username>/repos/<repo>/.next/server/webpack-runtime.js:25:43)
    at Module.91102 (/home/<username>/repos/<repo>/.next/server/app/api/<endpoint>/route.js:76:12)
    at __webpack_require__ (/home/<username>/repos/<repo>/.next/server/webpack-runtime.js:25:43)
    at __webpack_exec__ (/home/<username>/repos/<repo>/.next/server/app/api/<endpoint>/route.js:301:39)
    at /home/<username>/repos/<repo>/.next/server/app/api/<endpoint>/route.js:302:78
    at Function.__webpack_require__.X (/home/<username>/repos/<repo>/.next/server/webpack-runtime.js:138:21)
    at /home/<username>/repos/<repo>/.next/server/app/api/<endpoint>/route.js:302:47
    at Object.<anonymous> (/home/<username>/repos/<repo>/.next/server/app/api/<endpoint>/route.js:305:3)
    at Module._compile (node:internal/modules/cjs/loader:1105:14)

> Build error occurred
Error: Failed to collect page data for /api/webhooks
    at /home/<username>/repos/<repo>/node_modules/next/dist/build/utils.js:1161:15
    at processTicksAndRejections (node:internal/process/task_queues:96:5) {
  type: 'Error'
}

Here's the code snippet from the function I'm calling from an API route:

const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content:
          "You are a helpful assistant.",
      },
      { role: "user", content: prompt },
    ],
  });
  const completionResponse =
    (await response.json()) as ResponseTypes["createChatCompletion"];
  return completionResponse.choices[0].message?.content || "<fallback message>";

I temporarily resolved this by using the official library.

Is there something specific I should be doing for this function call to not break the build?

Thanks!

Client Side Example

Could you provide a simple example of how you would use this on the client side? My requests work using the library, but I haven't figured out how to read the chunks and get the "text" response as it streams to the application.

localhost run dev not working

I followed the tutorial and ran the app locally, but I get this error:

import { z } from 'zod'
import {
  Configuration,
  OpenAIApi,
  ChatCompletionRequestMessage,
} from 'openai-edge'
// router and publicProcedure come from the project's own tRPC setup
// (import path is hypothetical)
import { publicProcedure, router } from '../trpc'

const configuration = new Configuration({
  apiKey: 'openai-key',
})
const openai = new OpenAIApi(configuration)

export const chatRouter = router({
  createChat: publicProcedure
    .input(
      z.object({
        messages: z.array(
          z.object({
            name: z.string().optional(),
            content: z.string(),
            role: z.string(),
          }),
        ),
      }),
    )
    .query(async (req) => {
      const res = await openai.createChatCompletion({
        model: 'gpt-3.5-turbo',
        messages: req.input.messages as ChatCompletionRequestMessage[],
      })
      console.log('res', res) //null
    }),
})
- error node_modules/next/dist/esm/server/lib/patch-fetch.js (253:23) @ originFetch
- error fetch failed

Node version: v17.6.0

Is it possible to configure the base URL of the OpenAI API like the official openai lib?

In some areas, like mainland China, the OpenAI API cannot be called directly, so it is necessary to call it through a proxy: a proxy URL is set to substitute api.openai.com. With the official lib, I can directly set openai.api_base to the proxy URL. Is it possible to do so in this lib? If so, how? Thx.
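
For what it's worth, the Configuration accepts a basePath (as in the Azure example in the README above); a minimal sketch, assuming a hypothetical proxy that mirrors the OpenAI routes under /v1:

import { Configuration, OpenAIApi } from "openai-edge"

// your-proxy.example.com is a placeholder for a proxy that forwards to
// api.openai.com and exposes the same routes under /v1
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  basePath: "https://your-proxy.example.com/v1",
})
const openai = new OpenAIApi(configuration)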

Weird console.logs

Hello, following the upgrade to version 1.2.1, I have noticed some unusual console logs from this package within my application. These logs seem to originate from the path openai-edge/dist/index.js.

Here are some samples of these logs:

CONSTRUCTOOOOOOR
createChatCompletion 1
createRequestFunction outer
createRequestFunction inner
{ url: '/chat/completions' }

Unexpected token in JSON at position 11

Sometimes when calling functions, I get Unexpected token in JSON at position 11.
This happens during OpenAI function calling. I don't think it's caused by bad JSON generated by GPT-4, because the error is always at position 11, but I can't pinpoint exactly why or when it happens.

- error Error [SyntaxError]: Unexpected token 
 in JSON at position 11
    at JSON.parse (<anonymous>)
    at Object.eval (webpack-internal:///(rsc)/./node_modules/.pnpm/[email protected][email protected][email protected][email protected][email protected]/node_modules/ai/dist/index.mjs:216:51)
    at Generator.next (<anonymous>)
    at eval (webpack-internal:///(rsc)/./node_modules/.pnpm/[email protected][email protected][email protected][email protected][email protected]/node_modules/ai/dist/index.mjs:56:65)
    at new Promise (<anonymous>)
    at __async (webpack-internal:///(rsc)/./node_modules/.pnpm/[email protected][email protected][email protected][email protected][email protected]/node_modules/ai/dist/index.mjs:40:12)
    at Object.flush (webpack-internal:///(rsc)/./node_modules/.pnpm/[email protected][email protected][email protected][email protected][email protected]/node_modules/ai/dist/index.mjs:211:20)
    at g (eval at requireWithFakeGlobalScope (..../node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/compiled/edge-runtime/index.js:1:970334), <anonymous>:102:35)
    at w (eval at requireWithFakeGlobalScope (..../node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/compiled/edge-runtime/index.js:1:970334), <anonymous>:107:14)
    at Object.eval [as flush] (eval at requireWithFakeGlobalScope (.../.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/compiled/edge-runtime/index.js:1:970334), <anonymous>:1945:29) {
  digest: undefined
}

It's not an issue, but can I point the API base URL at my local server?

I have run LocalAI on my local machine, and now I want to change the base URL to http://localhost:8080. Is it possible?

I have already tried this, but it doesn't work:

import { Configuration, OpenAIApi } from 'openai-edge';

const config = new Configuration({
  basePath: 'http://localhost:8080'
});

const OpenAI = new OpenAIApi(config);
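
One thing worth checking (an assumption based on the library's default basePath, which includes the version segment, i.e. https://api.openai.com/v1): the override likely needs the /v1 suffix too, provided LocalAI exposes the OpenAI-compatible routes there:

const config = new Configuration({
  // Include the /v1 segment that the default basePath carries
  // (assumes LocalAI serves the OpenAI-compatible API under /v1)
  basePath: 'http://localhost:8080/v1'
});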
