
shreyashankar / gpt3-sandbox



License: MIT License

HTML 6.35% CSS 3.86% JavaScript 53.66% Python 36.13%

gpt3-sandbox's Introduction

GPT-3 Sandbox: Turn your ideas into demos in a matter of minutes

Initial release date: 19 July 2020

Note that this repository is no longer under active development, only basic maintenance.

Description

The goal of this project is to enable users to create cool web demos using the newly released OpenAI GPT-3 API with just a few lines of Python.

This project addresses the following issues:

  1. Automatically formatting a user's inputs and outputs so that the model can effectively pattern-match
  2. Creating a web app for a user to deploy locally and showcase their idea

Here's a quick example of priming GPT to convert English to LaTeX:

from api import GPT, Example, UIConfig, demo_web_app

# Construct the GPT object and show it some examples
gpt = GPT(engine="davinci",
          temperature=0.5,
          max_tokens=100)
gpt.add_example(Example('Two plus two equals four', '2 + 2 = 4'))
gpt.add_example(Example('The integral from zero to infinity', '\\int_0^{\\infty}'))
gpt.add_example(Example('The gradient of x squared plus two times x with respect to x', '\\nabla_x x^2 + 2x'))
gpt.add_example(Example('The log of two times x', '\\log{2x}'))
gpt.add_example(Example('x squared plus y squared plus equals z squared', 'x^2 + y^2 = z^2'))

# Define UI configuration
config = UIConfig(description="Text to equation",
                  button_text="Translate",
                  placeholder="x squared plus 2 times x")

demo_web_app(gpt, config)

Running this code as a Python script automatically launches a web app where you can test new inputs and outputs. There are already three example scripts in the examples directory.

You can also prime GPT from the UI: to do so, pass show_example_form=True to UIConfig along with the other parameters.

Technical details: the backend is in Flask, and the frontend is in React. Note that this repository is currently not intended for production use.

Background

GPT-3 (Brown et al.) is OpenAI's latest language model. It incrementally builds on model architectures designed in previous research studies, but its key advance is that it's extremely good at "few-shot" learning. There's a lot it can do, but one of the biggest pain points is in "priming," or seeding, the model with some inputs such that the model can intelligently create new outputs. Many people have ideas for GPT-3 but struggle to make them work, since priming is a new paradigm of machine learning. Additionally, it takes a nontrivial amount of web development to spin up a demo to showcase a cool idea. We built this project to make our own idea generation easier to experiment with.

This developer toolkit has some great resources for those experimenting with the API, including sample prompts.

Requirements

Coding-wise, you only need Python. But for the app to run, you will need:

  • API key from the OpenAI API beta invite
  • Python 3
  • yarn
  • Node 16

Instructions to install Python 3 are here, instructions to install yarn are here, and we recommend using nvm to install (and manage) Node (instructions are here).

Setup

First, clone or fork this repository. Then to set up your virtual environment, do the following:

  1. Create a virtual environment in the root directory: python -m venv $ENV_NAME
  2. Activate the virtual environment: source $ENV_NAME/bin/activate (for MacOS, Unix, or Linux users) or .\ENV_NAME\Scripts\activate (for Windows users)
  3. Install requirements: pip install -r api/requirements.txt
  4. To add your secret key: create a file anywhere on your computer called openai.cfg with the contents OPENAI_KEY=$YOUR_SECRET_KEY, where $YOUR_SECRET_KEY looks something like 'sk-somerandomcharacters' (including quotes). If you are unsure what your secret key is, navigate to the API Keys page and click "Copy" next to a token displayed under "Secret Key". If there is none, click on "Create new secret key" and then copy it.
  5. Set your environment variable to read the secret key: run export OPENAI_CONFIG=/path/to/config/openai.cfg (for MacOS, Unix, or Linux users) or set OPENAI_CONFIG=/path/to/config/openai.cfg (for Windows users)
  6. Run yarn install in the root directory
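For MacOS, Unix, or Linux users, steps 4 and 5 above can be combined into a short shell snippet; the key value below is a placeholder for your real secret key, and the file can live wherever you prefer:

```shell
# Step 4: write the config file with a placeholder key (replace with yours).
printf "OPENAI_KEY='sk-your-secret-key-here'\n" > openai.cfg

# Step 5: point the app at the file (lasts only for this shell session).
export OPENAI_CONFIG="$PWD/openai.cfg"
```

Windows users would instead create the file in an editor and use set OPENAI_CONFIG=... as described above.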

If you are a Windows user, to run the demos, you will need to modify the following line inside api/demo_web_app.py: subprocess.Popen(["yarn", "start"]) to subprocess.Popen(["yarn", "start"], shell=True).

To verify that your environment is set up properly, run one of the three scripts in the examples directory, e.g. python examples/run_latex_app.py.

A new tab should pop up in your browser, and you should be able to interact with the UI! To stop the app, press Ctrl-C in your terminal.

To create your own example, check out the "getting started" docs.

Interactive Priming

The real power of GPT-3 is in its ability to learn to specialize to tasks given a few examples. However, priming can at times be more of an art than a science. Using the GPT and Example classes, you can easily experiment with different priming examples and immediately see their effect on GPT-3's performance. Below is an example showing it improve incrementally at translating English to LaTeX as we feed it more examples in the Python interpreter:

>>> from api import GPT, Example, set_openai_key
>>> gpt = GPT()
>>> set_openai_key(key)
>>> prompt = "integral from a to b of f of x"
>>> print(gpt.get_top_reply(prompt))
output: integral from a to be of f of x

>>> gpt.add_example(Example("Two plus two equals four", "2 + 2 = 4"))
>>> print(gpt.get_top_reply(prompt))
output:

>>> gpt.add_example(Example('The integral from zero to infinity', '\\int_0^{\\infty}'))
>>> print(gpt.get_top_reply(prompt))
output: \int_a^b f(x) dx

Contributions

We actively encourage people to contribute by adding their own examples or even adding functionalities to the modules. Please make a pull request if you would like to add something, or create an issue if you have a question. We will update the contributors list on a regular basis.

Please do not leave your secret key in plaintext in your pull request!

Authors

The following authors have committed 20 lines or more (ordered according to the GitHub contributors page):

  • Shreya Shankar
  • Bora Uyumazturk
  • Devin Stein
  • Gulan
  • Michael Lavelle

gpt3-sandbox's People

Contributors

abhinav-tb, bora-uyumazturk, buyumaz, cveinnt, dependabot[bot], devstein, gulan28, jthoward64, juliatessler, michaellavelle, omarcr, shreyashankar, tayabsoomro


gpt3-sandbox's Issues

Help with checking the environment

I have followed the setup, but at the step of checking the environment by running python examples/run_latex_app.py, I came across the following error. I wonder if anyone could help solve it:
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:130:10)
at module.exports (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\util\createHash.js:135:53)
at NormalModule._initBuildHash (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:417:16)
at handleParseError (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:471:10)
at C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:503:5
at C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:358:12
at C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:373:3
at iterateNormalLoaders (C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:214:10)
at iterateNormalLoaders (C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:221:10)
C:\Users\jiz52\gpt3-sandbox\node_modules\react-scripts\scripts\start.js:19
throw err;
^

Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:130:10)
at module.exports (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\util\createHash.js:135:53)
at NormalModule._initBuildHash (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:417:16)
at C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:452:10
at C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:323:13
at C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:367:11
at C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:233:18
at context.callback (C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:111:13)
at C:\Users\jiz52\gpt3-sandbox\node_modules\babel-loader\lib\index.js:59:103 {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}

Node.js v17.1.0
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
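The ERR_OSSL_EVP_UNSUPPORTED error typically comes from webpack running on Node 17+, which ships OpenSSL 3, while this repo's Requirements list Node 16. Two common workarounds (a sketch, not an official fix for this repo): switch back to Node 16 with nvm, or enable the legacy OpenSSL provider for the current session:

```shell
# Option 1 (matches the Requirements section): use Node 16 via nvm.
#   nvm install 16 && nvm use 16
# Option 2: let webpack's older hashing code run under OpenSSL 3.
export NODE_OPTIONS=--openssl-legacy-provider
```

With either option in place, rerun python examples/run_latex_app.py.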

GPT-3 public API

Unable to get the GPT-3 API working with the code samples.

Conflicting dependencies

Running pip install -r api/requirements.txt produces the following error:

ERROR: Cannot install requests==2.24.0 and urllib3==1.26.5 because these package versions have conflicting dependencies.

The conflict is caused by:
    The user requested urllib3==1.26.5
    requests 2.24.0 depends on urllib3!=1.25.0, !=1.25.1, <1.26 and >=1.21.1

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

Python 3.10.1, pip 21.3.1, MacOS Monterey 12.2
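Following pip's own suggestion, one workaround (an unofficial sketch) is to loosen the urllib3 pin in api/requirements.txt so it falls inside the range requests 2.24.0 accepts:

```
requests==2.24.0
urllib3>=1.21.1,<1.26
```

Alternatively, deleting the urllib3 line entirely lets pip pick a compatible version on its own.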

😔

But I live in Iran and have no hope of building it. I can provide it for free to those who can do something in this field.

implement autoreloading of GPT configuration

Currently, if users want to see how GPT does with more examples, they need to add the examples in the main script and then relaunch the Flask back end. Ideally, they would be able to make changes to the GPT configuration and see the model improve as they include more priming examples.

One way of doing this would be to add a UI component that sends a POST request to update the underlying GPT model, but this could clutter the UI (not ideal for demos).

Question and Answer example

Hi,
Firstly, thanks for this cool repo. I am trying to build a question-and-answer app using GPT-3. How do I save a document file, or feed the example with, say, a 500-word text, so that users can ask questions and the API will search the file or given text and provide the answer? The endpoint for this is given in the GPT-3 example, but I am finding it tough to integrate with this repo's example. Any help is appreciated. Below is the given code:

doc_list = ["sample text to be searched."]

response = openai.Answer.create(
    search_model="davinci",
    model="curie",
    question="What is sample text",
    documents=doc_list,
    examples_context="In 2017, U.S. life expectancy was 78.6 years.",
    examples=[["What is human life expectancy in the United States?", "78 years."]],
    max_tokens=10,
    stop=["\n", "<|endoftext|>"],
)

print(response)

Thanks,
Raji

getting-started.md

Sir, can you please help me with this?

raise RuntimeError(
RuntimeError: The environment variable 'OPENAI_CONFIG' is not set and as such configuration could not be loaded. Set this variable and make it point to a configuration file

Legal

Is sharing the API key legal?

ModuleNotFoundError: No module named 'api'

Whenever I run from api import GPT, Example, set_openai_key, I always get this error: Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'api'
Even though I have already installed everything and followed the instructions properly.
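A likely cause: api is a package in the repository root, so the interpreter must be launched from that directory (or have it on sys.path). A sketch, assuming a clone at ~/gpt3-sandbox (a hypothetical path, adjust to your checkout):

```python
import os
import sys

# Make the repo root importable; the path here is a placeholder.
sys.path.insert(0, os.path.expanduser("~/gpt3-sandbox"))

# After this, `from api import GPT, Example, set_openai_key` should resolve,
# provided the requirements were installed into the active environment.
```

The simpler fix is to just cd into the repo root before starting Python.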

Trouble when running examples

Hi!

I'm trying to run any of the examples in the examples folder, but I'm stuck with the same screen that shows the default text for the UI (I uploaded a screenshot of what I got from examples/run_latex_app.py). Also, inputting anything there and trying to submit does nothing.

[screenshot: default UI]

I have followed all the Setup steps in README and, from what I gather, the text in the UI should be different from what I see. Do you have any ideas of what I could have done wrong?

P.S.: my OpenAI console shows that my key hasn't been used. And I exported the config file as indicated in README.

help with proxy/port issue

Hi,

I'm trying to get the sandbox running on an AWS Ubuntu dev server, but this server already uses ports 5000, 5001, 5007, and 5008 for other Flask/gunicorn applications. I can't quite figure out how to set things up to use a different port, say 5009. The key variables seem to be the proxy value set in package.json and the port and host values optionally provided in demo_web_app.py at app.run(). Which of them needs to be set to the new port value?

more streamlined management of secret API key

Right now it's done via a .cfg file and setting the environment variable LATEX_TRANSLATOR_CONFIG (name is an artifact from original version of this app). Figure out if there's a more intuitive way of doing this.

Named Entity Recognition example

Hi, thanks for developing such a great tool.

Just wondering if you could add an example for training GPT-3 for Named Entity Recognition (NER) tasks?

I'm not sure how I can use the add_example function to specify the answer to a question for NER tasks:

...
gpt.add_example(Example('Tom was born in 1942', '[(Tom, Name), (1942, Year)]'))
...

Thanks!

Error 404

Browser:
Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.

Terminal:
127.0.0.1 - - [03/Aug/2022 19:13:33] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [03/Aug/2022 19:13:36] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [03/Aug/2022 19:13:41] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [03/Aug/2022 19:13:42] "GET /favicon.ico HTTP/1.1" 404

Description:
I followed the Readme file's instructions, but I couldn't replicate the example and always ended up with this. Can anyone help me with this?

GPT-3 Beta API Invites

Are current API users able to invite people? If so, I would greatly appreciate someone sending me an invite:

my email

thanks either way

Do I need to prime the model every time I make a request to it?

Like, every single time I want to get something from the model, do I need to give it examples?

Or is it possible to create a pretext/profile, or save the examples somewhere, so that I do not have to send them to the model every time I make a request?

Upon making a request, would the examples I prime it with beforehand eat up tokens as well?

Just a question, couldn't think of a better forum to ask. Please close if you don't find it appropriate.

Thanks.

openai.error.RateLimitError not handled

I run a fine-tuned model that sometimes takes a while to load. The UI does not reflect that; it just stays "silent".

output in terminal is:
[2022-06-05 11:48:07,665] ERROR in app: Exception on /translate [POST]
(...)
openai.error.RateLimitError: That model is still being loaded. Please try again shortly.

Can the UI be modified to deliver that message to the user?

Typo in Set up

python -m venv $ENV_NAME

should be

python3 -m venv $ENV_NAME

as the venv command only exists in python3

GPT-2 with this approach

I wonder whether we can apply this pipeline to the GPT-2 model without fine-tuning or training: just give GPT-2 a prompt and it gives back the result. Thanks

I can't set the OPENAI_CONFIG Environment Variable

I use Windows 10 and PowerShell. My problem is that I can't set the OPENAI_CONFIG variable; whenever I try to run an example .py file, it says "The environment variable 'OPENAI_CONFIG' is not set".
I tried $env:OPENAI_CONFIG= and set OPENAI_CONFIG=, but neither works. I also tried different file path formats to point to the openai.cfg file that I created, but had no luck.
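For reference, set only works in cmd.exe; in PowerShell the $env: form is the right one, but it must be run in the same session that then launches the example, with the full path quoted. A sketch (the path is a placeholder):

```powershell
$env:OPENAI_CONFIG = "C:\Users\you\openai.cfg"
python examples\run_latex_app.py
```

Note that $env: assignments last only for the current session; a persistent variable can be set with setx or via the System Properties dialog.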
