This project provides a REST API for interacting with ChatGPT, a conversational AI powered by GPT. It lets you send prompts to the ChatGPT model and receive its responses.
Return a list of conversations.

`GET /conversations`

Response: `200`

```json
[
  {
    "id": "d6bd2590-01f4-32b7-b1a3-c5ea1fc7c819",
    "title": "Conversation 1",
    "create_time": "2023-05-28T17:13:08.233528+00:00",
    "update_time": "2023-05-28T17:13:24+00:00",
    "mapping": null,
    "current_node": null
  },
  ...
]
```
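A minimal client sketch for this endpoint, using only the Python standard library. The base URL, token value, and helper name are placeholders of mine, not part of the API:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"   # assumed local dev server
ACCESS_TOKEN = "your-access-token"   # placeholder; see the authentication section

def list_conversations_request(base_url=BASE_URL, token=ACCESS_TOKEN):
    """Build the GET /conversations request with the Authorization header set."""
    return urllib.request.Request(
        f"{base_url}/conversations",
        headers={"Authorization": token},
        method="GET",
    )

# With the server running, send it and decode the JSON list:
# with urllib.request.urlopen(list_conversations_request()) as resp:
#     for conv in json.load(resp):
#         print(conv["id"], conv["title"])
```

Only the request object is built here; the actual send is left commented out so the sketch does not depend on a running server.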
Create a new conversation and ask the first question.

`POST /conversations/new`

Parameter | Type | Required | Description |
---|---|---|---|
prompt | string | Yes | The prompt message for the conversation |
title | string | No | The title of the conversation |
model | string | No | The model to use for the conversation |
autocontinue | string | No | Whether to automatically continue the conversation; must be `true` or `false` |

Response: `201`

```json
{
  "response": {
    "author": {
      "role": "assistant",
      "name": null,
      "metadata": {}
    },
    "message": "Sure, I can help you with that! What is your name?",
    "conversation_id": "c6b0aeef-b322-4f47-a9f8-7e52ebca942a",
    "parent_id": "07cd64d2-da49-4b39-a652-a69b31eede06",
    "model": "text-davinci-002-render-sha",
    "finish_details": "stop",
    "end_turn": false,
    "recipient": "all",
    "citations": []
  }
}
```
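A sketch of building this request from the documented parameters. The helper name, base URL, and token are illustrative assumptions; only the request object is constructed:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"   # assumed local dev server
ACCESS_TOKEN = "your-access-token"   # placeholder

def new_conversation_request(prompt, title=None, model=None, autocontinue=None):
    """Build the POST /conversations/new request; optional fields are
    included in the JSON body only when given."""
    payload = {"prompt": prompt}
    if title is not None:
        payload["title"] = title
    if model is not None:
        payload["model"] = model
    if autocontinue is not None:
        payload["autocontinue"] = autocontinue  # "true" or "false"
    return urllib.request.Request(
        f"{BASE_URL}/conversations/new",
        data=json.dumps(payload).encode(),
        headers={"Authorization": ACCESS_TOKEN, "Content-Type": "application/json"},
        method="POST",
    )
```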
Ask a question in an existing conversation.

`POST /conversations/{conversation_id}/ask`

Parameter | Type | Required | Description |
---|---|---|---|
prompt | string | Yes | The prompt message for the conversation |
parent_id | string | No | UUID of the message to continue from |
model | string | No | The model to use for the conversation |
autocontinue | string | No | Whether to automatically continue the conversation; must be `true` or `false` |

Response: `201`

```json
{
  "response": {
    "author": {
      "role": "assistant",
      "name": null,
      "metadata": {}
    },
    "message": "Sure, I can help you with that! What is your name?",
    "conversation_id": "c6b0aeef-b322-4f47-a9f8-7e52ebca942a",
    "parent_id": "07cd64d2-da49-4b39-a652-a69b31eede06",
    "model": "text-davinci-002-render-sha",
    "finish_details": "stop",
    "end_turn": false,
    "recipient": "all",
    "citations": []
  }
}
```
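To continue a thread, pass the `parent_id` returned in the previous response. A hedged sketch along the same lines as before (helper name, base URL, and token are assumptions):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"   # assumed local dev server
ACCESS_TOKEN = "your-access-token"   # placeholder

def ask_request(conversation_id, prompt, parent_id=None):
    """Build the POST /conversations/{conversation_id}/ask request.
    parent_id, when given, names the message to continue from."""
    payload = {"prompt": prompt}
    if parent_id is not None:
        payload["parent_id"] = parent_id
    return urllib.request.Request(
        f"{BASE_URL}/conversations/{conversation_id}/ask",
        data=json.dumps(payload).encode(),
        headers={"Authorization": ACCESS_TOKEN, "Content-Type": "application/json"},
        method="POST",
    )
```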
Delete a conversation.

`DELETE /conversations/{conversation_id}`

Response: `204 No Content`
Delete all conversations.

`DELETE /conversations/all`

Response: `204 No Content`
Get all messages from a conversation.

`GET /conversations/{conversation_id}/messages`

Response: `200`

```json
[
  {
    "author": {
      "role": "assistant",
      "name": null,
      "metadata": {}
    },
    "message": "Sure, I can help you with that! What is your name?",
    "conversation_id": "c6b0aeef-b322-4f47-a9f8-7e52ebca942a",
    "parent_id": "07cd64d2-da49-4b39-a652-a69b31eede06",
    "model": "text-davinci-002-render-sha",
    "finish_details": "stop",
    "end_turn": false,
    "recipient": "all",
    "citations": []
  },
  ...
]
```
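Because each message points at its predecessor through `parent_id`, a flat message list can be re-ordered into a thread. The sample response does not show a per-message `id` field, so the sketch below assumes one exists; treat it as an illustration of the linked-list layout, not a confirmed response shape:

```python
def order_messages(messages):
    """Order a flat message list into a thread by following parent_id links.

    Assumes each message dict carries an "id" (not shown in the sample
    response) and a "parent_id" naming the message it follows; the root
    message's parent_id points outside the list.
    """
    ids = {m["id"] for m in messages}
    child_of = {m["parent_id"]: m for m in messages}  # parent_id -> its reply
    root = next(m for m in messages if m["parent_id"] not in ids)
    ordered = [root]
    while ordered[-1]["id"] in child_of:
        ordered.append(child_of[ordered[-1]["id"]])
    return ordered
```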
The API requires authentication using an access token. Include the `Authorization` header, set to your access token, in each request.

Header | Description |
---|---|
Authorization | Access token for authentication |

To obtain an access token, log in to your OpenAI account and open the session endpoint:

https://chat.openai.com/api/auth/session

Copy the value of the `accessToken` field and use it as the value of the `Authorization` header.
OpenAI has a rate limit of 50 requests per hour. If you exceed this limit, you will receive a `500` response code with the error message `Rate limit exceeded`.
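A simple client-side guard against that limit is retry with exponential backoff. The exception class and function below are hypothetical sketches; adapt the error check to however your client detects the `500` response:

```python
import time

class RateLimitError(Exception):
    """Hypothetical marker for the API's 500 "Rate limit exceeded" response."""

def with_retries(fn, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying with exponentially growing delays on RateLimitError."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

`sleep` is injectable so the backoff schedule can be tested without waiting.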
To get started with the ChatGPT API, follow these steps:

- Install the required dependencies:

  ```
  pip install -r requirements.txt
  ```

- Run the server:

  ```
  python manage.py runserver
  ```

- You can now send requests to the API at http://localhost:8000.
This project is licensed under the terms of the GPL-2.0 license. See the LICENSE file for details.
This project is in no way affiliated with, associated with, authorized by, or endorsed by OpenAI, Inc. (www.openai.com). It uses the public Python package revChatGPT to interact with the OpenAI API. I am not responsible for possible bans or suspensions of your OpenAI account; use this project at your own risk.