
function-call-extender's Introduction


Overview

This project showcases a novel approach to generating extended text outputs from OpenAI models such as GPT-3.5-turbo-16k, GPT-4, and GPT-4-32k. By forcing the model to populate a structured function call, it bypasses the models' tendency to cap output length and reliably produces long-form content that significantly exceeds the typical response length.
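The core trick can be sketched as follows. This is an illustrative reconstruction, not the repo's actual code: the function name `write_long_answer`, the `part_N` parameter names, and the part count are all assumptions. The idea is to define a function whose required string parameters each hold one chunk of the answer; because the model must fill every required field to complete the call, the combined arguments run far past a normal single reply.

```python
import json

def build_payload(prompt: str, num_parts: int = 4) -> dict:
    """Build a chat-completion request that forces a multi-part function call."""
    properties = {
        f"part_{i}": {
            "type": "string",
            "description": f"Section {i} of the long-form answer.",
        }
        for i in range(1, num_parts + 1)
    }
    return {
        "model": "gpt-3.5-turbo-16k",
        "messages": [{"role": "user", "content": prompt}],
        "functions": [{
            "name": "write_long_answer",  # hypothetical name
            "description": "Return the answer split across all parts.",
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(properties),  # every part must be filled
            },
        }],
        # Force the model to call the function instead of replying normally.
        "function_call": {"name": "write_long_answer"},
    }

payload = build_payload("Write an in-depth history of the transistor.")
print(json.dumps(payload["functions"][0]["parameters"]["required"]))
```

The payload dict matches the shape expected by the Chat Completions endpoint's `functions`/`function_call` parameters; sending it is then a single API call.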


Project Structure

  1. main.py: User interface for interacting with OpenAI models.
  2. function_call_builder.py: Constructs function calls and interacts with OpenAI API.
  3. ai.py: Contains functions for token counting and chat completion.

Installation

Ensure Python 3.6+ is installed, then run:

pip install openai tiktoken

Usage

Running the Application

python main.py

Interacting with the Application

  • Premade Example: Run examples from a JSON file, examples.json

  • Input Your Details: Manually input details to generate a function call and receive a response.

How It Works

Main Application

main.py guides the user through the process, interacting with FunctionCallBuilder to generate function calls.

Function Call Builder

function_call_builder.py dynamically constructs function-call definitions, invokes them through the OpenAI API, and assembles the resulting extended text output.
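A plausible post-processing step, sketched here with hypothetical `part_N` key names (the repo's actual parsing may differ): the API response carries the generated text as a JSON string in `message["function_call"]["arguments"]`, so the builder has to decode it and stitch the parts back into one document.

```python
import json

def stitch_parts(function_call_arguments: str) -> str:
    """Decode the function_call arguments JSON and join the parts in order."""
    args = json.loads(function_call_arguments)
    # Sort on the numeric suffix so that part_10 sorts after part_2.
    ordered = sorted(args, key=lambda key: int(key.rsplit("_", 1)[1]))
    return "\n\n".join(args[key] for key in ordered)

raw = json.dumps({"part_2": "second chunk", "part_1": "first chunk"})
print(stitch_parts(raw))
```

Sorting numerically rather than lexicographically matters once the part count reaches double digits.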

AI Interaction

ai.py contains the token-counting and chat-completion functions, which are crucial for interacting with the OpenAI API and staying within token limits.
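Token counting along these lines might look like the sketch below (the function name and fallback are assumptions, not ai.py's actual implementation). It uses tiktoken, installed in the step above, and falls back to a rough character-based estimate if the import fails.

```python
def count_tokens(text: str, model: str = "gpt-3.5-turbo-16k") -> int:
    """Count the tokens in `text` for the given model."""
    try:
        import tiktoken  # installed alongside openai in the install step
        encoding = tiktoken.encoding_for_model(model)
        return len(encoding.encode(text))
    except Exception:
        # Crude fallback: roughly 4 characters per token for English text.
        return max(1, len(text) // 4)

# e.g. check how much of a 16k context window a prompt would consume
remaining = 16384 - count_tokens("some prompt text")
```

Knowing the prompt's token count up front lets the builder decide how many continuation parts it can reasonably request.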

License

This project is open source and freely available for use.

function-call-extender's People

Contributors

patrickmaub


function-call-extender's Issues

thoughts on the 4096 limits coming in

Hey, interesting repo. Unfortunately, it looks like OpenAI's 3.5-turbo-1106 model limits output to 4096 tokens, as does the 4-turbo model.
Anthropic has also capped every Claude model's output at 4096 tokens.

OpenAI's 0613 models will be deprecated in July this year, and the same goes for Azure. Soon no major provider (of non-OSS LLMs) will offer >4096 generation tokens. IMO this is going to hit a lot of different use cases hard.

Have you had any thoughts on this?
