
⚡️🧪 Fast LLM Tool Calling Experimentation, big and smol

Home Page: https://chatlab.dev

License: Other


chatlab's Introduction

ChatLab

Chat Experiments, Simplified

💬🔬

ChatLab is a Python package that makes it easy to experiment with OpenAI's chat models. It provides a simple interface for chatting with the models and a way to register functions that can be called from the chat model.

Best yet, it's interactive in the notebook!

Notebooks to get started with

Introduction

import chatlab
import random

def flip_a_coin():
    '''Returns heads or tails'''
    return random.choice(['heads', 'tails'])

chat = chatlab.Chat()
chat.register(flip_a_coin)

await chat("Please flip a coin for me")
 𝑓  Ran `flip_a_coin`

Input:

{}

Output:

"tails"
It landed on tails!

In the notebook, text will stream into a Markdown output, and function inputs and outputs get a nice collapsible display, like with ChatGPT Plugins.

TODO: Include GIF/mp4 of this in action

Installation

pip install chatlab

Configuration

You'll need to set your OPENAI_API_KEY environment variable. You can find your API key on your OpenAI account page. I recommend setting it in a .env file when working locally.

On hosted notebook environments, set it in your Secrets to keep it safe from prying LLM eyes.
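
For example, a minimal local setup with python-dotenv (a sketch; python-dotenv is not a chatlab dependency, just one common way to load a .env file):

# .env (keep this file out of version control)
# OPENAI_API_KEY=sk-...

from dotenv import load_dotenv

load_dotenv()  # reads .env and sets OPENAI_API_KEY for this process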

What can Chats enable you to do?

💬

Where Chats take it to the next level is with Chat Functions. You can

  • declare a function
  • register the function in your Chat
  • watch as Chat Models call your functions!

You may recall this kind of behavior from ChatGPT Plugins. Now, you can take this even further with your own custom code.

As an example, let's give the large language models the ability to tell time.

from datetime import datetime
from pytz import timezone, all_timezones, utc
from typing import Optional
from pydantic import BaseModel

def what_time(tz: Optional[str] = None):
    '''Current time, defaulting to UTC'''
    if tz is None:
        tz = utc
    elif tz in all_timezones:
        tz = timezone(tz)
    else:
        return 'Invalid timezone'

    return datetime.now(tz).strftime('%I:%M %p')

class WhatTime(BaseModel):
    tz: Optional[str] = None

Let's break this down.

what_time is the function we're going to provide access to. Its docstring forms the description for the model while the schema comes from the pydantic BaseModel called WhatTime.
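
For reference, the function spec that ends up being sent to the model looks roughly like this (an assumption of the general OpenAI function-calling shape; the exact schema pydantic emits may differ):

{
    "name": "what_time",
    "description": "Current time, defaulting to UTC",
    "parameters": {
        "type": "object",
        "properties": {"tz": {"type": "string"}},
    },
}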

import chatlab

chat = chatlab.Chat()

# Register our function
chat.register(what_time, WhatTime)

After that, we can call chat with direct strings (which are turned into user messages) or using simple message makers from chatlab named user and system.

await chat("What time is it?")
 𝑓  Ran `what_time`

Input:

{}

Output:

"11:19 AM"
The current time is 11:19 AM.

Interface

The chatlab package exports

Chat

The Chat class is the main way to chat using OpenAI's models. It keeps a history of your chat in Chat.messages.

Chat.submit

submit is how you send all the currently built-up messages over to OpenAI. Markdown output will display responses from the assistant.

await chat.submit('What does a parent of three kids mean by "I have to play zone defense"?')
# Markdown response inline
chat.messages
[{'role': 'user',
  'content': 'What does a parent of three kids mean by "I have to play zone defense"?'},
 {'role': 'assistant',
  'content': 'When a parent of three kids says "I have to play zone defense," it means that they...'}]

Chat.register

You can register functions with Chat.register to make them available to the chat model. The function's docstring becomes the description of the function while the schema is derived from the pydantic.BaseModel passed in.

from datetime import datetime
from typing import Optional

from pydantic import BaseModel
from pytz import all_timezones, timezone, utc

class WhatTime(BaseModel):
    tz: Optional[str] = None

def what_time(tz: Optional[str] = None):
    '''Current time, defaulting to UTC'''
    if tz is None:
        tz = utc
    elif tz in all_timezones:
        tz = timezone(tz)
    else:
        return 'Invalid timezone'

    return datetime.now(tz).strftime('%I:%M %p')

chat.register(what_time, WhatTime)

Chat.messages

The raw messages sent to and received from OpenAI. If you hit a token limit, you can remove old messages from the list to make room for more.

chat.messages = chat.messages[-100:]

Messaging

human/user

These functions create a message from the user to the chat model.

from chatlab import human

human("How are you?")
{ "role": "user", "content": "How are you?" }

narrate/system

system messages, also called narrate in chatlab, let you steer the model in a direction. You can use them to provide context that is never shown to the user; one common use is seeding the conversation with initial context.

from chatlab import narrate

narrate("You are a large bird")
{ "role": "system", "content": "You are a large bird" }

Development

This project uses poetry for dependency management. To get started, clone the repo and run

poetry install -E dev -E test

We use ruff and mypy.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

chatlab's People

Contributors

cvaske, danabauer, rgbkrk, shouples


chatlab's Issues

Question about context being passed to chatlab functions.

Question

When using chatlab inside of FastAPI, I may have authentication data that needs to be passed to a registered chatlab function. For instance, say I have an auth token that I resolve to a user object, and I want to send the user_id to the chatlab function: how could I make that happen? Is that even possible right now?

I feel like I must be missing something very obvious here but I am unsure.
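
One approach that might work today (an untested sketch, not an official chatlab feature): register a closure that captures the request-scoped values, so they never appear in the model-facing signature. current_user and fetch_orders_for_user below are hypothetical stand-ins:

def make_get_orders(user_id: str):
    def get_orders(status: str):
        """Fetch this user's orders with the given status."""
        return fetch_orders_for_user(user_id, status)  # hypothetical helper
    return get_orders

# at request time, after resolving the auth token to a user:
chat.register(make_get_orders(current_user.id))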

Pydantic issues a cryptic warning when functions are registered

  • ChatLab version: 2.0
  • Python version: 3.12
  • Operating System: macOS

Description

When starting a new chat with a registered function, under many situations I get this warning:

/Users/cvaske/.pyenv/versions/3.12.1/lib/python3.12/site-packages/pydantic/main.py:1406: RuntimeWarning: fields may not start with an underscore, ignoring "__required__"
  warnings.warn(f'fields may not start with an underscore, ignoring "{f_name}"', RuntimeWarning)

What I Did

import chatlab

def add_two_numbers(a: float, b: float) -> float:
    """Add two numbers together. Raises an exception when the numbers are in the wrong order."""
    if b < a:
        return a + b
    raise Exception("I can't do math")

chat = chatlab.Chat(model=chatlab.models.GPT_4_0125_PREVIEW, chat_functions=[add_two_numbers])
await chat("Please add 1 + 2 for me")

[builtins.python] Capture Display Updates

When running code that does display updates, captured output will show each individual result in the output pane:

[screenshot: each intermediate display update shown as a separate output]

However, we really just need to keep the last update for each displayed item. Captured outputs carry the keys transient and update: transient contains the display id to group on, like {'display_id': '9123d4e517b11d15'}, while update is True or False.
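
A sketch of the grouping logic (assuming each captured output is a dict that may carry {'transient': {'display_id': ...}, 'update': bool} as described above):

def collapse_display_updates(outputs):
    """Keep only the last output per display_id, preserving first-seen order."""
    latest = {}  # display_id (or a unique fallback key) -> most recent output
    order = []
    for i, out in enumerate(outputs):
        key = (out.get("transient") or {}).get("display_id", f"output-{i}")
        if key not in latest:
            order.append(key)
        latest[key] = out
    return [latest[k] for k in order]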

Create strategies for continuing amidst token limit issues

Sometimes ChatLab is off to the races cranking on analysis and then it runs out of tokens.

[screenshot: a chat run stopped by a token limit error]

We need to do two things:

  • Count the number of tokens with tiktoken
  • Remove earlier messages with a strategy. Allow users to pick from various strategies while we figure out a good default

The simplest strategy could be "remove the first message until under the token limit". Advanced strategies could include taking out the system message or even parts of it.
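
A sketch of that simplest strategy using tiktoken (rough counting only; in practice the per-message overhead and function schemas would also need to be counted):

import tiktoken

def trim_messages(messages, model="gpt-4", budget=3500):
    """Drop the oldest messages until the rough token count fits the budget."""
    enc = tiktoken.encoding_for_model(model)

    def count(msgs):
        return sum(len(enc.encode(m.get("content") or "")) for m in msgs)

    while messages and count(messages) > budget:
        messages = messages[1:]
    return messages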

List the currently available models for ChatCompletion

Since users sometimes see

APIError: That model is currently overloaded with other requests.

They're going to want easy access to the model names. We can pull these from openai directly, but we could also have them at the ready. Either way, let's make it easier to try other models.
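
Pulling them from openai directly could be as simple as this sketch against the v1 Python client:

from openai import OpenAI

client = OpenAI()
print(sorted(m.id for m in client.models.list() if m.id.startswith("gpt-")))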

Stream = False issue

Hello, I ran this code as an example:

chat.register(get_car_price)  # register this function
chat.register(get_top_stories)  # register this function
chat.register(what_time)
chat.register(get_current_weather,weather_parameters)

async def main():
    await chat.submit("What is the weather in San Francisco?")

# Call the async function
asyncio.run(main())

The result is streamed fine:

display_id='d6d40efa-b175-4b57-a24b-9a5efd736a7b' content='' finished=True has_displayed=False
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='' finished=False has_displayed=False
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco,' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sun' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and wind' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature of' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature of ' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature of 7' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature of 72' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature of 72 degrees' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature of 72 degrees F' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature of 72 degrees Fahren' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature of 72 degrees Fahrenheit' finished=False has_displayed=True
display_id='16450bdf-0ec4-42c2-b93f-ccf4e930c607' content='The weather in San Francisco, CA is currently sunny and windy with a temperature of 72 degrees Fahrenheit.' finished=False has_displayed=True

But if I run with this change:

await chat.submit("What is the weather in San Francisco?", stream=False)

I get this error:

Traceback (most recent call last):
  File "D:\!Programs\llm-with-functionary\main.py", line 102, in <module>
    asyncio.run(main())
  File "C:\Users\krist\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "C:\Users\krist\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\krist\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "D:\!Programs\llm-with-functionary\main.py", line 98, in main
    await chat.submit("What is the weather in San Francisco?",stream=False)
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\chatlab\chat.py", line 356, in submit
    await self.submit(stream=stream, **kwargs)
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\chatlab\chat.py", line 313, in submit
    full_response = await client.chat.completions.create(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\openai\resources\chat\completions.py", line 1159, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\openai\_base_client.py", line 1790, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\openai\_base_client.py", line 1493, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\openai\_base_client.py", line 1569, in _request
    return await self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\openai\_base_client.py", line 1615, in _retry_request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\openai\_base_client.py", line 1569, in _request
    return await self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\openai\_base_client.py", line 1615, in _retry_request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "D:\!Programs\llm-with-functionary\venv\Lib\site-packages\openai\_base_client.py", line 1584, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Internal Server Error

Is this an issue, or am I doing something wrong?

Unknown Finish Reason: 'tool_calls'

  • ChatLab version: 1.1.1
  • Python version: 3.10.4
  • Operating System: Win 11

Description

I was just trying to get functionary working with chatlab and tried the flip-a-coin example:
Without the chat.register() call, or without asking the AI to call the function, it works perfectly; but with the "Flip a coin" prompt, an unknown finish reason "tool_calls" is displayed.

What I Did

import asyncio
from chatlab import Chat
import os
import random
os.environ['OPENAI_API_KEY'] = "functionary"

def flip_a_coin():
    """this function is used to flip a coin"""
    return random.choice(["heads", "tails"])

chat = Chat(model="meetkai/functionary-7b-v2", base_url="http://localhost:8000/v1")
chat.register(flip_a_coin)
asyncio.run(chat("Flip a coin for me"))

UNKNOWN FINISH REASON: 'tool_calls'. If you see this message, report it as an issue to https://github.com/rgbkrk/chatlab/issues

Generated type for tuples not compatible with OpenAI

Example

from typing import Tuple, List

def color_shades(rgb_color: Tuple[int, int, int], num_shades: int) -> List[Tuple[int, int, int]]:
    """
    Generate a list of shades for a given RGB color.

    Args:
        rgb_color: A tuple of three integers representing an RGB color.
        num_shades: The number of shades to generate.

    Returns:
        A list of tuples, each containing three integers representing an RGB color.
    """
    shades = [
        (
            max(0, min(255, int(rgb_color[0] * (1 - j / num_shades)))),
            max(0, min(255, int(rgb_color[1] * (1 - j / num_shades)))),
            max(0, min(255, int(rgb_color[2] * (1 - j / num_shades)))),
        )
        for j in range(num_shades)
    ]
    return shades

from chatlab import Chat

chat = Chat(
    chat_functions=[color_shades]
)

await chat("Compute some shades of periwinkle.")
BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'color_shades': [{'type': 'integer'}, {'type': 'integer'}, {'type': 'integer'}] is not of type 'object', 'boolean'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
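
Until tuple types are handled, one user-side workaround is to annotate with List[int] instead (List is among the allowed annotation types in the registry error shown elsewhere) and document the expected length in the docstring:

from typing import List

def color_shades(rgb_color: List[int], num_shades: int) -> List[List[int]]:
    """Generate shades for an RGB color given as a list of three ints [r, g, b]."""
    # body unchanged from the tuple version above
    ...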

Plan for 2.0

  • Switch to tool calling by default
  • Message about it
  • Provide a way to switch to function calling to revert to the old behavior
  • Include the most recent model strings
  • Remove some of the decorators and other deprecated bits

Documentation Plans

The initial website for stable releases is up at chatlab.dev 🎉 and the site for unstable releases is pre.chatlab.dev. For the current alpha release, docs will be maintained in this repository in ./website while the previous stable release is at https://github.com/rgbkrk/chatlab-docs.

Here's some initial thinking on the doc site layout:

  • Reference
    • Selecting different model options
    • ChatLab Decorators
  • Guides
    • Registering Functions
    • Declaring Function Schemas directly
    • Declaring Pydantic Models for Function Schemas
    • Creating basic agents
    • Custom Python Objects with Repr LLM

Allow customized details about function calling

Let's add on to the ChatLab metadata decorator (see #38) to allow people to modify what the little details tab shows.

 𝑓  Flipped a coin!

Input:

{}

Output:

"tails"

As part of this we'll want metadata to be applied for the following states around the function (for use by our display calls):

  • calling - While the function is "trying" to be called
  • succeeded - what to show when the function successfully finishes
  • failed - what to show when the function fails (an exception)

At its most basic, it could work like this:

@on_success("Flipped a coin!")
@on_failure("Failed to flip a coin. :(")
def flip_a_coin():
    '''Returns heads or tails'''
    return random.choice(['heads', 'tails'])

Advanced Formatting options

Since we work with named parameters, we could interpolate the arguments into the display strings.

import requests
import chatlab

class PokemonFetchError(Exception):
    """An error for when we fail to fetch a pokemon, tailored for LLM output"""
    def __init__(self, pokemon_name):
        self.pokemon_name = pokemon_name
        self.message = f"Failed to fetch information for Pokemon '{self.pokemon_name}'. Please make sure the Pokemon name is correct."
        super().__init__(self.message)

@on_success("Go {name}!")
@on_failure("Failed to fetch Pokemon {name}")
def fetch_pokemon(name: str):
    """Fetch information about a pokemon by name"""
    url = f"https://pokeapi.co/api/v2/pokemon/{name}"
    try:
        response = requests.get(url)
        response.raise_for_status()  # will raise an HTTPError if the response status code is 4xx or 5xx
        return response.json()
    except requests.HTTPError:
        raise PokemonFetchError(name)
Get me information about murkrow.
 𝑓  Go murkrow!

Input:

{ "name": "murkrow" }

Output:

<<information about murkrow here>>

Add temperature to openai.ChatCompletion.create

  • ChatLab version: 1.0.0a25
  • Python version: 3.10
  • Operating System:

Description

I think we should allow setting temperature in openai.ChatCompletion.create. The default temperature should be 0: since this is function calling, the response should be stable and consistent.

What I Did

We can add **kwargs to def submit and read temperature from kwargs:

resp = openai.ChatCompletion.create(
    model=self.model,
    messages=full_messages,
    **self.function_registry.api_manifest(),
    stream=stream,
    temperature=kwargs.get("temperature", 0),
)

allow schema generation for functions with pydantic model and UUID arguments

  • ChatLab version: 1.0.0-alpha.25
  • Python version: 3.10.12
  • Operating System: Windows 11 + WSL (Ubuntu 22.04)

Description

I'm testing various functions and methods with generate_function_schema() and running into situations where the JSON serialization check is raising an exception. Details below:

What I Did

With pydantic model argument:

from chatlab.registry import generate_function_schema
from pydantic import BaseModel

class SimpleModel(BaseModel):
    abc: str
    xyz: int

def sample_function_with_model(
    foo: str,
    bar: int,
    baz: SimpleModel
) -> None:
    """test"""
    print(f"{foo=}, {bar=}, {baz=}")

schema = generate_function_schema(sample_function_with_model)
schema
     10     """test"""
     11     print(f"{foo=}, {bar=}, {baz=}")
---> 13 schema = generate_function_schema(sample_function_with_model)
     14 schema

File ~/dev/.venv/lib/python3.10/site-packages/chatlab/registry.py:160, in generate_function_schema(function, parameter_schema)
    158 sig = inspect.signature(function)
    159 for name, param in sig.parameters.items():
--> 160     prop_schema, is_required = process_parameter(name, param)
    161     schema_properties[name] = prop_schema
    162     if is_required:

File ~/dev/.venv/lib/python3.10/site-packages/chatlab/registry.py:128, in process_parameter(name, param)
    126 def process_parameter(name, param):
    127     """Process a function parameter for use in a JSON schema."""
--> 128     prop_schema, is_required = process_type(param.annotation, param.default == inspect.Parameter.empty)
    129     if param.default != inspect.Parameter.empty:
    130         prop_schema["default"] = param.default

File ~/dev/.venv/lib/python3.10/site-packages/chatlab/registry.py:123, in process_type(annotation, is_required)
    118     return {
    119         "type": JSON_SCHEMA_TYPES[annotation],
    120     }, is_required
    122 else:
--> 123     raise Exception(f"Type annotation must be a JSON serializable type ({ALLOWED_TYPES})")

Exception: Type annotation must be a JSON serializable type ([<class 'int'>, <class 'str'>, <class 'bool'>, <class 'float'>, <class 'list'>, <class 'dict'>, typing.List, typing.Dict])

With uuid argument:

import uuid

def sample_function_with_uuid(
    foo: str,
    bar: int,
    baz: uuid.UUID
) -> None:
    """test"""
    print(f"{foo=}, {bar=}, {baz=}")

schema = generate_function_schema(sample_function_with_uuid)
schema
      8     """test"""
      9     print(f"{foo=}, {bar=}, {baz=}")
---> 11 schema = generate_function_schema(sample_function_with_uuid)
     12 schema

File ~/dev/.venv/lib/python3.10/site-packages/chatlab/registry.py:160, in generate_function_schema(function, parameter_schema)
    158 sig = inspect.signature(function)
    159 for name, param in sig.parameters.items():
--> 160     prop_schema, is_required = process_parameter(name, param)
    161     schema_properties[name] = prop_schema
    162     if is_required:

File ~/dev/.venv/lib/python3.10/site-packages/chatlab/registry.py:128, in process_parameter(name, param)
    126 def process_parameter(name, param):
    127     """Process a function parameter for use in a JSON schema."""
--> 128     prop_schema, is_required = process_type(param.annotation, param.default == inspect.Parameter.empty)
    129     if param.default != inspect.Parameter.empty:
    130         prop_schema["default"] = param.default

File ~/dev/.venv/lib/python3.10/site-packages/chatlab/registry.py:123, in process_type(annotation, is_required)
    118     return {
    119         "type": JSON_SCHEMA_TYPES[annotation],
    120     }, is_required
    122 else:
--> 123     raise Exception(f"Type annotation must be a JSON serializable type ({ALLOWED_TYPES})")

Exception: Type annotation must be a JSON serializable type ([<class 'int'>, <class 'str'>, <class 'bool'>, <class 'float'>, <class 'list'>, <class 'dict'>, typing.List, typing.Dict])

Expected schema:

{
    "name": "sample_function_with_uuid",
    "description": "test",
    "parameters": {
        "type": "object",
        "properties": {
            "foo": {"type": "string"},
            "bar": {"type": "integer"},
            "baz": {"type": "string", "format": "uuid"},
        },
        "required": ["foo", "bar", "baz"],
    },
}
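
A possible interim workaround (untested): register the function with an explicit pydantic model via the documented chat.register(fn, Model) form, since pydantic already knows how to emit "format": "uuid" for UUID fields:

import uuid

from pydantic import BaseModel

class SampleArgs(BaseModel):
    foo: str
    bar: int
    baz: uuid.UUID

chat.register(sample_function_with_uuid, SampleArgs)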

Unknown Finish Reason

  • ChatLab version: 1.3.0
  • Python version: 3.10.12
  • Operating System: CentOS

Description

Hi all! I'm trying to run the tutorial from functionary here after launching a vLLM server. The code returns:

INFO:     127.0.0.1:48176 - "POST /v1/chat/completions HTTP/1.1" 200 OK
UNKNOWN FINISH REASON: 'tool_calls'. If you see this message, report it as an issue to https://github.com/rgbkrk/chatlab/issues
USER: What is the price of the car named 'Rhino'?

Any thoughts on what could be the problem? Thanks!

What I Did

import chatlab
import asyncio

def get_car_price(car_name: str):
    """this function is used to get the price of the car given the name
    :param car_name: name of the car to get the price
    """
    car_price = {
        "rhino": {"price": "$20000"},
        "elephant": {"price": "$25000"}
    }
    for key in car_price:
        if key in car_name.lower():
            return {"price": car_price[key]}
    return {"price": "unknown"}

chat = chatlab.Chat(model="meetkai/functionary-small-v2.2", base_url="http://localhost:8000/v1", api_key="functionary")
chat.register(get_car_price)
asyncio.run(chat.submit("What is the price of the car named 'Rhino'?", stream=False))

for message in chat.messages:
    role = message["role"].upper()
    if "function_call" in message:
        func_name = message["function_call"]["name"]
        func_param = message["function_call"]["arguments"]
        print(f"{role}: call function: {func_name}, arguments:{func_param}")
    else:
        content = message["content"]
        print(f"{role}: {content}")

Hallucinations

If you see them, report them here. I'll start.

List so far:

  • git.list_status

Convert Notebook examples to documentation

A wealth of notebooks in here can be converted to full MDX within website/. Each notebook is a worthy PR to get started with as a new page in src/docs/. These are the current notebooks in need of conversion to narrative, linked documentation with interactives:

Tasks:

For each notebook:

  1. Convert the Jupyter notebook to a Markdown file.
  2. Convert the Markdown file to an MDX file, adding JSX components where necessary.
  3. Add the MDX file to the website/docs/ directory.
  4. Create a PR for each notebook conversion.

Acceptance Criteria:

  • Each Jupyter notebook is successfully converted to an MDX file.
  • Each MDX file is added to the website/docs/ directory of the website.
  • A PR is created for each notebook conversion.

Alternate approach: automation

Originally I was setting up automated tooling to bring notebooks into Docusaurus. Doing that effectively requires more work in Docusaurus; if anyone's ready for that, then by all means!

vdom does not display in VS Code

It looks like the vdom renderer in VS Code needs some TLC. To show the output, I have to click "Change Presentation" on the left-hand side of the outputs:

vdom-not-working-in-vscode.mp4

Exceptions from function calls break the chat

  • ChatLab version: 2.0
  • Python version: 3.12
  • Operating System: macOS

Description

I wanted to continue a chat even after a function had generated an exception.

What I Did

import chatlab

def add_two_numbers(a: float, b: float) -> float:
    """Add two numbers together. Raises an exception when the numbers are in the wrong order."""
    if b < a:
        return a + b
    raise Exception("I can't do math")

chat = chatlab.Chat(model=chatlab.models.GPT_4_0125_PREVIEW, chat_functions=[add_two_numbers])
await chat("Please add 1 + 2 for me")

This generates the exception:

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Cell In[11], line 10
      7     raise Exception("I can't do math")
      9 chat = chatlab.Chat(model=chatlab.models.GPT_4_0125_PREVIEW, chat_functions=[add_two_numbers])
---> 10 await chat("Please add 1 + 2 for me")

File ~/Wharf/src/chatlab/chatlab/chat.py:125, in Chat.__call__(self, stream, *messages, **kwargs)
    123 async def __call__(self, *messages: Union[ChatCompletionMessageParam, str], stream=True, **kwargs):
    124     """Send messages to the chat model and display the response."""
--> 125     return await self.submit(*messages, stream=stream, **kwargs)

File ~/Wharf/src/chatlab/chatlab/chat.py:350, in Chat.submit(self, stream, *messages, **kwargs)
    347 self.append(assistant_tool_calls(tool_arguments))
    348 for tool_argument in tool_arguments:
    349     # Oh crap I need to append the big assistant call of it too. May have to assume we've done it by here.
--> 350     function_called = await tool_argument.call(self.function_registry)
    351     # TODO: Format the tool message
    352     self.append(function_called.get_tool_called_message())

File ~/Wharf/src/chatlab/chatlab/views/tools.py:146, in ToolArguments.call(self, function_registry)
    144 # Execute the function and get the result
    145 try:
--> 146     output = await function_registry.call(function_name, function_args)
    147 except FunctionArgumentError as e:
    148     self.finished = True

File ~/Wharf/src/chatlab/chatlab/registry.py:474, in FunctionRegistry.call(self, name, arguments)
    472     result = await function(**prepared_arguments)
    473 else:
--> 474     result = function(**prepared_arguments)
    475 return result

Cell In[11], line 7, in add_two_numbers(a, b)
      5 if b < a:
      6     return a + b
----> 7 raise Exception("I can't do math")

Exception: I can't do math

and all future calls to the chat generate a 400 error code from OpenAI:

await chat("what went wrong there?")

---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
Cell In[10], line 1
----> 1 await chat("what went wrong there?")

File ~/Wharf/src/chatlab/chatlab/chat.py:125, in Chat.__call__(self, stream, *messages, **kwargs)
    123 async def __call__(self, *messages: Union[ChatCompletionMessageParam, str], stream=True, **kwargs):
    124     """Send messages to the chat model and display the response."""
--> 125     return await self.submit(*messages, stream=stream, **kwargs)

File ~/Wharf/src/chatlab/chatlab/chat.py:302, in Chat.submit(self, stream, *messages, **kwargs)
    299 # Due to the strict response typing based on `Literal` typing on `stream`, we have to process these
    300 # two cases separately
    301 if stream:
--> 302     streaming_response = await client.chat.completions.create(
    303         **chat_create_kwargs,
    304         stream=True,
    305     )
    307     self.append(*messages)
    309     finish_reason, function_call_request, tool_arguments = await self.__process_stream(streaming_response)

File ~/.pyenv/versions/3.12.1/lib/python3.12/site-packages/openai/resources/chat/completions.py:1291, in AsyncCompletions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
   1242 @required_args(["messages", "model"], ["messages", "model", "stream"])
   1243 async def create(
   1244     self,
   (...)
   1289     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
   1290 ) -> ChatCompletion | AsyncStream[ChatCompletionChunk]:
-> 1291     return await self._post(
   1292         "/chat/completions",
   1293         body=maybe_transform(
   1294             {
   1295                 "messages": messages,
   1296                 "model": model,
   1297                 "frequency_penalty": frequency_penalty,
   1298                 "function_call": function_call,
   1299                 "functions": functions,
   1300                 "logit_bias": logit_bias,
   1301                 "logprobs": logprobs,
   1302                 "max_tokens": max_tokens,
   1303                 "n": n,
   1304                 "presence_penalty": presence_penalty,
   1305                 "response_format": response_format,
   1306                 "seed": seed,
   1307                 "stop": stop,
   1308                 "stream": stream,
   1309                 "temperature": temperature,
   1310                 "tool_choice": tool_choice,
   1311                 "tools": tools,
   1312                 "top_logprobs": top_logprobs,
   1313                 "top_p": top_p,
   1314                 "user": user,
   1315             },
   1316             completion_create_params.CompletionCreateParams,
   1317         ),
   1318         options=make_request_options(
   1319             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
   1320         ),
   1321         cast_to=ChatCompletion,
   1322         stream=stream or False,
   1323         stream_cls=AsyncStream[ChatCompletionChunk],
   1324     )

File ~/.pyenv/versions/3.12.1/lib/python3.12/site-packages/openai/_base_client.py:1578, in AsyncAPIClient.post(self, path, cast_to, body, files, options, stream, stream_cls)
   1564 async def post(
   1565     self,
   1566     path: str,
   (...)
   1573     stream_cls: type[_AsyncStreamT] | None = None,
   1574 ) -> ResponseT | _AsyncStreamT:
   1575     opts = FinalRequestOptions.construct(
   1576         method="post", url=path, json_data=body, files=await async_to_httpx_files(files), **options
   1577     )
-> 1578     return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)

File ~/.pyenv/versions/3.12.1/lib/python3.12/site-packages/openai/_base_client.py:1339, in AsyncAPIClient.request(self, cast_to, options, stream, stream_cls, remaining_retries)
   1330 async def request(
   1331     self,
   1332     cast_to: Type[ResponseT],
   (...)
   1337     remaining_retries: Optional[int] = None,
   1338 ) -> ResponseT | _AsyncStreamT:
-> 1339     return await self._request(
   1340         cast_to=cast_to,
   1341         options=options,
   1342         stream=stream,
   1343         stream_cls=stream_cls,
   1344         remaining_retries=remaining_retries,
   1345     )

File ~/.pyenv/versions/3.12.1/lib/python3.12/site-packages/openai/_base_client.py:1429, in AsyncAPIClient._request(self, cast_to, options, stream, stream_cls, remaining_retries)
   1426         await err.response.aread()
   1428     log.debug("Re-raising status error")
-> 1429     raise self._make_status_error_from_response(err.response) from None
   1431 return self._process_response(
   1432     cast_to=cast_to,
   1433     options=options,
   (...)
   1436     stream_cls=stream_cls,
   1437 )

BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_KTJddJ7ScPS972aOPs2Owwdl", 'type': 'invalid_request_error', 'param': 'messages.[2].role', 'code': None}}

Ideally, the exception result would be added to the message log, allowing the chat to continue. (And perhaps even allowing the model to try a fix for the exception...)
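
In the meantime, a user-side workaround could be to wrap chat functions so exceptions come back as strings the model can read (a sketch; functools.wraps preserves the docstring and signature that schema generation relies on):

import functools

def return_exceptions(fn):
    """Turn exceptions raised by a chat function into readable string results."""
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as e:
            return f"Error: {e!r}"
    return wrapped

chat = chatlab.Chat(chat_functions=[return_exceptions(add_two_numbers)])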

Handle incomplete messages

When ChatGPT stops short of finishing a function call like this:

[screenshot: a function call cut off mid-arguments]

We need to mark it as incomplete. Maybe the UI can read ▶ 𝑓 Error: Incomplete function call in that case (or signal it some other way).
