chatblade's People

Contributors

anakaiti, callumlocke, csabahenk, danthedaniel, deadolus, erjanmx, fliepeltje, npiv, phromo, velfi, wilmerwang

chatblade's Issues

Crash on ctrl+c

It seems like chatblade crashes when it receives Ctrl-C. Since this is a CLI app, I would expect it to handle the exit request a little more gracefully, rather than with a 20-line traceback into Python internals.
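A graceful exit can be sketched by wrapping the entry point; `run_cli` and its wiring are illustrative, not chatblade's actual code:

```python
import sys

def run_cli(main):
    """Wrap a CLI entry point (`main` is any zero-arg callable) so that
    Ctrl-C exits quietly instead of dumping a traceback."""
    try:
        return main()
    except KeyboardInterrupt:
        # 130 = 128 + SIGINT, the conventional shell exit status for Ctrl-C
        print("interrupted", file=sys.stderr)
        return 130
```

The wrapper returns an exit status, so the real script would end with `sys.exit(run_cli(main))`.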

gpt-4 does not exist

Linux command: chatblade -c 4 "example text"
Return: openai error: The model: gpt-4 does not exist
According to the readme, I think this is incorrect behavior.

error providing prompt file

Trying to use a prompt file with chatblade (tried installing via pip, brew, and tea):

chatblade -p testing hello 

$ Prompt ~/.config/chatblade/testing not found in ~/.config/chatblade/testing.yaml

echo $PROMPT > ~/.config/chatblade/testing

chatblade -p testing hello

Traceback (most recent call last):
  File "/opt/homebrew/bin/chatblade", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.0.2/libexec/lib/python3.11/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/opt/homebrew/Cellar/chatblade/0.0.2/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 164, in cli
    handle_input(query, params)
  File "/opt/homebrew/Cellar/chatblade/0.0.2/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 139, in handle_input
    messages = chat.init_conversation(query, prompt_config["system"])
                                             ~~~~~~~~~~~~~^^^^^^^^^^
TypeError: 'NoneType' object is not subscriptable
error

$ Prompt ~/.config/chatblade/testing not found in ~/.config/chatblade/testing.yaml
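The TypeError above suggests the prompt lookup returned None, which was then indexed with `["system"]`. A defensive sketch (the function name and error message are hypothetical, not chatblade's code):

```python
def get_system_prompt(prompt_config, name):
    """Fail with a readable message instead of `TypeError: 'NoneType'
    object is not subscriptable` when the prompt lookup returned None.
    `prompt_config` is whatever was loaded from ~/.config/chatblade/<name>.yaml."""
    if not prompt_config or "system" not in prompt_config:
        raise SystemExit(f"prompt '{name}' is missing a 'system' key")
    return prompt_config["system"]
```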

All previous answers get printed for new chatblade calls with existing session

I'm not sure if this is a bug or I have just specified that I want this functionality somewhere by accident:
I am using

chatblade \
	--session "$SESSION_NAME" \
	--raw \
	--no-format \
	--only \
	"$query"

to call chatblade every time I have a new query (instead of using the interactive mode). What I have noticed is that every time I ask another question, chatblade also prints all of the answers to previous questions. E.g. if I have asked three questions so far and ask a fourth (all with separate/new invocations of the above command), it outputs the answers to the previous three questions as well as the output of the fourth. This is quite annoying; I would like to see only the answer to my latest question (I do want it to remember the session's previous questions and answers, hence I specify the session manually).
Thanks for this awesome tool!

It doesn't accept any API key

What am I doing wrong?

chatblade --openai-api-key

returns:

no query or option given. nothing to do...

while:

chatblade --openai-api-key key

returns:

openai error: Incorrect API key provided: key. You can find your API key at https://platform.openai.com/account/api-keys.

Also, can someone help me set it up as an environment variable, please?
I'm on macOS Ventura, M1.

Thanks in advance!
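One common way to set the key for zsh on macOS (the key value below is a placeholder):

```shell
# ~/.zshrc -- replace sk-... with your real key
export OPENAI_API_KEY="sk-..."
# then reload the shell config and verify:
#   source ~/.zshrc
#   echo $OPENAI_API_KEY
```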

Feature: option to rebind submission key in interactive mode

I think that it would be great to be able to change the key binding of the "submit" key (default: <Enter>) when working with chatblade in interactive mode. I would like to be able to use <Shift-Enter> or something comparable.

Implementation question(s):

  • Would you prefer that this be exposed as a command-line argument? Alternatively, if it were to be an option in ~/.config/chatblade, how would we differentiate that from saved prompts?
  • Would we still be able to use the rich Prompt.ask method for querying user input? At first glance I don't see a way to override key bindings there.

Certificate verify failed: self signed certificate in certificate chain

I'm getting cert errors running chatblade currently:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 467, in _make_request
    self._validate_conn(conn)
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1092, in _validate_conn
    conn.connect()
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connection.py", line 642, in connect
    sock_and_verified = _ssl_wrap_socket_and_match_hostname(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connection.py", line 783, in _ssl_wrap_socket_and_match_hostname
    ssl_sock = ssl_wrap_socket(
               ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 469, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 513, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 517, in wrap_socket
    return self.sslsocket_class._create(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1075, in _create
    self.do_handshake()
  File "/opt/homebrew/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1346, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 790, in urlopen
    response = self._make_request(
               ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 491, in _make_request
    raise new_e
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 874, in urlopen
    return self.urlopen(
           ^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 874, in urlopen
    return self.urlopen(
           ^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 844, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_requestor.py", line 596, in request_raw
    result = _thread_context.session.request(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/requests/adapters.py", line 517, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by SSLError(SSLCertVerificationError(1, '[SSL:CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)')))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/homebrew/bin/chatblade", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 152, in cli
    handle_input(query, params)
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 81, in handle_input
    messages = fetch_and_cache(messages, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 14, in fetch_and_cache
    result = chat.query_chat_gpt(messages, params)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/chat.py", line 121, in query_chat_gpt
    result = openai.ChatCompletion.create(messages=dict_messages, **config)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_requestor.py", line 288, in request
    result = self.request_raw(
             ^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_requestor.py", line 609, in request_raw
    raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)')))

I've tested other OpenAI API endpoints manually with curl and don't get these errors there. I've tried uninstalling and reinstalling chatblade (via brew) to pick up newer dependencies, but that didn't help.
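When a corporate proxy re-signs TLS (a common cause of this error, though only an assumption here), pointing Python's HTTP stack at the proxy's CA bundle sometimes helps; the bundle path below is an example:

```shell
# export the proxy's CA bundle so requests/ssl trust the re-signed chain
export REQUESTS_CA_BUNDLE="$HOME/corp-ca.pem"   # read by the requests library
export SSL_CERT_FILE="$HOME/corp-ca.pem"        # read by Python's ssl module
```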

Using CTRL-D in interactive mode throws an EOFError

query (type 'quit' to exit): : Foo
╭──────────────────────────────────────────────────────── assistant ────────────────────────────────────────────────────────╮
│ Bar                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
query (type 'quit' to exit): : <CTRL-D>
Traceback (most recent call last):
  File "/usr/bin/chatblade", line 33, in <module>
    sys.exit(load_entry_point('chatblade==0.0.2', 'console_scripts', 'chatblade')())
  File "/usr/lib/python3.10/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/usr/lib/python3.10/site-packages/chatblade/cli.py", line 164, in cli
    handle_input(query, params)
  File "/usr/lib/python3.10/site-packages/chatblade/cli.py", line 144, in handle_input
    start_repl(None, params)
  File "/usr/lib/python3.10/site-packages/chatblade/cli.py", line 116, in start_repl
    query = Prompt.ask("[yellow]query (type 'quit' to exit): [/yellow]")
  File "/usr/lib/python3.10/site-packages/rich/prompt.py", line 141, in ask
    return _prompt(default=default, stream=stream)
  File "/usr/lib/python3.10/site-packages/rich/prompt.py", line 274, in __call__
    value = self.get_input(self.console, prompt, self.password, stream=stream)
  File "/usr/lib/python3.10/site-packages/rich/prompt.py", line 203, in get_input
    return console.input(prompt, password=password, stream=stream)
  File "/usr/lib/python3.10/site-packages/rich/console.py", line 2123, in input
    result = input()
EOFError

CTRL-D is often used to quit interactive prompts, so it shouldn't throw an error.
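One possible fix, sketched here with a plain `input()` stand-in rather than rich's Prompt.ask: treat EOF as a quit request.

```python
def read_query(input_fn=input):
    """Read one query; return None on Ctrl-D (EOFError) or Ctrl-C so the
    caller can exit the REPL cleanly. `input_fn` is injectable for testing."""
    try:
        return input_fn("query (type 'quit' to exit): ")
    except (EOFError, KeyboardInterrupt):
        return None
```

The REPL loop would then break whenever `read_query()` returns None.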

Better error handling

When I run this example command:

chatblade "is casio f-91w a good watch?" -c 4

I receive following output:

Traceback (most recent call last):
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/bin/chatblade", line 8, in <module>
    sys.exit(main())
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/cli.py", line 164, in cli
    handle_input(query, params)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/cli.py", line 154, in handle_input
    messages = fetch_and_cache(messages, params)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/cli.py", line 107, in fetch_and_cache
    response_msg, _ = chat.query_chat_gpt(messages, params)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/chat.py", line 44, in query_chat_gpt
    result = openai.ChatCompletion.create(messages=dict_messages, **config)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: `gpt-4` does not exist

Perhaps this isn't a high-priority thing, but it would be nice to show that error through rich, without the whole stack trace.
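A minimal sketch of the requested behavior: catch the exception at the top level and print one line (`cli_guard` is hypothetical, and a real fix would probably match openai.error types specifically):

```python
import sys

def cli_guard(fn):
    """Run `fn`; on failure, print a single readable line to stderr and
    return exit status 1 instead of letting the traceback escape."""
    try:
        fn()
        return 0
    except Exception as exc:
        print(f"{type(exc).__name__}: {exc}", file=sys.stderr)
        return 1
```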

Feature request: Stream output

It would be better if the output could be streamed, so the user gets immediate results from a query without waiting for the entire response to be ready.
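With the pre-1.0 openai client this could presumably be done via stream=True, which yields delta chunks. A sketch of a consumer, exercised here against hand-made chunks rather than the live API:

```python
import sys

def stream_text(chunks, out=sys.stdout):
    """Consume ChatCompletion stream chunks (pre-1.0 openai client shape:
    chunk["choices"][0]["delta"] may carry a "content" piece) and print
    each piece as it arrives. Returns the full concatenated reply."""
    parts = []
    for chunk in chunks:
        piece = chunk["choices"][0]["delta"].get("content", "")
        if piece:
            parts.append(piece)
            out.write(piece)
            out.flush()
    return "".join(parts)

# With the real client this would be fed something like:
#   stream_text(openai.ChatCompletion.create(model=..., messages=..., stream=True))
```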

FileNotFoundError on MacOS as it is expecting ~/.cache directory

$ chatblade how can I extract a still frame from a video at 22:01 with ffmpeg
Traceback (most recent call last):
  File "/opt/homebrew/bin/chatblade", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/opt/homebrew/lib/python3.11/site-packages/chatblade/cli.py", line 170, in cli
    messages = fetch_and_cache(messages, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/chatblade/cli.py", line 135, in fetch_and_cache
    to_cache(messages)
  File "/opt/homebrew/lib/python3.11/site-packages/chatblade/cli.py", line 113, in to_cache
    with open(path, "wb") as f:
         ^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/myusername/.cache/chatblade'

Running mkdir ~/.cache fixed the issue; I'm just filing this so a check can be added for whether the cache directory exists.
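A likely one-line fix is creating the directory before writing; os.makedirs with exist_ok=True is idempotent. A sketch (names are illustrative):

```python
import os

def to_cache_path(cache_dir, name):
    """Ensure the cache directory exists before writing, avoiding the
    FileNotFoundError seen when ~/.cache is absent."""
    os.makedirs(cache_dir, exist_ok=True)  # no-op if it already exists
    return os.path.join(cache_dir, name)
```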

cannot install on debian

pip install chatblade
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.

If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.

If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.

See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this,
at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.


sudo apt install python3-chatblade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python3-chatblade


pip install 'chatblade @ git+https://github.com/npiv/chatblade'
error: externally-managed-environment
(same PEP 668 "externally managed" message as above)
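As the pip message itself suggests, pipx is the usual route for installing Python CLI apps on PEP 668 "externally managed" distros:

```shell
sudo apt install pipx        # or: python3 -m pip install --user pipx
pipx install chatblade       # manages a dedicated virtualenv for the app
```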


Not detecting env variable

Steps:

  • Installed via pip install .
  • Added OPENAI_API_KEY="sk-blahblahblahblah" to my .env file
  • Confirmed that echo $OPENAI_API_KEY prints the expected result
  • Ran chatblade is this working and got "expecting openai API Key" back
  • Confirmed that chatblade is this working --openai-api-key $OPENAI_API_KEY does work

My setup is zsh, MacOS, python 3.11.2

argument --chat-gpt=4 does not work

I installed through pip about 20 minutes ago. 3.5 (the default) works great, but when I request model 4, I get the following error:

➜  chatblade -c 4 write a short poem for me
openai error: The model: `gpt-4` does not exist

Is this something you're familiar with?
🙏

Light theme issue

Really loving chatblade, use it all the time.

Small issue though: when a light terminal theme is used, code blocks still render dark, and text is typically dark on light themes.

(screenshot)

The above is an example, should I look to fix this on my end, or is this something chatblade can figure out?

Allow to set default model via env variable

For daily usage it would be helpful to be able to set the default GPT model via an env variable similar to OPENAI_API_KEY, e.g. OPENAI_API_MODEL, because most of the time I am using GPT-4 and not 3.5.
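A sketch of what reading such a variable could look like (OPENAI_API_MODEL is the proposed name, not something chatblade currently reads):

```python
import os

def default_model():
    """Pick the chat model from an env variable, falling back to 3.5."""
    return os.environ.get("OPENAI_API_MODEL", "gpt-3.5-turbo")
```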

Syntax error

chatblade python <nazwa pliku | > generates a syntax error, while chatblade "python <nazwa pliku | >" works fine. (The shell interprets the unquoted <, |, and > as redirection and pipe operators; "nazwa pliku" is Polish for "file name".) Typical input omits the quotes. This is easy to work around, but handling it better would make chatblade easier to use on the command line.

model: gpt-4 does not exist

Commit used: 4ce2873

file.md | chatblade continue this -c 4

Traceback (most recent call last):
  File "/my/home/.local/bin/chatblade", line 8, in <module>
    sys.exit(main())
  File "/my/home/.local/lib/python3.10/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/my/home/.local/lib/python3.10/site-packages/chatblade/cli.py", line 154, in cli
    messages = fetch_and_cache(messages, params)
  File "/my/home/.local/lib/python3.10/site-packages/chatblade/cli.py", line 126, in fetch_and_cache
    response_msg, _ = chat.query_chat_gpt(messages, params)
  File "/my/home/.local/lib/python3.10/site-packages/chatblade/chat.py", line 44, in query_chat_gpt
    result = openai.ChatCompletion.create(messages=dict_messages, **config)
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: gpt-4 does not exist

yaml not working

When I place a file called expert.yaml (or anything else) in the config folder as instructed, I get an error invoking the command:

chatblade -p expert

no prompt expert found in any of following locations:

  • expert
  • ~/.config/chatblade/expert
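For reference, earlier tracebacks in this thread index the loaded prompt with prompt_config["system"], so a prompt file presumably looks something like this (contents illustrative):

```yaml
# ~/.config/chatblade/expert.yaml
system: |
  You are an expert systems programmer. Answer tersely, with code first.
```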

Tag versions

It would be great if you could tag specific commits, so one can provide fixed packages.

Feature request: Allow retrying

The API is wonky and gets stuck sometimes. It would be nice to automatically time out and retry after a configurable number of seconds, or just upon pressing a button.
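A minimal retry wrapper could look like this (a sketch; a true per-request timeout would also need the API client's own timeout support):

```python
import time

def with_retries(fn, attempts=3, delay=0.0):
    """Call `fn` until it succeeds or `attempts` run out, sleeping
    `delay` seconds between tries; re-raise the last failure."""
    last = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last = exc
            time.sleep(delay)
    raise last
```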

ERROR: tiktoken 0.3.3 has requirement requests>=2.26.0, but you'll have requests 2.22.0 which is incompatible.

I installed it by cloning the repo, and this error pops up: ERROR: tiktoken 0.3.3 has requirement requests>=2.26.0, but you'll have requests 2.22.0 which is incompatible.

I managed to fix it locally with:

pip install --upgrade "requests==2.26.0"

But I think it should work out of the box.


~ pip install 'chatblade @ git+https://github.com/npiv/chatblade'
Collecting chatblade@ git+https://github.com/npiv/chatblade
Cloning https://github.com/npiv/chatblade to /tmp/pip-install-xlf9ougx/chatblade
Running command git clone -q https://github.com/npiv/chatblade /tmp/pip-install-xlf9ougx/chatblade
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting openai>=0.27.2
Downloading openai-0.27.4-py3-none-any.whl (70 kB)
|████████████████████████████████| 70 kB 2.0 MB/s
Collecting rich
Downloading rich-13.3.4-py3-none-any.whl (238 kB)
|████████████████████████████████| 238 kB 4.0 MB/s
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from chatblade@ git+https://github.com/npiv/chatblade) (5.3.1)
Collecting tiktoken
Downloading tiktoken-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.7 MB)
|████████████████████████████████| 1.7 MB 12.2 MB/s
Collecting platformdirs
Downloading platformdirs-3.4.0-py3-none-any.whl (15 kB)
Requirement already satisfied: tqdm in ./.local/lib/python3.8/site-packages (from openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (4.64.0)
Requirement already satisfied: aiohttp in ./.local/lib/python3.8/site-packages (from openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (3.8.1)
Requirement already satisfied: requests>=2.20 in /usr/lib/python3/dist-packages (from openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (2.22.0)
Collecting typing-extensions<5.0,>=4.0.0; python_version < "3.9"
Downloading typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Collecting markdown-it-py<3.0.0,>=2.2.0
Downloading markdown_it_py-2.2.0-py3-none-any.whl (84 kB)
|████████████████████████████████| 84 kB 1.3 MB/s
Collecting pygments<3.0.0,>=2.13.0
Downloading Pygments-2.15.1-py3-none-any.whl (1.1 MB)
|████████████████████████████████| 1.1 MB 8.7 MB/s
Requirement already satisfied: regex>=2022.1.18 in ./.local/lib/python3.8/site-packages (from tiktoken->chatblade@ git+https://github.com/npiv/chatblade) (2022.4.24)
Requirement already satisfied: charset-normalizer<3.0,>=2.0 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (2.0.12)
Requirement already satisfied: yarl<2.0,>=1.0 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (1.7.2)
Requirement already satisfied: aiosignal>=1.1.2 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (1.2.0)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (4.0.2)
Requirement already satisfied: attrs>=17.3.0 in /usr/lib/python3/dist-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (19.3.0)
Requirement already satisfied: multidict<7.0,>=4.5 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (6.0.2)
Requirement already satisfied: frozenlist>=1.1.1 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (1.3.0)
Collecting mdurl~=0.1
Downloading mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Requirement already satisfied: idna>=2.0 in /usr/lib/python3/dist-packages (from yarl<2.0,>=1.0->aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (2.8)
Building wheels for collected packages: chatblade
Building wheel for chatblade (PEP 517) ... done
Created wheel for chatblade: filename=chatblade-0.2.3-py3-none-any.whl size=27555 sha256=f24cf5360089bd6536bbf5c31d35d7eb063e552113175b4f90fa0e7d2517814d
Stored in directory: /tmp/pip-ephem-wheel-cache-7zc882ev/wheels/0f/f6/d1/5ae4fda48f2c65b64809bc54498c6ce0578678f9c52eaa3829
Successfully built chatblade
ERROR: tiktoken 0.3.3 has requirement requests>=2.26.0, but you'll have requests 2.22.0 which is incompatible.
Installing collected packages: openai, typing-extensions, mdurl, markdown-it-py, pygments, rich, tiktoken, platformdirs, chatblade
Attempting uninstall: pygments
Found existing installation: Pygments 2.12.0
Uninstalling Pygments-2.12.0:
Successfully uninstalled Pygments-2.12.0
Successfully installed chatblade-0.2.3 markdown-it-py-2.2.0 mdurl-0.1.2 openai-0.27.4 platformdirs-3.4.0 pygments-2.15.1 rich-13.3.4 tiktoken-0.3.3 typing-extensions-4.5.0

Unusually high estimate of tokens

$ chatblade what is 1 + 1 --raw -t
what is 1 + 1

      tokens/costs

┏━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━┓
┃ Model   ┃ Tokens ┃ Price     ┃
┡━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━┩
│ gpt-3.5 │   4103 │ $0.008206 │
│ gpt-4   │   4103 │ $0.123090 │
└─────────┴────────┴───────────┘

  • estimated costs do not include the tokens that may be returned

Fish shell support

I can't run chatblade on fish shell.

Even after setting the OPENAI_API_KEY environment variable, I get the error

openai error: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
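In fish, a variable set without -x is not exported to child processes, which may be the cause here; exporting it looks like:

```fish
# fish: -x exports the variable to child processes, -U makes it universal
set -Ux OPENAI_API_KEY sk-...
```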

Remove borders or add configuration to optionally remove them

Really handy tool! Thanks for this. Very helpful to be able to pipe into other tools now.

I like the --raw and --extract options to be able to get raw access to responses but have found myself wanting to copy/paste more of the input and output from the terminal history. Even though the borders are nice to organize the content, I find they get in the way when trying to select and copy:

(screen recording: Kapture 2023-03-21 at 09 54 31)

Results in:

│ given the above list of responses where each line is an individual response, provide up to 5 │
│ categories that summarize all of the responses and then categories each response. Provide    │
│ the categorized responses in json

I'd like to have the side borders go away to be more friendly for copying/pasting.

How can I send a pdf to gpt4?

cat my.pdf | chatblade 'explain the following pdf'

produces an error in sys.stdin.read(), called from parser.get_piped_input().

The error is UnicodeDecodeError: 'utf8' codec can't decode byte 0x9c

Is there a way to send the PDF?

Many thanks!
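chatblade reads stdin as UTF-8 text, hence the decode error on raw PDF bytes. Extracting the text first sidesteps that, e.g. with poppler's pdftotext (assuming it is installed):

```shell
# extract text with poppler's pdftotext, then pipe the text to chatblade
pdftotext my.pdf - | chatblade 'explain the following pdf'
```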

FileExistsError on Windows

Thank you for developing this tool, it has been very helpful. However, I have encountered issues with the session feature while using it on Windows. When starting a query, an error message appears as follows:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "E:\python\Scripts\chatblade.exe\__main__.py", line 7, in <module>
  File "E:\python\Lib\site-packages\chatblade\__main__.py", line 5, in main
    cli.cli()
  File "E:\python\Lib\site-packages\chatblade\cli.py", line 152, in cli
    handle_input(query, params)
  File "E:\python\Lib\site-packages\chatblade\cli.py", line 81, in handle_input
    messages = fetch_and_cache(messages, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\python\Lib\site-packages\chatblade\cli.py", line 26, in fetch_and_cache
    storage.to_cache(messages, params.session or utils.scratch_session)
  File "E:\python\Lib\site-packages\chatblade\storage.py", line 56, in to_cache
    os.rename(file_path_tmp, file_path)
FileExistsError: [WinError 183] 当文件已存在时,无法创建该文件。: 'C:\\Users\\qipao\\AppData\\Local\\chatblade\\chatblade\\Cache\\chatblade\\last.yaml.eDdQy479DK' -> 'C:\\Users\\qipao\\AppData\\Local\\chatblade\\chatblade\\Cache\\chatblade\\last.yaml'

It says that you cannot create a file when it already exists. I tried on my Mac and everything worked fine. Can this be fixed?
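On Windows, os.rename raises if the destination already exists, while os.replace overwrites on every platform; presumably storage.to_cache could use it. A sketch of the pattern:

```python
import os

def atomic_write(path, data):
    """Write to a temp file, then move it into place. os.replace, unlike
    os.rename, overwrites an existing destination on Windows as well as POSIX."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
    os.replace(tmp, path)
```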
