License: MIT License
gpt-cli's Introduction

gpt-cli

Command-line interface for ChatGPT, Claude, and Bard.

screenshot

Features

Coming soon: Code Interpreter support (#37)

  • Command-Line Interface: Interact with ChatGPT or Claude directly from your terminal.
  • Model Customization: Override the default model, temperature, and top_p values for each assistant, giving you fine-grained control over the AI's behavior.
  • Usage Tracking: Track your API usage with token count and price information.
  • Keyboard Shortcuts: Use Ctrl-C, Ctrl-D, and Ctrl-R shortcuts for easier conversation management and input control.
  • Multi-Line Input: Enter multi-line mode for more complex queries or conversations.
  • Markdown Support: Enable or disable markdown formatting for chat sessions to tailor the output to your preferences.
  • Predefined Messages: Set up predefined messages for your custom assistants to establish context or role-play scenarios.
  • Multiple Assistants: Easily switch between different assistants, including general, dev, and custom assistants defined in the config file.
  • Flexible Configuration: Define your assistants, model parameters, and API key in a YAML configuration file, allowing for easy customization and management.

Installation

These instructions assume a Linux/macOS machine with Python and pip available.

pip install gpt-command-line

Install the latest version from source:

pip install git+https://github.com/kharvd/gpt-cli.git

Or install by cloning the repository manually:

git clone https://github.com/kharvd/gpt-cli.git
cd gpt-cli
pip install .

Add the OpenAI API key to your .bashrc file (in the root of your home folder). In this example we use nano; you can use any text editor.

nano ~/.bashrc
export OPENAI_API_KEY=<your_key_here>
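
After saving, reload your shell configuration so the variable takes effect in the current session:

source ~/.bashrc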

Run the tool:

gpt

You can also use a gpt.yml file for configuration. See the Configuration section below.

Usage

Make sure to set the OPENAI_API_KEY environment variable to your OpenAI API key (or put it in the ~/.config/gpt-cli/gpt.yml file as described below).

usage: gpt [-h] [--no_markdown] [--model MODEL] [--temperature TEMPERATURE] [--top_p TOP_P]
              [--log_file LOG_FILE] [--log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]
              [--prompt PROMPT] [--execute EXECUTE] [--no_stream]
              [{dev,general,bash}]

Run a chat session with ChatGPT. See https://github.com/kharvd/gpt-cli for more information.

positional arguments:
  {dev,general,bash}
                        The name of assistant to use. `general` (default) is a generally helpful
                        assistant, `dev` is a software development assistant with shorter
                        responses. You can specify your own assistants in the config file
                        ~/.config/gpt-cli/gpt.yml. See the README for more information.

optional arguments:
  -h, --help            show this help message and exit
  --no_markdown         Disable markdown formatting in the chat session.
  --model MODEL         The model to use for the chat session. Overrides the default model defined
                        for the assistant.
  --temperature TEMPERATURE
                        The temperature to use for the chat session. Overrides the default
                        temperature defined for the assistant.
  --top_p TOP_P         The top_p to use for the chat session. Overrides the default top_p defined
                        for the assistant.
  --log_file LOG_FILE   The file to write logs to. Supports strftime format codes.
  --log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
                        The log level to use
  --prompt PROMPT, -p PROMPT
                        If specified, will not start an interactive chat session and instead will
                        print the response to standard output and exit. May be specified multiple
                        times. Use `-` to read the prompt from standard input. Implies
                        --no_markdown.
  --execute EXECUTE, -e EXECUTE
                        If specified, passes the prompt to the assistant and allows the user to
                        edit the produced shell command before executing it. Implies --no_stream.
                        Use `-` to read the prompt from standard input.
  --no_stream           If specified, will not stream the response to standard output. This is
                        useful if you want to use the response in a script. Ignored when the
                        --prompt option is not specified.
  --no_price            Disable price logging.
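
For example, a one-off, non-interactive query can combine --prompt with standard input (the prompt text here is just an illustration):

echo "What does chmod 755 do?" | gpt general --prompt -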

Type :q or Ctrl-D to exit, :c or Ctrl-C to clear the conversation, :r or Ctrl-R to re-generate the last response. To enter multi-line mode, enter a backslash \ followed by a new line. Exit the multi-line mode by pressing ESC and then Enter.

You can override the model parameters using --model, --temperature and --top_p arguments at the end of your prompt. For example:

> What is the meaning of life? --model gpt-4 --temperature 2.0
The meaning of life is subjective and can be different for diverse human beings and unique-phil ethics.org/cultuties-/ it that reson/bdstals89im3_jrf334;mvs-bread99ef=g22me

The dev assistant is instructed to be an expert in software development and provide short responses.

$ gpt dev

The bash assistant is instructed to be an expert in bash scripting and provide only bash commands. Use the --execute option to execute the commands. It works best with the gpt-4 model.

gpt bash -e "How do I list files in a directory?"

This will prompt you to edit the command in your $EDITOR before executing it.
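
For instance (the editor and the prompt here are illustrative):

EDITOR=vim gpt bash -e "find all files larger than 100MB in the current directory"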

Configuration

You can configure the assistants in the config file ~/.config/gpt-cli/gpt.yml. It is a YAML file with the following structure (see also config.py):

default_assistant: <assistant_name>
markdown: False
openai_api_key: <openai_api_key>
anthropic_api_key: <anthropic_api_key>
log_file: <path>
log_level: <DEBUG|INFO|WARNING|ERROR|CRITICAL>
assistants:
  <assistant_name>:
    model: <model_name>
    temperature: <temperature>
    top_p: <top_p>
    messages:
      - { role: <role>, content: <message> }
      - ...
  <assistant_name>:
    ...

You can override the parameters for the pre-defined assistants as well.

You can specify the default assistant to use by setting the default_assistant field. If you don't specify it, the default assistant is general. You can also specify the model, temperature and top_p to use for the assistant. If you don't specify them, the default values are used. These parameters can also be overridden by the command-line arguments.
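
For instance, a minimal sketch that switches the predefined dev assistant to gpt-4 (the parameter values are illustrative):

assistants:
  dev:
    model: gpt-4
    temperature: 0.2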

Example:

default_assistant: dev
markdown: True
openai_api_key: <openai_api_key>
assistants:
  pirate:
    model: gpt-4
    temperature: 1.0
    messages:
      - { role: system, content: "You are a pirate." }

$ gpt pirate

> Arrrr
Ahoy, matey! What be bringing ye to these here waters? Be it treasure or adventure ye seek, we be sailing the high seas together. Ready yer map and compass, for we have a long voyage ahead!

Other chat bots

Anthropic Claude

To use Claude, you should have an API key from Anthropic (currently there is a waitlist for API access). After getting the API key, you can add an environment variable

export ANTHROPIC_API_KEY=<your_key_here>

or a config line in ~/.config/gpt-cli/gpt.yml:

anthropic_api_key: <your_key_here>

Now you should be able to run gpt with --model claude-v1 or --model claude-instant-v1:

gpt --model claude-v1

Google Bard (PaLM 2)

As with Claude, set the Google API key

export GOOGLE_API_KEY=<your_key_here>

or a config line:

google_api_key: <your_key_here>

Run gpt with the correct model:

gpt --model chat-bison-001

gpt-cli's People

Contributors

alexanderyastrebov, audreyt, chrisjefferson, dltn, gabelli, hazzlim, id4rksid3, kharvd, maxbrito500, ykim-isabel


gpt-cli's Issues

Scrolling causes output corruption

When I try to scroll back, the same few lines repeat over and over. This happens in multiple terminal emulators. Do you have any idea what could cause this?

Doesn't actually execute commands?

It doesn't seem to be able to. If true, can this be added?

> gpt bash
Hi! I'm here to help. Type :q or Ctrl-D to exit, :c or Ctrl-C and Enter to clear
the conversation, :r or Ctrl-R to re-generate the last response. To enter
multi-line mode, enter a backslash \ followed by a new line. Exit the multi-line
mode by pressing ESC and then Enter (Meta+Enter). Try :? for help.
> list files
ls

Tokens: 129 | Price: $0.000 | Total: $0.000
> ls
ls

Tokens: 141 | Price: $0.000 | Total: $0.000

I want a REPL-like thing that I can talk to: ask it questions in plain language and get responses in plain language, but have it also suggest commands and explain what they do. I could then execute them, and it would see the executed command and its output as part of the conversation context, check whether it worked correctly, suggest a new command that builds on the output of the previous one, and so on. A sketch of the idea follows.
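
A minimal sketch of the requested loop, assuming an OpenAI-style messages list (the function and message format here are hypothetical, not part of gpt-cli):

import subprocess

def run_and_record(command, messages):
    # Hypothetical: execute the suggested command and append its output
    # to the conversation so the assistant can see the result.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    messages.append({
        "role": "user",
        "content": f"$ {command}\n{result.stdout}{result.stderr}",
    })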

Allow responding while gpt is writing back

Hey, I've been a power user of this tool for a while now. One thing I recently noticed that is slowing down my workflow is that I'd like to start responding to gpt while it's still writing to me. However, I only get the chance to write back if I stop it with Ctrl-C or wait until it is done.

Model GPT-4 not found

When I run
gpt.py --model=gpt-4

I get:

Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by
pressing ESC and then Enter (Meta+Enter).
> are you chatgpt 4?


Request Error. The last prompt was not saved: <class 'openai.error.InvalidRequestError'>: The model: `gpt-4` does not exist
The model: `gpt-4` does not exist
Traceback (most recent call last):
  File "/Users/gogl92/PhpstormProjects/gpt-cli/gptcli/session.py", line 101, in _respond
    for response in completion_iter:
  File "/Users/gogl92/PhpstormProjects/gpt-cli/gptcli/openai.py", line 20, in complete
    openai.ChatCompletion.create(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: `gpt-4` does not exist

Conditionally loaded modules

Having access to all of the best APIs in one CLI is awesome! 🚀 I've been thinking of how we could overcome the downsides:

  • Two seconds from launch to first API call, due to all of the imports
  • If I just want a CLI for OpenAI, the llama.cpp setup is a bit overkill
  • Each module will introduce idiosyncrasies like google-generativeai mandating Python 3.9+ (#29)

What if we conditionally load modules, based on the presence of OPENAI_API_KEY or config file?
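
A rough sketch of the idea (the provider class names and module paths below are assumptions, not gpt-cli's actual API):

import os

def load_providers(config):
    # Hypothetical: import each SDK only when its key is configured,
    # so unused modules don't add to startup time.
    providers = {}
    if os.environ.get("OPENAI_API_KEY") or getattr(config, "openai_api_key", None):
        from gptcli.openai import OpenAICompletionProvider  # assumed name
        providers["openai"] = OpenAICompletionProvider()
    if os.environ.get("ANTHROPIC_API_KEY") or getattr(config, "anthropic_api_key", None):
        from gptcli.anthropic import AnthropicCompletionProvider  # assumed name
        providers["anthropic"] = AnthropicCompletionProvider()
    return providers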

Pip build failure

The problem occurs on a fresh install of Fedora 39.

This is the output:

user:gpt-cli/ (main) $ pip install . | tee log.log
Defaulting to user installation because normal site-packages is not writeable
Processing /home/user/git/gpt-cli
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Collecting anthropic==0.7.7 (from gpt-command-line==0.1.4)
  Using cached anthropic-0.7.7-py3-none-any.whl.metadata (13 kB)
Requirement already satisfied: attrs==23.1.0 in /home/user/.local/lib/python3.12/site-packages (from gpt-command-line==0.1.4) (23.1.0)
Collecting black==23.1.0 (from gpt-command-line==0.1.4)
  Using cached black-23.1.0-py3-none-any.whl (174 kB)
Collecting google-generativeai==0.1.0 (from gpt-command-line==0.1.4)
  Using cached google_generativeai-0.1.0-py3-none-any.whl.metadata (3.0 kB)
Collecting openai==1.3.8 (from gpt-command-line==0.1.4)
  Using cached openai-1.3.8-py3-none-any.whl.metadata (17 kB)
Collecting prompt-toolkit==3.0.41 (from gpt-command-line==0.1.4)
  Using cached prompt_toolkit-3.0.41-py3-none-any.whl.metadata (6.5 kB)
Collecting pytest==7.3.1 (from gpt-command-line==0.1.4)
  Using cached pytest-7.3.1-py3-none-any.whl (320 kB)
Collecting PyYAML==6.0 (from gpt-command-line==0.1.4)
  Using cached PyYAML-6.0.tar.gz (124 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'error'
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [54 lines of output]
      running egg_info
      writing lib/PyYAML.egg-info/PKG-INFO
      writing dependency_links to lib/PyYAML.egg-info/dependency_links.txt
      writing top-level names to lib/PyYAML.egg-info/top_level.txt
      Traceback (most recent call last):
        File "/home/user/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/user/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/user/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 288, in <module>
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/__init__.py", line 103, in setup
          return distutils.core.setup(**attrs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 185, in setup
          return run_commands(dist)
                 ^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
          dist.run_commands()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/dist.py", line 963, in run_command
          super().run_command(command)
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
          cmd_obj.run()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 321, in run
          self.find_sources()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 329, in find_sources
          mm.run()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 551, in run
          self.add_defaults()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/egg_info.py", line 589, in add_defaults
          sdist.add_defaults(self)
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/command/sdist.py", line 112, in add_defaults
          super().add_defaults()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/sdist.py", line 251, in add_defaults
          self._add_defaults_ext()
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/command/sdist.py", line 336, in _add_defaults_ext
          self.filelist.extend(build_ext.get_source_files())
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "<string>", line 204, in get_source_files
        File "/tmp/pip-build-env-db23p709/overlay/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 107, in __getattr__
          raise AttributeError(attr)
      AttributeError: cython_sources
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

Recent commit broke the gpt usage (for me)

Current behaviour (46247d3)

~/p/g/gpt-cli ❯❯❯ ./gpt.py dev                                                 main
Traceback (most recent call last):
  File "./gpt.py", line 10, in <module>
    from gptcli.assistant import (
  File "/Users/patricknueser/projects/gpt-cli/gpt-cli/gptcli/assistant.py", line 8, in <module>
    from gptcli.llama import LLaMACompletionProvider
  File "/Users/patricknueser/projects/gpt-cli/gpt-cli/gptcli/llama.py", line 12, in <module>
    LLAMA_MODELS: Optional[dict[str, Path]] = None
TypeError: 'type' object is not subscriptable

Expected behaviour (7752b05)
~/p/g/gpt-cli ❯❯❯ ./gpt.py dev ✘ 1 main
Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by
pressing ESC and then Enter (Meta+Enter).

CLI Conflict with /usr/sbin/gpt

I followed the install instructions (installed via pip) and tried to run the samples from the Readme. None of the commands worked. Finally I ran man gpt and saw that macOS comes prepackaged with gpt, the GUID partition table maintenance utility.

The default install should probably have a "mostly" globally unique command, rather than one that comes prepackaged with one of the most popular operating systems.

Alternatively, provide instructions for creating a command alias.
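
A possible interim workaround, assuming the pip-installed executable landed in ~/.local/bin (paths vary by system):

# make the pip-installed gpt take precedence over /usr/sbin/gpt
export PATH="$HOME/.local/bin:$PATH"

# or give it an unambiguous name
alias gptcli="$HOME/.local/bin/gpt"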

error when attempting to chat with bison model

I entered a key for the bison model and got this error when attempting to chat:

Uncaught exception
Traceback (most recent call last):
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 236, in <module>
    main()
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 184, in main
    run_interactive(args, assistant)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 232, in run_interactive
    session.loop(input_provider)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 168, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 160, in process_input
    response_saved = self._respond(args)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 102, in _respond
    next_response += response
TypeError: can only concatenate str (not "NoneType") to str
An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 236, in <module>
    main()
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 184, in main
    run_interactive(args, assistant)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gpt.py", line 232, in run_interactive
    session.loop(input_provider)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 168, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 160, in process_input
    response_saved = self._respond(args)
  File "C:\Users\matth\Documents\interesting\gpt-cli\gptcli\session.py", line 102, in _respond
    next_response += response
TypeError: can only concatenate str (not "NoneType") to str

seems to throw when using the new gpt-4-turbo model

Hey, are you planning to continue to maintain this? I'm an active user.

The error below occurs with the new model gpt-4-1106-preview (https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo):

hey
Hello! How can I assist you today?

Uncaught exception
Traceback (most recent call last):
  File "/Users/timdaub/Projects/gpt-cli/gpt.py", line 239, in <module>
    main()
  File "/Users/timdaub/Projects/gpt-cli/gpt.py", line 187, in main
    run_interactive(args, assistant)
  File "/Users/timdaub/Projects/gpt-cli/gpt.py", line 235, in run_interactive
    session.loop(input_provider)
  File "/Users/timdaub/Projects/gpt-cli/gptcli/session.py", line 168, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "/Users/timdaub/Projects/gpt-cli/gptcli/session.py", line 160, in process_input
    response_saved = self._respond(args)
  File "/Users/timdaub/Projects/gpt-cli/gptcli/session.py", line 116, in _respond
    self.listener.on_chat_response(self.messages, next_message, args)
  File "/Users/timdaub/Projects/gpt-cli/gptcli/composite.py", line 59, in on_chat_response
    listener.on_chat_response(messages, response, overrides)
  File "/Users/timdaub/Projects/gpt-cli/gptcli/cost.py", line 142, in on_chat_response
    price = price_for_completion(messages, response, model)

add try-again func to handle Claude's rate limit

When I use the Claude API, it frequently returns

anthropic.api.ApiException: ('post request failed with status code: 429', {'error': {'type': 'rate_limit_error', 'message': 'Number of concurrent connections to Claude exceeds your rate limit. Please try again, or contact [email protected] to discuss your options for a rate limit increase.'}})

and then the process is killed.
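
One possible mitigation, sketched against the exception shown above (the wrapper and retry policy are hypothetical):

import time
from anthropic.api import ApiException  # exception path as shown in this report

def complete_with_retry(make_request, max_retries=5):
    # Hypothetical: retry with exponential backoff instead of letting
    # a 429 kill the session.
    for attempt in range(max_retries):
        try:
            return make_request()
        except ApiException:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    raise RuntimeError("still rate-limited after retries")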

How to use multi-line mode?

Hey there.

To enter multi-line mode, enter a backslash \ followed by a new line.

% python3 ./gpt.py
Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by
pressing ESC and then Enter (Meta+Enter).
> Hello world \
Hello! How can I assist you today?

What am I doing wrong? Thanks.

macOS 12.6 hurdles

Cleared a fair few hurdles to get this going using zsh.

Can't seem to get past this error:

gpt-cli % python3 gpt.py
ERROR:root:Uncaught exception
Traceback (most recent call last):
  File "/Users/adtm1x/gpt-cli/gpt.py", line 191, in <module>
    main()
  File "/Users/adtm1x/gpt-cli/gpt.py", line 132, in main
    read_yaml_config(config_path) if os.path.isfile(config_path) else GptCliConfig()
  File "/Users/adtm1x/gpt-cli/gptcli/config.py", line 23, in read_yaml_config
    return GptCliConfig(
TypeError: gptcli.config.GptCliConfig() argument after ** must be a mapping, not str
An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "/Users/adtm1x/gpt-cli/gpt.py", line 191, in <module>
    main()
  File "/Users/adtm1x/gpt-cli/gpt.py", line 132, in main
    read_yaml_config(config_path) if os.path.isfile(config_path) else GptCliConfig()
  File "/Users/adtm1x/gpt-cli/gptcli/config.py", line 23, in read_yaml_config
    return GptCliConfig(
TypeError: gptcli.config.GptCliConfig() argument after ** must be a mapping, not str

--model

Error given when trying to use --model gpt-4:

openai.error.InvalidRequestError: The model: `gpt-4` does not exist

FileNotFoundError: Shared library with base name 'llama' not found

I just followed the install steps from the README file and am getting this error:

Traceback (most recent call last):
  File "gpt.py", line 10, in <module>
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "gptcli/assistant.py", line 9, in <module>
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "gptcli/llama.py", line 4, in <module>
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "llama_cpp/__init__.py", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "PyInstaller/loader/pyimod02_importers.py", line 385, in exec_module
  File "llama_cpp/llama_cpp.py", line 73, in <module>
  File "llama_cpp/llama_cpp.py", line 64, in _load_shared_library
FileNotFoundError: Shared library with base name 'llama' not found
[58098] Failed to execute script 'gpt' due to unhandled exception!

Environment Info:
macOS Ventura (13.4)
Python Version 3.11.2
Pip Version 23.1.2

Quotas exceeded?

Hi, I got some strange behavior.

error

(.venv) ➜  gpt-cli git:(main) ./gpt.py
Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by
pressing ESC and then Enter (Meta+Enter).
> something fun

API Error. Type `r` or Ctrl-R to try again: <class 'openai.error.RateLimitError'>: You exceeded your current quota, please check your plan and billing details.
You exceeded your current quota, please check your plan and billing details.
Traceback (most recent call last):
  File "/Users/frederic/bin/gpt-cli/gptcli/session.py", line 96, in _respond
    for response in completion_iter:
  File "/Users/frederic/bin/gpt-cli/gptcli/openai.py", line 20, in complete
    openai.ChatCompletion.create(
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/Users/frederic/bin/gpt-cli/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.

usage screenshot


notes

I'm using the free plan, but I don't see any limitation mentioned about it.

Add Base URL option

There are various open source models and hosted LLMOps tools that are compatible with the OpenAI API, like LocalAI and Helicone. It would be great to be able to use the OPENAI_API_BASE environment variable or the config file to use gpt-cli with these systems!
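
If this were supported, usage might look like the following (the variable name follows the issue's suggestion; the URL is illustrative):

export OPENAI_API_BASE=http://localhost:8080/v1
gpt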

Can conversations be restored?

E.g. right now I had a conversation of 4k tokens with gpt4 and I accidentally did a Ctrl-c and cleared the conversation, but that was an accident. Is there any way to undo this? Or load that conversation again?

Error on input `--a b`

I have a reproducible error when the bot is given the input string `--a b`.

$ gpt
Hi! I'm here to help. Type :q or Ctrl-D to exit, :c or Ctrl-C and Enter to clear
the conversation, :r or Ctrl-R to re-generate the last response. To enter
multi-line mode, enter a backslash \ followed by a new line. Exit the multi-line
mode by pressing ESC and then Enter (Meta+Enter). Try :? for help.
> --a b
Invalid argument: a. Allowed arguments: ['model', 'temperature', 'top_p']
Invalid argument: a. Allowed arguments: ['model', 'temperature', 'top_p']
NoneType: None
$ gpt --version
gpt-cli v0.1.3
Here is the output from running `$ gpt --log_file log.txt --log_level DEBUG`:
$ cat log.txt
2023-07-17 18:26:33,531 - gptcli - INFO - Starting a new chat session. Assistant config: {'messages': [], 'temperature': 0.0, 'model': 'gpt-4'}
2023-07-17 18:26:33,539 - gptcli-session - INFO - Chat started
2023-07-17 18:26:33,539 - asyncio - DEBUG - Using selector: EpollSelector
2023-07-17 18:26:35,314 - gptcli-session - ERROR - Invalid argument: a. Allowed arguments: ['model', 'temperature', 'top_p']
NoneType: None

For context, this error came up when I copy/pasted the following rustc error message into the CLI using multiline> mode:

 15 | ) -> impl Iterator<Item = AdaptedRecord> {
    |      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `()` is not an iterator
    |
    = help: the trait `Iterator` is not implemented for `()`

 For more information about this error, try `rustc --explain E0277`

This resulted in Invalid argument: explain. Allowed arguments: ...

Loading historical commands on start

Wondering if we could easily load previous session chat prompts on start - i.e. session history. This would allow you to up arrow and choose a command you ran in a recent session. Not a huge deal if it's a lot of work. Thanks for this library!

brew install gpt-cli

As a gpt-cli user
I want to be able to install via brew
So that I can have a single command for brew install which is fast and easy

Response duplicating many times

Hi, I love this tool, I hope to help make it better!

One major issue (and sorry if it has already been raised): the response from OpenAI is duplicated many times over before the response finishes, usually restarting in the middle of the response.

1. CocoaPods

If the package was added via CocoaPods, you would remove the package entry
from the Podfile and then run:

pod install

This command will update the project and remove the unused dependencies.

2. Carthage

For Carthage, remove the line from the Cartfile that includes the package
you want to remove, then run:

carthage update

After that, you'll also need to manually remove the references from your
To remove a package from an Xcode project, you typically need to do it
through the Xcode interface or by editing the project's configuration
files, depending on how the package was added (e.g., CocoaPods, Carthage,
Swift Package Manager, or manually).

[The CocoaPods and Carthage steps above repeat verbatim roughly a dozen more times, each cycle cutting off mid-sentence and restarting, before the response finally completes:]

After that, you'll also need to manually remove the references from your
Xcode project's linked libraries and frameworks.

3. Swift Package Manager

If the package was added using Swift Package Manager, you can remove it by
opening the Xcode project and:

• Go to File > Swift Packages > Update Package Versions.
• Select the package you want to remove and click the minus sign (-) to
remove it.

If you want to use the command line, you can manually edit the
Package.swift file and remove the dependency from the dependencies array.
After saving the changes, you can run:

swift package update

4. Manually Added

For manually added packages (e.g., added as a Git submodule or just copied
into the project), you'll need to:

• Delete the package files from your project directory.
• Remove any references to the package from your Xcode project (targets,
build phases, etc.).

Keep in mind that these are general guidelines. The specific commands and
steps might vary depending on the version of your package management tools
and the configuration of your project. Always make sure to have a backup
before modifying project files or dependencies.


My config file is set to OpenAI's gpt-4-preview model.

Sometimes it errors if given special characters

I was not able to reproduce the error. It happened twice, both times when my query included special characters. The message was something like "host aborted the connection"; it might be a connection error.

Also, thanks for making this; it's good.

error on macOS 13.3

hi there-

i tried to run gpt-cli, but I got this error:

ERROR:root:Uncaught exception
Traceback (most recent call last):
  File "/Users/rescreen/gpt-cli/./gpt.py", line 191, in <module>
    main()
  File "/Users/rescreen/gpt-cli/./gpt.py", line 121, in main
    read_yaml_config(config_path) if os.path.isfile(config_path) else GptCliConfig()
  File "/Users/rescreen/gpt-cli/gptcli/config.py", line 23, in read_yaml_config
    return GptCliConfig(
TypeError: gptcli.config.GptCliConfig() argument after ** must be a mapping, not str
An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "/Users/rescreen/gpt-cli/./gpt.py", line 191, in <module>
    main()
  File "/Users/rescreen/gpt-cli/./gpt.py", line 121, in main
    read_yaml_config(config_path) if os.path.isfile(config_path) else GptCliConfig()
  File "/Users/rescreen/gpt-cli/gptcli/config.py", line 23, in read_yaml_config
    return GptCliConfig(
TypeError: gptcli.config.GptCliConfig() argument after ** must be a mapping, not str

How to install?

Hello,

I wasn't able to find an installation section. What are the recommended steps for installing?

A one-liner script would be ideal. Thanks.

Publish to pypi/install with pip

Would be great to be able to install this more easily!

Or you could add this to the README:

pip install git+https://github.com/kharvd/gpt-cli

which would work once you add a pyproject.toml to the repo.

Bard: "Your default credentials were not found."

Great to see Bard capabilities in the app! I'm using a similar CLI, but it doesn't have the ability to set a consistent 'role' like this app does. I've been working to get it running this afternoon and have run into some stumbling blocks:

  1. Unlike OpenAI/Anthropic, the default YAML file is missing an optional line for Bard.
  2. The system doesn't appear to recognize google_api_key: <insert key here> when I manually add it to the YAML file, but...
  3. It does appear to recognize api_key: <insert key here> instead. However, I don't get very far once within the package:

python3 gpt-cli/gpt.py --model chat-bison-001

Hi! I'm here to help. Type q or Ctrl-D to exit, c or Ctrl-C to clear the        
conversation, r or Ctrl-R to re-generate the last response. To enter multi-line 
mode, enter a backslash \ followed by a new line. Exit the multi-line mode by   
pressing ESC and then Enter (Meta+Enter).         

hello

An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "/home/cameron/gpt-cli/gpt.py", line 236, in <module>
    main()
  File "/home/cameron/gpt-cli/gpt.py", line 184, in main
    run_interactive(args, assistant)
  File "/home/cameron/gpt-cli/gpt.py", line 232, in run_interactive
    session.loop(input_provider)
  File "/home/cameron/gpt-cli/gptcli/session.py", line 168, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "/home/cameron/gpt-cli/gptcli/session.py", line 160, in process_input
    response_saved = self._respond(args)
  File "/home/cameron/gpt-cli/gptcli/session.py", line 101, in _respond
    for response in completion_iter:
  File "/home/cameron/gpt-cli/gptcli/google.py", line 42, in complete
    response = genai.chat(**kwargs)
  File "/home/cameron/.local/lib/python3.9/site-packages/google/generativeai/discuss.py", line 342, in chat
    return _generate_response(client=client, request=request)
  File "/home/cameron/.local/lib/python3.9/site-packages/google/generativeai/discuss.py", line 478, in _generate_response
    client = get_default_discuss_client()
  File "/home/cameron/.local/lib/python3.9/site-packages/google/generativeai/client.py", line 122, in get_default_discuss_client
    default_discuss_client = glm.DiscussServiceClient(**default_client_config)
  File "/home/cameron/.local/lib/python3.9/site-packages/google/ai/generativelanguage_v1beta2/services/discuss_service/client.py", line 430, in __init__
    self._transport = Transport(
  File "/home/cameron/.local/lib/python3.9/site-packages/google/ai/generativelanguage_v1beta2/services/discuss_service/transports/grpc.py", line 151, in __init__
    super().__init__(
  File "/home/cameron/.local/lib/python3.9/site-packages/google/ai/generativelanguage_v1beta2/services/discuss_service/transports/base.py", line 97, in __init__
    credentials, _ = google.auth.default(
  File "/home/cameron/.local/lib/python3.9/site-packages/google/auth/_default.py", line 648, in default
    raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)
google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information.

I followed the instructions via the error message's link and downloaded the gcloud CLI package. Once this step was complete, I also manually set the API key within the gcloud CLI, but the issue persists.

Overwrite assistants

I redefined the general and dev assistants to use gpt-4 by default. The help option now lists the assistants twice:

positional arguments:
  {dev,general,bash,general,dev}

token length issue

I use GPT-4, and sometimes when my message is pretty long I run into the error below. If I then adjust my input so that it is within the token limit of 8192, the prompt goes through, but the model then doesn't produce a very long response and the error resurfaces, e.g. here:

Screenshot 2023-07-01 at 21 33 18

This has happened to me twice now.

Request Error. The last prompt was not saved: <class 'openai.error.InvalidRequestError'>: This
model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens.
Please reduce the length of the messages.
This model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens. Please reduce the length of the messages.
Traceback (most recent call last):
  File "/Users/[username]/Projects/gpt-cli/gptcli/session.py", line 101, in _respond
    for response in completion_iter:
  File "/Users/[username]/Projects/gpt-cli/gptcli/openai.py", line 20, in complete
    openai.ChatCompletion.create(
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/Users/[username]/Projects/gpt-cli/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 8205 tokens. Please reduce the length of the messages.

Solution to macOS Name Collision With an Existing Tool

I installed gpt-cli via pip following the instructions. When I tried to run it in my terminal as gpt, I found that it executed a different tool that ships with macOS, the "GUID partition table maintenance utility" located in /usr/sbin.

(screenshot from 2023-11-26 omitted)

To solve this problem I renamed gpt-cli's executable from "gpt" to "gpt-cli". Now I can run gpt-cli via the gpt-cli command in my terminal.

Solution:

Find the executable. In my case it was located in the /usr/local/bin folder and named "gpt". To check that it is the right one, run ./gpt from inside /usr/local/bin.

Rename it to whatever you want. I wanted to run the tool via the gpt-cli command, so I executed mv gpt gpt-cli; replace "gpt-cli" with any name you prefer.

(screenshot from 2023-11-26 omitted)
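For anyone hitting the same collision: the shell runs whichever gpt appears first on PATH (type -a gpt lists every candidate in resolution order). A small Python equivalent of that check, useful before renaming anything:

# Hypothetical sketch: list every executable named `gpt` on PATH, in the
# order the shell searches, then show which one actually wins.
import os
import shutil

for directory in os.environ["PATH"].split(os.pathsep):
    candidate = os.path.join(directory, "gpt")
    if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
        print(candidate)

print("shell resolves to:", shutil.which("gpt"))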

Error with circular import

Traceback (most recent call last):
  File "/Users/timdaub/Projects/updated-gpt-cli/gptcli/gpt.py", line 11, in <module>
    import openai
  File "/Users/timdaub/Projects/updated-gpt-cli/gptcli/openai.py", line 3, in <module>
    from openai import OpenAI
ImportError: cannot import name 'OpenAI' from partially initialized module 'openai' (most likely due to a circular import) (/Users/timdaub/Projects/updated-gpt-cli/gptcli/openai.py)

Why does it even invoke the openai module in the first place?
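What the traceback suggests (an inference from the file paths, not confirmed by the maintainers): the project's own gptcli/openai.py is being picked up as the top-level openai module. That happens when gpt.py is run directly as a script, because Python puts the script's directory, gptcli/, at the front of sys.path; import openai then finds gptcli/openai.py, whose own from openai import OpenAI points back at the half-initialized module:

# Hypothetical reproduction of the shadowing.
import sys

# When run as `python gptcli/gpt.py`, sys.path[0] is .../gptcli, so the
# local openai.py shadows the installed openai SDK.
print(sys.path[0])

# Running it as a module keeps the package's parent directory on sys.path
# instead, so the installed SDK wins:
#   python -m gptcli.gpt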

Usage tracking not working.

Hi, I can't see any usage tracking output in the terminal, unlike what the screenshots show.

Running on Ubuntu 20 with Python 3.8.

Proposal: support more terminal keyboard commands

Loving the CLI tool, and one thing I keep finding myself doing out of habit is using terminal keyboard commands like:

  • option + arrow (move forward or back a word)
  • option + delete (delete the previous word)
  • ctrl + e (go to end of line)
  • ctrl + a (go to beginning of line)
  • ctrl + u (delete entire line)

Unsure if listening for these key bindings is an easy addition, but if so it would be really nice, as it more closely mirrors the terminal experience; a rough sketch follows.
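Since the tracebacks elsewhere in this thread show the prompt is built on prompt_toolkit, most of these may already be within reach: prompt_toolkit's Emacs editing mode binds Ctrl-A, Ctrl-E and Ctrl-U out of the box, and extra bindings can be registered explicitly. A sketch under that assumption (the binding shown is illustrative):

# Hypothetical sketch: enable Emacs-style editing and add a custom binding.
from prompt_toolkit import PromptSession
from prompt_toolkit.enums import EditingMode
from prompt_toolkit.key_binding import KeyBindings

kb = KeyBindings()

@kb.add("escape", "b")  # many terminals send option/alt as an escape prefix
def _(event):
    # Move back one word, Emacs-style.
    buf = event.app.current_buffer
    buf.cursor_position += buf.document.find_previous_word_beginning() or 0

session = PromptSession(editing_mode=EditingMode.EMACS, key_bindings=kb)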

Thanks again!

"Server Overloaded" message at Anthropic breaks sessions often with 'uncaught exception'

When the servers at Anthropic are overloaded, they send back an error, sometimes with a 529 status and sometimes with nothing, leading to this:

anthropic.APIStatusError: {'type': 'error', 'error': {'details': None, 'type': 'overloaded_error', 'message': 'Overloaded'}}
An uncaught exception occurred. Please report this issue on GitHub.
Traceback (most recent call last):
  File "/home/gnewt/.pyenv/versions/3.12-dev/bin/gpt", line 8, in <module>
    sys.exit(main())

This crashes the session and wipes Claude's conversation memory. Can the script be changed to tolerate a "server overloaded" response without breaking? These errors are pretty common.
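One possible approach is retrying with exponential backoff instead of letting the exception propagate and kill the session (a sketch, not how gpt-cli currently handles it; the function name is illustrative):

# Hypothetical sketch: retry overloaded Anthropic requests with backoff.
import time

import anthropic

def complete_with_retry(send_request, max_retries=5):
    for attempt in range(max_retries):
        try:
            return send_request()  # e.g. a closure around the API call
        except anthropic.APIStatusError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...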

Problem with using pipe and reading from stdin

Hello. I have encountered a problem with the tool when I try to pipe text into it:

$ echo "hi" | gpt general
Warning: Input is not a terminal (fd=0).
Hi! I'm here to help. Type :q or Ctrl-D to exit, :c or Ctrl-C and Enter to clear
the conversation, :r or Ctrl-R to re-generate the last response. To enter
multi-line mode, enter a backslash \ followed by a new line. Exit the multi-line
mode by pressing ESC and then Enter (Meta+Enter). Try :? for help.
^[[36;1R> hi
Hello! How can I assist you today?

                                                    Tokens: 22 | Price: $0.000 | Total: $0.000
^[[40;1R>
Uncaught exception
Traceback (most recent call last):
  File "/home/kuba/.local/bin/gpt", line 8, in <module>
    sys.exit(main())
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/gpt.py", line 189, in main
    run_interactive(args, assistant)
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/gpt.py", line 237, in run_interactive
    session.loop(input_provider)
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/session.py", line 183, in loop
    while self.process_input(*input_provider.get_user_input()):
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/cli.py", line 148, in get_user_input
    while (next_user_input := self._request_input()) == "":
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/cli.py", line 197, in _request_input
    line = self.prompt()
  File "/home/kuba/.local/lib/python3.10/site-packages/gptcli/cli.py", line 186, in prompt
    return self.prompt_session.prompt(
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/shortcuts/prompt.py", line 1035, in prompt
    return self.app.run(
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 961, in run
    return loop.run_until_complete(coro)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 875, in run_async
    return await _run_async(f)
  File "/home/kuba/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 740, in _run_async
    result = await f
EOFError
An uncaught exception occurred. Please report this issue on GitHub.

I would expect it to work the same way as with the following syntax:

$ gpt general --prompt "$(echo hi)"
Hello! How can I assist you today?%

Could you advise whether I'm doing something incorrectly, or is this a bug in the tool?
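The traceback suggests the interactive prompt keeps reading from stdin after the piped text is consumed and hits end-of-file. A sketch of one way a CLI can handle this, falling back to one-shot mode when stdin is not a terminal (illustrative, not gpt-cli's current behavior):

# Hypothetical sketch: treat piped input as a one-shot prompt instead of
# starting an interactive session, which raises EOFError on a closed stdin.
import sys

def resolve_mode():
    if not sys.stdin.isatty():
        piped = sys.stdin.read().strip()
        return ("one_shot", piped)  # behave like: gpt general --prompt "..."
    return ("interactive", None)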

Running in Docker

Hey, I made a very simple Dockerfile on my local machine and it worked. Do you think it would be interesting to publish a Docker image for the CLI?
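For reference, a minimal sketch of what such a Dockerfile could look like; this is an illustration, not the one from this report:

# Hypothetical Dockerfile sketch for running gpt-cli in a container.
FROM python:3.11-slim

RUN pip install --no-cache-dir gpt-command-line

# Supply the key at runtime, e.g.:
#   docker run -it -e OPENAI_API_KEY=sk-... gpt-cli
ENTRYPOINT ["gpt"]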

Model Parameter Not Functioning as Expected

When using the command-line interface for ChatGPT, the --model parameter seems not to be working as intended. When I attempt to set the model to GPT-4, the application returns responses claiming to be GPT-3.

Steps to Reproduce

  1. Set the model to GPT-4 using the command:
    % gpt --model gpt-4 -p "Are you chatgpt-4?"
  2. The response was:
    As an AI model developed by OpenAI, I'm currently based on GPT-3. As of now, GPT-4 has not been released.
  3. Defining a new assistant in the config file and omitting the --model parameter yielded the same response.
    No, I'm an AI developed by OpenAI and currently known as ChatGPT-3. As of now, there is no ChatGPT-4.

Expected Behavior

The application should respond using the model specified on the command line or configured for the assistant in the config file.

Actual Behavior

The application responds as if it were GPT-3, irrespective of the model set on the command line or in the config file.
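Worth noting (a general caveat, not a confirmation that the flag works): asking a model which model it is is unreliable, because GPT-4's training data predates its own release and it often answers as GPT-3. The model field of the API response records which model actually served the request; a check using the pre-1.0 openai library visible in this thread's tracebacks:

# Hypothetical check: inspect the response's `model` field rather than
# the model's self-description.
import openai

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp["model"])  # e.g. "gpt-4-0613" when GPT-4 served the call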
