
gpt-review's Introduction

gpt-review


A Python-based CLI and GitHub Action that uses OpenAI or Azure OpenAI models to review the contents of pull requests.

How to install the CLI

First, install the package via pip:

pip install gpt-review

GPT API credentials

You will need an OpenAI API key to use this CLI tool. Credentials are checked in the following order of precedence:

  1. Presence of a context file at azure.yaml or wherever CONTEXT_FILE points to. See azure.yaml.template for an example.

  2. AZURE_OPENAI_API and AZURE_OPENAI_API_KEY to connect to an Azure OpenAI API:

    export AZURE_OPENAI_API=<your azure api url>
    export AZURE_OPENAI_API_KEY=<your azure key>
  3. OPENAI_API_KEY for direct use of the OpenAI API

    export OPENAI_API_KEY=<your openai key>
  4. AZURE_KEY_VAULT_URL to use Azure Key Vault. Store the API URL in a secret named azure-open-ai and the API key in a secret named azure-openai-key, then run:

    export AZURE_KEY_VAULT_URL=https://<keyvault_name>.vault.azure.net/
    az login
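The precedence order above can be sketched in Python. This is a minimal illustration of the lookup logic as documented, not the actual gpt-review implementation; `resolve_credentials` is a hypothetical helper name.

```python
import os

def resolve_credentials(env):
    """Sketch of the documented credential precedence (hypothetical helper).

    env is a mapping such as os.environ. Returns a (source, value) pair
    identifying which credential source won.
    """
    # 1. A context file at azure.yaml, or wherever CONTEXT_FILE points.
    context_file = env.get("CONTEXT_FILE", "azure.yaml")
    if os.path.exists(context_file):
        return ("context-file", context_file)
    # 2. Azure OpenAI URL + key from the environment.
    if env.get("AZURE_OPENAI_API") and env.get("AZURE_OPENAI_API_KEY"):
        return ("azure-env", env["AZURE_OPENAI_API"])
    # 3. Direct OpenAI API key.
    if env.get("OPENAI_API_KEY"):
        return ("openai-env", env["OPENAI_API_KEY"])
    # 4. Azure Key Vault (secrets fetched after `az login`).
    if env.get("AZURE_KEY_VAULT_URL"):
        return ("key-vault", env["AZURE_KEY_VAULT_URL"])
    raise RuntimeError("No OpenAI credentials found")
```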

Main Commands

To show help information about available commands and their usage, run:

gpt --help

To display the current version of this CLI tool, run:

gpt --version

Here are the main commands for using this CLI tool:

1. Ask a Question

To submit a question to GPT and receive an answer, use the following format:

gpt ask "What is the capital of France?"

You can customize your request using various options like maximum tokens (--max-tokens), temperature (--temperature), top-p value (--top-p), frequency penalty (--frequency-penalty), presence penalty (--presence-penalty), etc.

Ask a Question about a File

To submit a question to GPT with a file and receive an answer, use the following format:

gpt ask --files WordDocument.docx "Summarize the contents of this document."

2. Review a PR

To review a PR, use the following format:

gpt github review \
    --access-token $GITHUB_ACCESS_TOKEN \
    --pull-request $PULL_REQUEST_NUMBER \
    --repository $REPOSITORY_NAME

3. Generate a git commit message with GPT

To generate a git commit message with GPT after staging your files, use the following format:

git add .

gpt git commit

For more detailed information on each command and its options, run:

gpt COMMAND --help

Replace COMMAND with one of the main commands listed above (e.g., 'ask').

Developer Setup

To install the package in development mode, with additional packages for testing, run the following command (the extras are quoted so that shells such as zsh do not expand the brackets):

pip install -e ".[test]"

gpt-review's People

Contributors

danay1999, dciborow, deepika087, dependabot[bot], mhamilton723, microsoft-github-policy-service[bot], msnidal


gpt-review's Issues

[Bug Report]: workflow action fails with ImportError

Module path

gpt github review

review-gpt CLI version

0.9.5

Describe the bug

The workflow fails with the following error:

ImportError: cannot import name 'BaseCache' from 'langchain' (/home/runner/work/***/***/.env/lib/python3.11/site-packages/langchain/__init__.py)

BaseCache could not be imported from langchain during the Run source .env/bin/activate step of microsoft/[email protected].

To reproduce

Add this workflow as .github/workflows/gpt-review.yml:

name: GPT Review on Pull Request

on:
  pull_request_target:
    branches: [ 'main' ]

jobs:
  add_pr_comment:
    runs-on: ubuntu-latest
    name: OpenAI PR Comment
    steps:
      - id: review
        uses: microsoft/[email protected]
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

Code snippet

No response

Relevant log output

Run source .env/bin/activate
  source .env/bin/activate
  
  gpt github review \
    --access-token $GITHUB_TOKEN \
    --pull-request $PATCH_PR \
    --repository $PATCH_REPO
  shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
  env:
    pythonLocation: /opt/hostedtoolcache/Python/3.11.5/x64
    PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.11.5/x64/lib/pkgconfig
    Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.5/x64
    Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.5/x64
    Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.11.5/x64
    LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.11.5/x64/lib
    ACTION_REF: v0.9.5
    GIT_COMMIT_HASH: ***
    GITHUB_TOKEN: ***
    LINK: ***
    OPENAI_API_KEY: ***
    OPENAI_ORG_KEY: 
    PR_TITLE: ***
    BRANCH: 
    AZURE_OPENAI_API_KEY: 
    AZURE_OPENAI_API: 
    PATCH_PR: 2
    PATCH_REPO: ***
    FULL_SUMMARY: true
    FILE_SUMMARY: false
    TEST_SUMMARY: false
    BUG_SUMMARY: false
    RISK_SUMMARY: false
    RISK_BREAKING: false
/home/runner/work/***/***/.env/lib/python3.11/site-packages/langchain/__init__.py:38: UserWarning: Importing Cohere from langchain root module is no longer supported.
  warnings.warn(
/home/runner/work/***/***/.env/lib/python3.11/site-packages/langchain/__init__.py:38: UserWarning: Importing LLMChain from langchain root module is no longer supported.
  warnings.warn(
/home/runner/work/***/***/.env/lib/python3.11/site-packages/langchain/__init__.py:38: UserWarning: Importing OpenAI from langchain root module is no longer supported.
  warnings.warn(
Traceback (most recent call last):
  File "/home/runner/work/***/***/.env/bin/gpt", line 5, in <module>
    from gpt_review.main import __main__
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/gpt_review/main.py", line 6, in <module>
    from gpt_review._gpt_cli import cli
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/gpt_review/_gpt_cli.py", line 9, in <module>
    from gpt_review._ask import AskCommandGroup
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/gpt_review/_ask.py", line 12, in <module>
    from gpt_review._llama_index import _query_index
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/gpt_review/_llama_index.py", line 10, in <module>
    from llama_index import (
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/__init__.py", line 19, in <module>
    from llama_index.indices.common.struct_store.base import SQLDocumentContextBuilder
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/indices/__init__.py", line 4, in <module>
    from llama_index.indices.keyword_table.base import GPTKeywordTableIndex
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/indices/keyword_table/__init__.py", line 4, in <module>
    from llama_index.indices.keyword_table.base import GPTKeywordTableIndex
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/indices/keyword_table/base.py", line 18, in <module>
    from llama_index.indices.base import BaseGPTIndex
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/indices/base.py", line 6, in <module>
    from llama_index.chat_engine.types import BaseChatEngine, ChatMode
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/chat_engine/__init__.py", line 1, in <module>
    from llama_index.chat_engine.condense_question import CondenseQuestionChatEngine
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/chat_engine/condense_question.py", line 5, in <module>
    from llama_index.chat_engine.utils import to_chat_buffer
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/chat_engine/utils.py", line 6, in <module>
    from llama_index.indices.service_context import ServiceContext
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/indices/service_context.py", line 9, in <module>
    from llama_index.indices.prompt_helper import PromptHelper
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/indices/prompt_helper.py", line 13, in <module>
    from llama_index.llm_predictor.base import BaseLLMPredictor
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/llm_predictor/__init__.py", line 4, in <module>
    from llama_index.llm_predictor.base import LLMPredictor
  File "/home/runner/work/***/***/.env/lib/python3.11/site-packages/llama_index/llm_predictor/base.py", line 11, in <module>
    from langchain import BaseCache, Cohere, LLMChain, OpenAI
ImportError: cannot import name 'BaseCache' from 'langchain' (/home/runner/work/***/***/.env/lib/python3.11/site-packages/langchain/__init__.py)
Error: Process completed with exit code 1.

bug: retry does not always have retry-after header

_retry_with_exponential_backoff(retry, error.headers["Retry-After"])

Sometimes the error will not contain the retry-after header.

openai.error.RateLimitError: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 if the error persists.

https://github.com/microsoft/gpt-review/actions/runs/4994531239/jobs/8945207747#step:7:191
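A hedged sketch of the fix this issue asks for: when the Retry-After header is absent, fall back to capped exponential backoff with jitter instead of indexing into error.headers. The name retry_delay and the default constants are illustrative, not the project's code.

```python
import random

def retry_delay(headers, attempt, base=2.0, cap=60.0):
    """Return a wait time in seconds for the given retry attempt.

    Uses the server-provided Retry-After header when present; otherwise
    falls back to exponential backoff (base * 2**attempt, capped) plus
    up to one second of jitter to avoid thundering-herd retries.
    """
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return float(retry_after)
    delay = min(cap, base * (2 ** attempt))
    return delay + random.uniform(0, 1)
```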

[Bug Report]: ImportError: cannot import name 'BaseCache' from 'langchain'

Module path

id: review

review-gpt CLI version

0.9.5

Describe the bug

Hello, is this repository / action still alive?
I'm trying to run a pipeline with this action, and it fails on import:

ImportError: cannot import name 'BaseCache' from 'langchain' (/runner/_work/integration-service/integration-service/.env/lib/python3.11/site-packages/langchain/__init__.py)
Error: Process completed with exit code 1.

To be quite honest, I'm not sure what else I could do when it fails at such a step. Am I doing something wrong?

To reproduce

Run GitHub Action to review PR

Code snippet

name: "AI Code Review"

on:
  pull_request:
    paths-ignore:
      - "*.md"
jobs:
  review:
    runs-on: k8s
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: microsoft/gpt-review@v0.9.5
        name: "Code Review by GPT"
        id: review
        with:
          # Derivative token for using the GitHub REST API
          GITHUB_TOKEN: GITHUB_TOKEN
          # OpenAI API Key
          AZURE_OPENAI_API: ${{ vars.AZURE_OPENAPI_URL }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAPI_KEY }}

Relevant log output

No response

Link example in Readme

Without having a concrete example on how the resulting review looks, this is not as intimidating as it could be…

Can you maybe just link an example PR that has some reviews from GPT/the bot applied?
One needs to be convinced this is actually good.

I tried finding some PRs in this repo, but I found none with that, so I am not sure whether the bot is applied here.

This should actually be linked in the Readme.

[Bug Report]: The API deployment for this resource does not exist

Module path

gpt ask

review-gpt CLI version

0.9.4

Describe the bug

openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.

To reproduce

How do we set the deployment name (for the Azure OpenAI service)? The docs need improving.

I have set the key and API URL

Code snippet

python3 main.py ask "hello"

Relevant log output

python3 main.py ask "hello"
This command is in preview. It may be changed/removed in a future release.
The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/knack/cli.py", line 233, in invoke
    cmd_result = self.invocation.execute(args)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/knack/invocation.py", line 224, in execute
    cmd_result = parsed_args.func(params)
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/knack/commands.py", line 149, in __call__
    return self.handler(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/knack/commands.py", line 256, in _command_handler
    result = op(client, **command_args) if client else op(**command_args)
                                                       ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gpt_review/_ask.py", line 121, in _ask
    response = _call_gpt(
               ^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/gpt_review/_openai.py", line 91, in _call_gpt
    completion = openai.ChatCompletion.create(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/opt/homebrew/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.

Use Rate Limit time suggestion in retry

When we get rate limited the API tells us how long to wait,

INFO:openai:error_code=429 error_message='Requests to the Creates a completion for the chat message Operation under Azure OpenAI API version 2023-03-15-preview have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 3 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.' error_param=None error_type=None message='OpenAI API error received' stream_error=False
WARNING:root:Call to GPT failed due to rate limit, retry attempt: 1
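The wait hint embedded in the rate-limit message could be parsed with a regular expression, as in the sketch below. `suggested_wait` is an illustrative helper, not the actual implementation, and the fallback default is an assumption.

```python
import re

def suggested_wait(message, default=10.0):
    """Extract the 'Please retry after N seconds' hint from a rate-limit
    error message; fall back to a default delay when the hint is absent."""
    match = re.search(r"retry after (\d+) seconds?", message, re.IGNORECASE)
    return float(match.group(1)) if match else default
```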

feat: ignore specific files from git diff

Can I ignore specific files in the git diff?
Excluding unnecessary files (like package-lock.json) from the prompt sent to GPT would help save tokens.

I'm not familiar enough with the codebase to propose a solution.
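One possible approach, sketched in Python: split the unified diff into per-file chunks and drop any whose path matches an ignore pattern. The function name and patterns are illustrative and not part of gpt-review.

```python
from fnmatch import fnmatch

# Illustrative ignore list; a real feature would read this from config.
IGNORE_PATTERNS = ["package-lock.json", "*.lock", "dist/*"]

def filter_diff(diff_text, patterns=IGNORE_PATTERNS):
    """Remove per-file chunks from a unified diff whose path matches
    one of the ignore patterns."""
    kept = []
    for chunk in diff_text.split("diff --git ")[1:]:
        # Each chunk header starts with: a/<path> b/<path>
        path = chunk.split()[0][2:]  # strip the "a/" prefix
        if not any(fnmatch(path, pattern) for pattern in patterns):
            kept.append("diff --git " + chunk)
    return "".join(kept)
```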

feat: add configuration for open ai

based on the auto-gpt configuration context.

################################################################################
### LLM PROVIDER
################################################################################

### OPENAI
## OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)


## NOTE: https://platform.openai.com/docs/api-reference/completions
# The temperature setting in language models like GPT controls the balance between predictable and random responses. 
# Lower temperature makes the responses more focused and deterministic, while higher temperature makes them more 
# creative and varied. The temperature range typically goes from 0 to 2 in OpenAI's implementation.
##
## TEMPERATURE - Sets temperature in OpenAI (Default: 0)
##
###

## USE_AZURE - Use Azure OpenAI or not (Default: False)
OPENAI_API_KEY=your-openai-api-key
# TEMPERATURE=0
# USE_AZURE=False

### AZURE
# moved to `azure.yaml.template`

################################################################################
### LLM MODELS
################################################################################

## SMART_LLM_MODEL - Smart language model (Default: gpt-4)
## FAST_LLM_MODEL - Fast language model (Default: gpt-3.5-turbo)
# SMART_LLM_MODEL=gpt-4
# FAST_LLM_MODEL=gpt-3.5-turbo

### LLM MODEL SETTINGS
## FAST_TOKEN_LIMIT - Fast token limit for OpenAI (Default: 4000)
## SMART_TOKEN_LIMIT - Smart token limit for OpenAI (Default: 8000)
## When using --gpt3only this needs to be set to 4000.
# FAST_TOKEN_LIMIT=4000
# SMART_TOKEN_LIMIT=8000

### EMBEDDINGS
## EMBEDDING_MODEL       - Model to use for creating embeddings
## EMBEDDING_TOKENIZER   - Tokenizer to use for chunking large inputs
## EMBEDDING_TOKEN_LIMIT - Chunk size limit for large inputs
# EMBEDDING_MODEL=text-embedding-ada-002
# EMBEDDING_TOKENIZER=cl100k_base
# EMBEDDING_TOKEN_LIMIT=8191

[Bug Report]: github action does not run due to incompatible python dependency

Module path

gpt github review

review-gpt CLI version

0.9.4

Describe the bug

The workflow fails during the Run source .env/bin/activate step of microsoft/[email protected] with this error:

<module>
    info_str=example_info.json(indent=4),
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/.../.env/lib/python3.11/site-packages/typing_extensions.py", line 2562, in wrapper
    return __arg(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/.../.env/lib/python3.11/site-packages/pydantic/main.py", line 952, in json
Error:     raise TypeError('`dumps_kwargs` keyword arguments are no longer supported.')
TypeError: `dumps_kwargs` keyword arguments are no longer supported.
Error: Process completed with exit code 1.

To reproduce

Add this workflow to the repository:

name: GPT Review on Pull Request

on:
  pull_request_target

jobs:
  add_pr_comment:
    runs-on: ubuntu-latest
    name: Azure OpenAI PR Comment
    steps:
      - id: review
        uses: microsoft/[email protected]
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          AZURE_OPENAI_API: ${{ secrets.AZURE_OPENAI_API }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}

Code snippet

No response

Relevant log output

Collecting openapi-schema-pydantic<2.0,>=1.2 (from langchain>=0.0.154->llama-index<=0.6.9,>=0.6.0->gpt-review)
  Downloading openapi_schema_pydantic-1.2.4-py3-none-any.whl (90 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 90.0/90.0 kB 32.2 MB/s eta 0:00:00
Collecting pydantic<3,>=1 (from langchain>=0.0.154->llama-index<=0.6.9,>=0.6.0->gpt-review)
  Obtaining dependency information for pydantic<3,>=1 from https://files.pythonhosted.org/packages/82/54/ed9a1005c580b619a4c53c324f472c99c165051b22f8885b09be1882aece/pydantic-2.2.0-py3-none-any.whl.metadata
  Downloading pydantic-2.2.0-py3-none-any.whl.metadata (145 kB)
...

<module>
    info_str=example_info.json(indent=4),
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/.../.env/lib/python3.11/site-packages/typing_extensions.py", line 2562, in wrapper
    return __arg(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/.../.env/lib/python3.11/site-packages/pydantic/main.py", line 952, in json
Error:     raise TypeError('`dumps_kwargs` keyword arguments are no longer supported.')
TypeError: `dumps_kwargs` keyword arguments are no longer supported.
Error: Process completed with exit code 1.

[Bug Report]: ImportError: cannot import name 'BaseCache' from 'langchain'

Module path

gpt github review

review-gpt CLI version

v0.9.5

Describe the bug

The package is broken: it raises an ImportError when the gpt github review command is invoked on the CLI.
ImportError: cannot import name 'BaseCache' from 'langchain' (/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/langchain/__init__.py)

A suggestion was made in issue #208 to install specific versions of some packages:


python -m pip install langchain==0.0.301
python -m pip install pydantic==1.10.13
python -m pip install gpt-review==v0.9.5

However, after this we run into a new error:

Traceback (most recent call last):
  File "/home/emumba/Documents/emumba/gpt-review/.env/bin/gpt", line 5, in <module>
    from gpt_review.main import __main__
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/gpt_review/main.py", line 6, in <module>
    from gpt_review._gpt_cli import cli
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/gpt_review/_gpt_cli.py", line 9, in <module>
    from gpt_review._ask import AskCommandGroup
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/gpt_review/_ask.py", line 13, in <module>
    from gpt_review._openai import _call_gpt
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/gpt_review/_openai.py", line 6, in <module>
    from openai.error import RateLimitError
ModuleNotFoundError: No module named 'openai.error'

I want to run it with Azure OpenAI. It would be great to have a working example of a GitHub Action with Azure OpenAI.

To reproduce

Simply follow the commands provided in the workflow.

            sudo apt-get update
            python3 -m venv .env
            source .env/bin/activate
            python -m pip install --upgrade pip
            python -m pip install gpt-review 
            gpt github review 

Code snippet

name: 'code-review' 
on: [pull_request]
jobs:
    add_pr_comment:
      permissions: write-all
      runs-on: ubuntu-latest
      name: OpenAI PR Comment
      env:
        GIT_COMMIT_HASH: ${{ github.event.pull_request.head.sha }}
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        PR_NUMBER: ${{ github.event.pull_request.number }}
        PR_TITLE: ${{ github.event.pull_request.title }}
        REPOSITORY_NAME: ${{ github.repository }}
        AZURE_OPENAI_API: ${{ secrets.AZURE_OPENAI_API }}
        AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
        LINK: "https://github.com/${{ github.repository }}/pull/${{ github.event.pull_request.number }}"
        FILE_SUMMARY: false
        TEST_SUMMARY: false
        BUG_SUMMARY: false
        RISK_SUMMARY: false
        RISK_BREAKING: false
        CONTEXT_FILE: "../code-reviewer/azure.yaml"
      steps:
        - uses: actions/checkout@v3
          with:
            ref: ${{ github.event.pull_request.head.sha }}
        - name: Set up Python 3.11
          uses: actions/setup-python@v4
          with:
            python-version: 3.11
        - run: |
            sudo apt-get update
            python3 -m venv .env
            source .env/bin/activate
            python -m pip install --upgrade pip
            python -m pip install gpt-review\
        - run: |
            source .env/bin/activate
            gpt github review \
                --access-token $GITHUB_TOKEN \
                --pull-request $PR_NUMBER \
                --repository $REPOSITORY_NAME
            continue-on-error: true
        - run: |
            source .env/bin/activate
            pip install -e .
            gpt github review \
                --access-token $GITHUB_TOKEN \
                --pull-request $PR_NUMBER \

Relevant log output

$ gpt github review
/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/langchain/chat_models/__init__.py:31: LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.chat_models import AzureChatOpenAI`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/langchain/chat_models/__init__.py:31: LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.chat_models import ChatOpenAI`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/langchain/embeddings/__init__.py:29: LangChainDeprecationWarning: Importing embeddings from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.embeddings import OpenAIEmbeddings`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/langchain/llms/__init__.py:548: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.llms import AzureOpenAI`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/langchain/__init__.py:29: UserWarning: Importing Cohere from langchain root module is no longer supported. Please use langchain_community.llms.Cohere instead.
  warnings.warn(
/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/langchain/__init__.py:29: UserWarning: Importing LLMChain from langchain root module is no longer supported. Please use langchain.chains.LLMChain instead.
  warnings.warn(
/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/langchain/__init__.py:29: UserWarning: Importing OpenAI from langchain root module is no longer supported. Please use langchain_community.llms.OpenAI instead.
  warnings.warn(
Traceback (most recent call last):
  File "/home/emumba/Documents/emumba/gpt-review/.env/bin/gpt", line 5, in <module>
    from gpt_review.main import __main__
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/gpt_review/main.py", line 6, in <module>
    from gpt_review._gpt_cli import cli
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/gpt_review/_gpt_cli.py", line 9, in <module>
    from gpt_review._ask import AskCommandGroup
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/gpt_review/_ask.py", line 12, in <module>
    from gpt_review._llama_index import _query_index
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/gpt_review/_llama_index.py", line 10, in <module>
    from llama_index import (
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/__init__.py", line 19, in <module>
    from llama_index.indices.common.struct_store.base import SQLDocumentContextBuilder
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/indices/__init__.py", line 4, in <module>
    from llama_index.indices.keyword_table.base import GPTKeywordTableIndex
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/indices/keyword_table/__init__.py", line 4, in <module>
    from llama_index.indices.keyword_table.base import GPTKeywordTableIndex
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/indices/keyword_table/base.py", line 18, in <module>
    from llama_index.indices.base import BaseGPTIndex
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/indices/base.py", line 6, in <module>
    from llama_index.chat_engine.types import BaseChatEngine, ChatMode
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/chat_engine/__init__.py", line 1, in <module>
    from llama_index.chat_engine.condense_question import CondenseQuestionChatEngine
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/chat_engine/condense_question.py", line 5, in <module>
    from llama_index.chat_engine.utils import to_chat_buffer
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/chat_engine/utils.py", line 6, in <module>
    from llama_index.indices.service_context import ServiceContext
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/indices/service_context.py", line 9, in <module>
    from llama_index.indices.prompt_helper import PromptHelper
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/indices/prompt_helper.py", line 13, in <module>
    from llama_index.llm_predictor.base import BaseLLMPredictor
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/llm_predictor/__init__.py", line 4, in <module>
    from llama_index.llm_predictor.base import LLMPredictor
  File "/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/llama_index/llm_predictor/base.py", line 11, in <module>
    from langchain import BaseCache, Cohere, LLMChain, OpenAI
ImportError: cannot import name 'BaseCache' from 'langchain' (/home/emumba/Documents/emumba/gpt-review/.env/lib/python3.8/site-packages/langchain/__init__.py)

Configure Azure Open AI models via configuration

Right now we assume the engines are named gpt-35-turbo, gpt-4, or gpt-4-32k. We should make this configurable via a file, following the Auto-GPT format.

azure_api_type: azure
azure_api_base: https://synapseml-openai.openai.azure.com/
azure_api_version: 2023-03-15-preview
azure_model_map:
    turbo_llm_model_deployment_id: gpt-35-turbo
    smart_llm_model_deployment_id: gpt-4
    large_llm_model_deployment_id: gpt-4-32k
    embedding_model_deployment_id: text-embedding-ada-002
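A sketch of how such a map could be consumed once loaded: resolve a model name to the user's deployment id. The map keys come from the example above; the function name and the custom deployment names are placeholders.

```python
# Map each model name to the corresponding key in azure_model_map
# (keys taken from the Auto-GPT-style config example above).
_MODEL_KEYS = {
    "gpt-35-turbo": "turbo_llm_model_deployment_id",
    "gpt-4": "smart_llm_model_deployment_id",
    "gpt-4-32k": "large_llm_model_deployment_id",
    "text-embedding-ada-002": "embedding_model_deployment_id",
}

def deployment_for(model, config):
    """Look up the Azure deployment id for a model name."""
    return config["azure_model_map"][_MODEL_KEYS[model]]

# Example: a config where the user's deployments have custom names.
example_config = {
    "azure_model_map": {
        "turbo_llm_model_deployment_id": "my-turbo-deployment",
        "smart_llm_model_deployment_id": "my-gpt4-deployment",
        "large_llm_model_deployment_id": "my-gpt4-32k-deployment",
        "embedding_model_deployment_id": "my-embedding-deployment",
    }
}
```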

[Bug Report]: No comment on PR after workflow run succeeds

Module path

gpt review

review-gpt CLI version

0.9.5

Describe the bug

Since a dependency issue (#208) was breaking the workflow, I created a copy of the action.yml file in my repository, edited it to add the install commands proposed in that issue, and used this as my workflow.

The workflow runs successfully, but no changes are seen on the PR. No comments or reviews are created.

To reproduce

  1. Create a workflow in the existing repo
  2. Add the following contents shown in code snippet
name: GPT Review on Pull Request

on:
  pull_request_target:
    branches: [ 'main' ]

jobs:
  add_pr_comment:
    runs-on: ubuntu-latest
    environment: <environment name>
    name: Azure OpenAI PR Comment
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          fetch-depth: 2

      - name: Set up Python 3.11
        uses: actions/setup-python@v4
        with:
          python-version: 3.11

      - name: Install gpt-review with fixed dependencies
        shell: bash
        run: |
          sudo apt-get update
          python3 -m venv .env
          source .env/bin/activate
          python -m pip install --upgrade pip
          python -m pip install langchain==0.0.301
          python -m pip install pydantic==1.10.13
          python -m pip install gpt-review==v0.9.5

      - name: Populate azure.yaml from vars
        env:
          AZURE_YAML: ${{ vars.AZURE_YAML }}
        shell: bash
        run: |
          echo "$AZURE_YAML" > azure.yaml

      - name: Review PR and make comment
        shell: bash
        run: |
          source .env/bin/activate

          gpt github review \
            --access-token $GITHUB_TOKEN \
            --pull-request $PATCH_PR \
            --repository $PATCH_REPO
        env:
          ACTION_REF: ${{ github.action_ref || env.BRANCH }}
          GIT_COMMIT_HASH: ${{ github.event.pull_request.head.sha }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          LINK: "https://github.com/${{ github.repository }}/pull/${{ github.event.pull_request.number }}"
          PR_TITLE: ${{ github.event.pull_request.title }}
          BRANCH: ${{ env.BRANCH }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY || '' }}
          AZURE_OPENAI_API: ${{ secrets.AZURE_OPENAI_API || '' }}
          PATCH_PR: ${{ github.event.pull_request.number }}
          PATCH_REPO: ${{ github.repository }}
          FULL_SUMMARY: true
          FILE_SUMMARY: false
          TEST_SUMMARY: false
          BUG_SUMMARY: false
          RISK_SUMMARY: false
          RISK_BREAKING: false
  3. Create a PR on the repo
  4. The workflow triggers and succeeds (screenshot omitted)
  5. No comment appears on the original PR
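One way to check whether gpt-review posted anything at all is to query the PR's comments through the GitHub REST API. A diagnostic sketch using only the standard library; `comments_url` and `fetch_comments` are hypothetical helpers, not part of gpt-review:

```python
import json
import urllib.request

def comments_url(repo: str, pr: int) -> str:
    """Build the REST endpoint that lists comments on a PR (issue comments)."""
    return f"https://api.github.com/repos/{repo}/issues/{pr}/comments"

def fetch_comments(repo: str, pr: int, token: str) -> list:
    """Fetch the PR's comments; an empty list means nothing was posted."""
    req = urllib.request.Request(
        comments_url(repo, pr),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_comments("owner/repo", 42, token)` returning `[]` would confirm that the review step completed without ever creating a comment.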

Code snippet

No response

Relevant log output

No response

[Bug Report]: github action can't config azure deployment name

Module path

gpt github review

review-gpt CLI version

0.9.4

Describe the bug

add_pr_comment:
  name: OpenAI PR Comment
  steps:
    - id: review
      uses: microsoft/[email protected]
      with:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        AZURE_OPENAI_API: ${{ secrets.AZURE_OPENAI_API }}
        AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}

It uses the default model names from src/gpt_review/constants.py

WARNING: This command is in preview. It may be changed/removed in a future release.
WARNING: Command group 'github' is in preview. It may be changed/removed in a future release.
ERROR: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
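A possible workaround, sketched under the assumption that gpt-review reads a context file at azure.yaml (per its README) and that these keys match your Azure resource; every deployment name below is a placeholder you would replace with your own:

```yaml
azure_api_type: azure
azure_api_base: https://<your-resource>.openai.azure.com/
azure_api_version: 2023-03-15-preview
azure_model_map:
    turbo_llm_model_deployment_id: <your-gpt-35-turbo-deployment>
    smart_llm_model_deployment_id: <your-gpt-4-deployment>
    large_llm_model_deployment_id: <your-gpt-4-32k-deployment>
    embedding_model_deployment_id: <your-embedding-deployment>
```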

To reproduce

add_pr_comment:
  name: OpenAI PR Comment
  steps:
    - id: review
      uses: microsoft/[email protected]
      with:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        AZURE_OPENAI_API: ${{ secrets.AZURE_OPENAI_API }}
        AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}

Run github action

Code snippet

No response

Relevant log output

No response

[Bug Report]: gpt-35-turbo is not a valid OpenAI API model

Module path

gpt ask "What is the capital of France?" --debug --fast

review-gpt CLI version

0.9.4

Describe the bug

When using the OpenAI API directly (not via Azure), the model name passed to the API is not valid.

{"model": "gpt-35-turbo", "messages": [{"role": "user", "content": "What is the capital of France?"}], "max_tokens": 100, "temperature": 0.7, "top_p": 0.5, "frequency_penalty": 0.5, "presence_penalty": 0}

The model name should be gpt-3.5-turbo as per the docs.
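The underlying mismatch is that Azure deployment names disallow dots (hence gpt-35-turbo), while the public OpenAI API expects gpt-3.5-turbo. A fix could translate the name depending on the target API; `resolve_model` is a hypothetical helper illustrating the idea, not gpt-review's actual code:

```python
# Map Azure-style deployment names to the model names the public OpenAI
# API expects. Names without a dot-related difference map to themselves.
AZURE_TO_OPENAI = {
    "gpt-35-turbo": "gpt-3.5-turbo",
    "gpt-4": "gpt-4",
    "gpt-4-32k": "gpt-4-32k",
}

def resolve_model(name: str, use_azure: bool) -> str:
    """Return the identifier appropriate for the target API."""
    if use_azure:
        return name  # Azure uses the deployment name as-is
    return AZURE_TO_OPENAI.get(name, name)

print(resolve_model("gpt-35-turbo", use_azure=False))  # gpt-3.5-turbo
```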

To reproduce

export OPENAI_API_KEY=****
unset AZURE_OPENAI_API
unset AZURE_OPENAI_API_KEY
gpt ask "What is the capital of France?" --debug --fast

Code snippet

No response

Relevant log output

cli.knack.cli: Command arguments: ['ask', 'What is the capital of France?', '--debug', '--fast']
cli.knack.cli: __init__ debug log:
Enable color in terminal.
cli.knack.cli: Event: Cli.PreExecute []
cli.knack.cli: Event: CommandParser.OnGlobalArgumentsCreate [<function CLILogging.on_global_arguments at 0x102ee2170>, <function OutputProducer.on_global_arguments at 0x103068820>, <function CLIQuery.on_global_arguments at 0x103085b40>]
cli.knack.cli: Event: CommandInvoker.OnPreCommandTableCreate []
cli.knack.cli: Event: CommandLoader.OnLoadArguments []
cli.knack.cli: Event: CommandInvoker.OnPostCommandTableCreate []
cli.knack.cli: Event: CommandInvoker.OnCommandTableLoaded []
cli.knack.cli: Event: CommandInvoker.OnPreParseArgs []
cli.knack.cli: Event: CommandInvoker.OnPostParseArgs [<function OutputProducer.handle_output_argument at 0x1030688b0>, <function CLIQuery.handle_query_parameter at 0x103085bd0>]
This command is in preview. It may be changed/removed in a future release.
root: Prompt sent to GPT: What is the capital of France?

root: Model Selected based on prompt size: gpt-35-turbo
root: Using Open AI.
openai: message='Request to OpenAI API' method=post path=https://api.openai.com/v1/chat/completions
openai: api_version=None data='{"model": "gpt-35-turbo", "messages": [{"role": "user", "content": "What is the capital of France?"}], "max_tokens": 100, "temperature": 0.7, "top_p": 0.5, "frequency_penalty": 0.5, "presence_penalty": 0}' message='Post details'
urllib3.util.retry: Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
urllib3.connectionpool: Starting new HTTPS connection (1): api.openai.com:443
urllib3.connectionpool: https://api.openai.com:443 "POST /v1/chat/completions HTTP/1.1" 404 None
openai: message='OpenAI API response' path=https://api.openai.com/v1/chat/completions processing_ms=None request_id=8dcbe0a951e6de5e5e037c01cab50d3e response_code=404
openai: error_code=None error_message='The model `gpt-35-turbo` does not exist' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
cli.knack.cli: The model `gpt-35-turbo` does not exist
Traceback (most recent call last):
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/knack/cli.py", line 233, in invoke
    cmd_result = self.invocation.execute(args)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/knack/invocation.py", line 224, in execute
    cmd_result = parsed_args.func(params)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/knack/commands.py", line 146, in __call__
    return self.handler(*args, **kwargs)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/knack/commands.py", line 253, in _command_handler
    result = op(client, **command_args) if client else op(**command_args)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/gpt_review/_ask.py", line 121, in _ask
    response = _call_gpt(
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/gpt_review/_openai.py", line 102, in _call_gpt
    completion = openai.ChatCompletion.create(
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/nix/store/zhlpbm7i908l99qs3q6vigzcb7pw0wmk-gpt-review-venv/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model `gpt-35-turbo` does not exist
cli.knack.cli: Event: Cli.PostExecute []
