
joshpxyne / gpt-migrate

Easily migrate your codebase from one framework or language to another.

Home Page: https://gpt-migrate.com

License: MIT License

Python 100.00%

gpt-migrate's Introduction

Incepto ne desistam. ("May I not desist from what I have begun.")

gpt-migrate's People

Contributors

ishaan-jaff, joshpxyne, mkuuwaujinga, sandys, sweep-ai[bot]



gpt-migrate's Issues

Demos

GPT-Migrate.mp4
GPT-Migrate-debugging.mp4

`json.decoder.JSONDecodeError`

✅  Parsing function signatures for <file>...
Traceback (most recent call last):

  File "<string>", line 1, in <module>

  File ".../src/gpt_migrate/main.py", line 195, in main
    migrate(sourceentry, globals)

  File ".../src/gpt_migrate/main.py", line 186, in migrate
    migrate(dependency, globals, parent_file=sourcefile)

  File ".../src/gpt_migrate/main.py", line 187, in migrate
    file_name = write_migration(
                ^^^^^^^^^^^^^^^^

  File ".../src/gpt_migrate/steps/migrate.py", line 161, in write_migration
    sigs = get_function_signatures(deps_per_file, globals) if deps_per_file else []
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File ".../src/gpt_migrate/steps/migrate.py", line 62, in get_function_signatures
    sigs = json.loads(
           ^^^^^^^^^^^

  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None

json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
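This error means `json.loads` in `get_function_signatures` received model output that is not bare JSON (empty text, prose, or a code fence around the JSON). A minimal mitigation sketch — this is not the project's actual code, and `parse_llm_json` is a hypothetical helper — is to fall back to extracting the first JSON-looking span before giving up:

```python
import json
import re

def parse_llm_json(raw: str):
    """Parse JSON from raw LLM output, tolerating surrounding prose
    or code fences (hypothetical helper, not gpt-migrate's code)."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Fall back to the first {...} or [...] span in the text.
        match = re.search(r"(\{.*\}|\[.*\])", raw, re.DOTALL)
        if match:
            return json.loads(match.group(1))
        raise

# Example: the model wrapped the JSON in explanatory prose.
raw = 'The signatures are:\n["def foo(x)"]\nLet me know if you need more.'
print(parse_llm_json(raw))  # ['def foo(x)']
```

A stricter alternative is to retry the LLM call with an instruction to emit JSON only; the fallback above just salvages the common "prose around JSON" case.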

Support for OpenRouter

It would be great to be able to use other GPT-like API endpoints such as OpenRouter. For example, this would allow anyone to use gpt-4-32k even without access through their own OpenAI account, since OpenAI is not granting access to gpt-4-32k for the time being, and this model is practically a prerequisite for using gpt-migrate.

Rate limit error on Python 3 to PHP

openai.error.RateLimitError: Rate limit reached for default-gpt-3.5-turbo-16k in organization org-xxxxxxxxxxxxxxx on tokens per min. Limit: 180000 / min. Current: 178656 / min. Contact us through our help center at help.openai.com if you continue to have issues.

I don't have a GPT-4 trial. A lot of lines keep appearing for the same Python file. Is it stuck in a loop? I'm trying to port https://github.com/apriha/lineage to PHP 8.2 from Python 3.

Sorry, I should have described this in my previous issue, which was closed.
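A generic way to survive per-minute token limits is to retry with exponential backoff. The sketch below is a hypothetical wrapper (`with_backoff` is not part of gpt-migrate); in practice you would catch the API client's specific rate-limit exception rather than bare `Exception`:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=2.0):
    """Retry a zero-argument `call` with exponential backoff plus jitter.
    Hypothetical helper: narrow the except clause to the client's
    RateLimitError in real code."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # 2s, 4s, 8s, ... plus up to 1s of jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))
```

Backoff helps when you are briefly over the per-minute budget, as in the log above (178,656 of 180,000 tokens used); it cannot help if a single request already exceeds the model's context window.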

Langchain / OpenAI Dependencies

I'm getting:

/home/teamcoltra/.local/lib/python3.10/site-packages/langchain/chat_models/__init__.py:31: LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.chat_models import ChatOpenAI`.

To install langchain-community run `pip install -U langchain-community`.
  warnings.warn(
Traceback (most recent call last):
  File "/home/teamcoltra/gpt-migrate-main/gpt_migrate/main.py", line 7, in <module>
    from ai import AI
  File "/home/teamcoltra/gpt-migrate-main/gpt_migrate/ai.py", line 6, in <module>
    from litellm import completion
  File "/home/teamcoltra/.local/lib/python3.10/site-packages/litellm/__init__.py", line 28, in <module>
    from .timeout import timeout
  File "/home/teamcoltra/.local/lib/python3.10/site-packages/litellm/timeout.py", line 11, in <module>
    from openai.error import Timeout
ModuleNotFoundError: No module named 'openai.error'

I don't know whether you want to update the code to match the modern LangChain version or pin a version number in requirements.
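If pinning is the chosen route, one illustrative shape (the exact compatible versions are an assumption, except `litellm==0.1.213`, which this repo's requirements.txt already pins): the old litellm release imports `openai.error`, a module that only exists in the openai 0.x series, so pinning openai below 1.0 avoids the `ModuleNotFoundError` above.

```text
# requirements.txt (illustrative pins)
litellm==0.1.213   # version already pinned in this repo's requirements
openai<1.0         # openai.error was removed in the openai 1.x rewrite
```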

LiteLLM Improvements

Hi @joshpxyne,

Thanks for merging our PR for LiteLLM.

I'm one of the co-maintainers of the package. How can we make it better for you?

Target repository doesn't make use of internal functions

Saw this using the benchmark flask-nodejs:

app.js is not importing db.js but instead uses an external library node-json-db: const db = new JsonDB(new Config("storage/items", true, false, '/'));.

What about adding the signatures of the internal functions that the source file depends on to the migration prompt for the target file? This should probably fix it :) Related to #5.
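For Python sources, collecting those internal signatures could look like the sketch below, using the standard-library `ast` module. This is only an illustration of the idea (gpt-migrate's requirements include `tree_sitter`, which would be the multi-language route); `function_signatures` is a hypothetical helper:

```python
import ast

def function_signatures(source: str) -> list[str]:
    """Collect function signatures from Python source text.
    Sketch only: positional args, no defaults or annotations."""
    sigs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"def {node.name}({args})")
    return sigs

src = "def get_item(item_id):\n    return item_id\n"
print(function_signatures(src))  # ['def get_item(item_id)']
```

Feeding these strings into the migration prompt would tell the model which in-project functions exist, making it less likely to reach for an external library like node-json-db.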

OSError: [Errno 63] File name too long

vue.js 1 to 3

$ python main.py --sourcelang "vue.js 1"  --targetlang "vue.js 3" --sourcedir ../benchmarks/vue1-vue3 --sourceentry "index.html" --model "gpt-3.5-turbo-16k"
◐ Reading vue.js 1 project from directory '/Users/user/gpt-migrate/benchmarks/vue1-vue3', with entrypoint 'index.html'.
◑ Outputting vue.js 3 project to directory '/Users/user/gpt-migrate/benchmarks/flask-nodejs/target'.
Source directory structure:

        ├── favicon-16x16.png
        ├── safari-pinned-tab.svg
        ├── favicon.ico
        ├── index.html
        ├── android-chrome-192x192.png
        ├── apple-touch-icon.png
        ├── renovate.json
        ├── css/
            │   └── all.css
        ├── js/
            │   └── all.js
        ├── 404.html
        ├── README.md
        ├── img/
            │   ├── walnut-logo.svg
            │   ├── walnut-logo-white-background.png
            │   ├── spin.svg
            │   ├── share.svg
            │   └── walnut-logo-white-background view.svg
        ├── channels.js
        ├── android-chrome-512x512.png
        ├── site.webmanifest
        ├── package-lock.json
        ├── package.json
        ├── scripts/
            │   └── check-channels.js
        ├── mstile-150x150.png
        ├── browserconfig.xml
        └── favicon-32x32.png

✅  Creating your environment...
Created Docker environment for vue.js 3 project in directory '/Users/user/gpt-migrate/benchmarks/flask-nodejs/target'.
✅  Identifying external dependencies for index.html...
✅  Identifying internal dependencies for index.html...
✅  Creating migration file for index.html...
Created file_name.ext at /Users/user/gpt-migrate/benchmarks/flask-nodejs/target
Copied renovate.json from /Users/user/gpt-migrate/benchmarks/vue1-vue3 to /Users/gianpaj/tmp/gpt-migrate/benchmarks/flask-nodejs/target
✅  Creating dependencies file required for the Docker environment...
Traceback (most recent call last):

  File "/Users/user//gpt-migrate/gpt_migrate/main.py", line 100, in main
    add_env_files(globals)

  File "/Users/user/gpt-migrate/gpt_migrate/steps/migrate.py", line 91, in add_env_files
    external_deps_name, _, external_deps_content = llm_write_file(prompt,
                                                   ^^^^^^^^^^^^^^^^^^^^^^

  File "/Users/user/gpt-migrate/gpt_migrate/utils.py", line 61, in llm_write_file
    with open(os.path.join(globals.targetdir, file_name), 'w') as file:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

OSError: [Errno 63] File name too long: "/Users/user/gpt-migrate/benchmarks/flask-nodejs/target/PREFERENCE LEVEL 1\n\nHere are the guidelines for this prompt:\n\n1. Follow the output instructions precisely and do not make any assumptions. Your output will not be read by a human; it will be directly input into a computer for literal processing. Adding anything else or deviating from the instructions w
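The traceback shows the model returned the prompt text itself where a file name was expected, and `llm_write_file` passed it straight to `open()`. A defensive sketch (hypothetical guard, not the project's actual code) is to validate the proposed name before writing:

```python
def safe_file_name(name: str) -> str:
    """Reject an LLM-proposed file name that cannot be a real path
    component: empty, multi-line, or longer than typical filesystem
    limits. Hypothetical guard around the open() call in utils.py."""
    name = name.strip()
    if not name or "\n" in name or len(name) > 255:
        raise ValueError(
            f"LLM did not return a usable file name: {name[:60]!r}"
        )
    return name

print(safe_file_name("app.js"))  # app.js
```

Failing fast with a clear message (and perhaps retrying the LLM call) is friendlier than letting the OS raise `OSError: [Errno 63] File name too long`.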

Decomposing functions/endpoints

Currently the source function ingests the entire file at once, which will not work for larger API files. We'll need to go endpoint by endpoint and isolate dependency functions, either in-file or in other files. With these, we should keep track of functions already written (file:function_name) in the LLM memory in order to avoid redundancy.

[FEATURE REQUEST]: Using this for more than migration.

I'm unsure about the exact title of this project, but I envision it going beyond a simple migration to a different language. I'm wondering if it could be utilized to make code changes as well.

Let's say you have a massive project and you want to replace all the int values with long values. Doing this manually would be challenging and time-consuming, especially considering other related conversions like IntegerToFloat. However, with GPT, we could automate these tedious tasks effortlessly.

I hope I explained my idea clearly.

Small todos

  • Logic for model input size limiting based on window size
  • Add a feature flag for style (e.g., "I want the output to use spaces instead of tabs")
  • Let the LLM select a "search google" option in the debug step
  • Add natural-language files in the source for the LLM to integrate into proper code, compatible with the rest of the project
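The first todo, input size limiting, could start as simply as a greedy budget check. This is a sketch under a rough assumption (about 4 characters per token, not a real tokenizer); `fit_to_window` is a hypothetical helper:

```python
def fit_to_window(chunks: list[str], budget_tokens: int,
                  tokens_per_char: float = 0.25) -> list[str]:
    """Greedily keep source chunks until an estimated token budget
    is spent. The chars-to-tokens ratio is a crude estimate; a real
    implementation would count tokens with the model's tokenizer."""
    kept, used = [], 0
    for chunk in chunks:
        cost = int(len(chunk) * tokens_per_char) + 1
        if used + cost > budget_tokens:
            break
        kept.append(chunk)
        used += cost
    return kept
```

Anything cut off by the budget would need a second request, which ties this todo to the "decomposing functions/endpoints" issue above.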

Add milestones, roadmap or a TODO list in readme.md

As it stands right now, this is a great idea, but progress needs some parameters to allow more and better contributions:

  • A roadmap with the desired features, maybe classified by difficulty or urgency
  • Some kind of benchmark or success report for different projects and language conversion settings:
    • This is more common on emulator repos, where the compatibility can be measured by testing the emulated software, but if you pick some popular repos or projects that have stable releases on source and target languages, you can compare what works and what doesn't.
  • Some decisions on how certain features will be added, for example: what kind of preprocessors for code should be allowed or ignored? should specific prompt hints be added based on certain functions or libraries to make the AI answer less prone to errors or hallucination?

These additions would funnel contributor efforts into specific areas and tasks, rather than leaving each contributor to pursue their own unique needs, which in practice broadly overlap and would be better served by a more structured feature plan.

Invalid response object from API: 'Request Entity Too Large' (HTTP response code was 413) with openrouter gpt-4-32k

I am encountering an error when using the openrouter gpt-4-32k API. The error message I'm receiving is as follows:

Invalid response object from API: 'Request Entity Too Large\n\nFUNCTION_PAYLOAD_TOO_LARGE\n' (HTTP response code was 413)

The HTTP response code associated with this error is 413. It appears the issue is related to the size of the prompt being sent to the OpenRouter API: it is too large.

The prompt created by gpt-migrate is too large for OpenRouter's API. Has anyone run into this issue and fixed it?

Add distributed inference

Hi,

I've been looking to write something like this project myself. I've been researching how I would do source conversion, but distributed. It seems like you've done most of the heavy lifting for the general conversion, so it seems like it would be a good idea for me to contribute to this project instead of starting from scratch.

I saw that this project had some integration with litellm, so the distributed part can be done using that. I've also written some code to split source files up intelligently which could be used with this.

Is this project still active? If so, I would love to discuss more about contributing.

Benchmarks fail for the gpt-3.5-turbo model family

Hi Josh,

thanks for this awesome project 🎉.

Some observations from running python main.py --sourcedir ../benchmarks/flask-nodejs/source --targetlang nodejs --model gpt-3.5-turbo-16k --temperature 0:

  • The Dockerfile is not generated correctly: it contains just the single line CODE. This is probably due to the instructions in p4_output_formats/single_file.
  • Internal dependency resolution for benchmarks/flask-nodejs/source/db.py is stuck in an infinite loop because the output of the LLM is [db.py] again.

Perhaps this can be fixed with a lot of prompt engineering, but as of now I'd probably communicate that gpt-3.5-turbo is not supported. Wdyt?
Also, you could easily check that file x isn't in its own list of internal dependencies to avoid getting stuck in an infinite loop. Happy to add a PR for this.
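The proposed check is a few lines; the sketch below (`filter_self` is a hypothetical name, not the project's code) drops the file itself and any duplicates from the reported dependency list before recursing:

```python
def filter_self(sourcefile: str, deps: list[str]) -> list[str]:
    """Drop the source file itself (and duplicates) from its reported
    internal dependencies, so an LLM answering '[db.py]' for db.py
    cannot cause infinite recursion."""
    seen = set()
    kept = []
    for dep in deps:
        if dep != sourcefile and dep not in seen:
            seen.add(dep)
            kept.append(dep)
    return kept

print(filter_self("db.py", ["db.py", "models.py"]))  # ['models.py']
```

A fuller fix would also track every file already visited across the whole recursion, which guards against longer cycles (a imports b imports a), not just self-references.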

Lack of Internal Dependency Files from python source project

Hello,

During the project setup, I noticed that a source project does not have any internal dependency files.

I propose the following solution to generate a list of dependencies for this project:

Solution Steps:

  1. Use Pipdeptree to generate a list of dependencies.

Install pipdeptree:

$ pip install pipdeptree
  2. As an example, let's say we want to generate a dependency tree for a package named 'requests'. Execute the following command:
$ pipdeptree -p requests

This should provide an output similar to the following:

requests==2.23.0
  - certifi [required: >=2017.4.17, installed: 2020.4.5.1]
  - chardet [required: >=3.0.2,<4, installed: 3.0.4]
  - idna [required: >=2.5,<3, installed: 2.9]
  - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.9]
  3. Next, copy these dependencies and their version information into a requirements.txt file, like so:
certifi>=2017.4.17
chardet>=3.0.2,<4
idna>=2.5,<3
urllib3>=1.21.1,<1.26,!=1.25.1,!=1.25.0
  4. Lastly, you can download the dependencies to the current directory without installing them by using the following command:
(current directory) $ pip download -r requirements.txt

This method will allow us to make project setup smoother for Python source projects. Thanks.

original link: https://www.activestate.com/resources/quick-reads/how-to-download-python-dependencies/

OPENAI_API_KEY for GPT-4

Hi - this looks like such a great project.

You say: "It's also recommended that you use at least GPT-4, preferably GPT-4-32k."

Set your OpenAI API key and install the python requirements:

export OPENAI_API_KEY="

My key only permits GPT-3.5 at present; I am on the waiting list for GPT-4. I presume I won't be able to use your application in the meantime?

ValueError: No valid completion model args passed in

I'm trying to migrate a project from Fortran95 (yes, I know it's old, that's why it needs to be migrated... 😁) to Dart, but I get the following error:

Traceback (most recent call last):

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/main.py", line 127, in <module>
    app()

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/main.py", line 87, in main
    create_environment(globals)

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/steps/setup.py", line 15, in create_environment
    llm_write_file(prompt,

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/utils.py", line 52, in llm_write_file
    file_name,language,file_content = globals.ai.write_code(prompt)[0]
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/Development/gpt-migrate/gpt_migrate/ai.py", line 23, in write_code
    response = completion(
               ^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/utils.py", line 98, in wrapper
    raise e

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/utils.py", line 89, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/timeout.py", line 44, in wrapper
    result = future.result(timeout=local_timeout_duration)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^

  File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/timeout.py", line 35, in async_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/main.py", line 248, in completion
    raise exception_type(model=model, original_exception=e)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/utils.py", line 273, in exception_type
    raise original_exception # base case - return the original exception
    ^^^^^^^^^^^^^^^^^^^^^^^^

  File "/var/home/agardh/.local/lib/python3.11/site-packages/litellm/main.py", line 242, in completion
    raise ValueError(f"No valid completion model args passed in - {args}")

It also prints all the parameters passed to the model, if you want I can give you those too.

The command I run is:

python3 main.py --sourcedir "../my/source" --sourceentry md_3PG.f95 --sourcelang fortran95 --targetlang dart --targetdir "../my/target"

I couldn't find anyone else had encountered the same problem, so I thought I might open an Issue.

I have tried a few different models, but that doesn't seem to make a difference; all of them throw the same error. Except when using the model "gpt-3.5-turbo": then it instead complains that the model's maximum context length has been reached.

Use my own OpenAI keys instead of Open Router

I have a paid account with the OpenAI APIs, which I would like to use. Due to the token limit, I cannot use OpenRouter, and it doesn't make sense for me to make another purchase.

Is it possible to use the OpenAI APIs directly instead of OpenRouter?

Openai error "ModuleNotFoundError: No module named 'openai.error'"

When running python main.py --targetlang nodejs I got this error:

Traceback (most recent call last):
  File "/Users/mac/Desktop/dev/abel/gpt-migrate/gpt_migrate/main.py", line 7, in <module>
    from ai import AI
  File "/Users/mac/Desktop/dev/abel/gpt-migrate/gpt_migrate/ai.py", line 6, in <module>
    from litellm import completion
  File "/Users/mac/Desktop/dev/abel/gpt-migrate/gpt_migrate/env/lib/python3.11/site-packages/litellm/__init__.py", line 28, in <module>
    from .timeout import timeout
  File "/Users/mac/Desktop/dev/abel/gpt-migrate/gpt_migrate/env/lib/python3.11/site-packages/litellm/timeout.py", line 11, in <module>
    from openai.error import Timeout
ModuleNotFoundError: No module named 'openai.error'

Support for libraries

Currently, gpt-migrate only works with code repositories that run as applications with a main file as entry point. However, many code bases are libraries. Libraries are different in that they:

  • Don't have a dedicated entry point
  • Often come with a test suite

The migration problem is equally important, so I think it'd be great if gpt-migrate could support this. For that, we'd have to allow for:

  • Several entry points (or no entry point and a separate logic to determine all sources of the dependency DAG)
  • The possibility to specify your own test suite (and migrate it to the new target repo if possible)
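The "no entry point" case could bootstrap from the dependency graph itself: any file that no other file imports is a root of the dependency DAG and a candidate entry point. A sketch of that idea (`dag_roots` is a hypothetical helper, not gpt-migrate code):

```python
def dag_roots(deps: dict[str, set[str]]) -> set[str]:
    """Given a mapping of file -> internal files it imports, return
    the files no other file imports: the roots of the dependency DAG,
    i.e. candidate entry points for a library with no main file."""
    imported = set().union(*deps.values()) if deps else set()
    return set(deps) - imported

deps = {"api.py": {"core.py"}, "core.py": set(), "cli.py": {"core.py"}}
print(sorted(dag_roots(deps)))  # ['api.py', 'cli.py']
```

Migrating from each root and deduplicating shared dependencies would cover the whole library; the accompanying test suite could then be migrated last, against the already-translated modules.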

TypeError: can only concatenate str (not "NoneType") to str

I'm getting this on pretty much any command, even when going from Python to Python to have it debug, test, etc., or when converting to different libraries with --guidelines.

Here's the full console output (note this is an example with an obscure language, but getting the same on py to py):

 ➜ /workspaces/ODB/gpt-migrate-main/gpt_migrate (main) $ python main.py --sourcelang EasyLanguage --sourcedir /workspaces/ODB/Python/ELtoPy/ELSrc --sourceentry OD_23.02-Strategy.el --guidelines "You specialize in trading algorithms, you are an expert in converting TradeStation EasyLanguage code to Python using the Lumibot library" --targetdir /workspaces/ODB/Python/ELtoPy/PyTgt --targetlang python 

◐ Reading EasyLanguage project from directory '/workspaces/ODB/Python/ELtoPy/ELSrc', with entrypoint 'OD_23.02-Strategy.el'.
◑ Outputting python project to directory '/workspaces/ODB/Python/ELtoPy/PyTgt'.
Source directory structure: 

        ├── OD Strat Mashup.el
        ├── ELtoPySrc/
            │   └── OD_23.02-Strategy_GH-CoPilot1.py
        ├── OD_23.02-Strategy.el
        ├── OD_23.02-Strategy copy.el
        └── OD_23.02-Strategy_GH-CoPilot1.py

✅  Creating your environment...
Created Docker environment for python project in directory '/workspaces/ODB/Python/ELtoPy/PyTgt'.
Traceback (most recent call last):

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/main.py", line 127, in <module>
    app()

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/main.py", line 100, in main
    migrate(sourceentry, globals)

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/main.py", line 94, in migrate
    internal_deps_list, external_deps_list = get_dependencies(sourcefile=sourcefile,globals=globals)

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/steps/migrate.py", line 58, in get_dependencies
    external_dependencies = llm_run(prompt,

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/utils.py", line 39, in llm_run
    output = globals.ai.run(prompt)

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/ai.py", line 49, in run
    chat += msg

TypeError: can only concatenate str (not "NoneType") to str

Another example:

➜ /workspaces/ODB/gpt-migrate-main/gpt_migrate (main) $ python main.py --sourcelang python --sourcedir /workspaces/ODB/Python/ELtoPy/ELSrc/ELtoPySrc --sourceentry OD_23.02-Strategy_GH-CoPilot1.py --targetdir /workspaces/ODB/Python/ELtoPy/PyTgt --targetlang python 
◐ Reading python project from directory '/workspaces/ODB/Python/ELtoPy/ELSrc/ELtoPySrc', with entrypoint 'OD_23.02-Strategy_GH-CoPilot1.py'.
◑ Outputting python project to directory '/workspaces/ODB/Python/ELtoPy/PyTgt'.
Source directory structure: 

        └── OD_23.02-Strategy_GH-CoPilot1.py

✅  Creating your environment...
Created Docker environment for python project in directory '/workspaces/ODB/Python/ELtoPy/PyTgt'.
Traceback (most recent call last):

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/main.py", line 127, in <module>
    app()

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/main.py", line 100, in main
    migrate(sourceentry, globals)

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/main.py", line 94, in migrate
    internal_deps_list, external_deps_list = get_dependencies(sourcefile=sourcefile,globals=globals)

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/steps/migrate.py", line 58, in get_dependencies
    external_dependencies = llm_run(prompt,

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/utils.py", line 39, in llm_run
    output = globals.ai.run(prompt)

  File "/workspaces/ODB/gpt-migrate-main/gpt_migrate/ai.py", line 49, in run
    chat += msg

TypeError: can only concatenate str (not "NoneType") to str
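In both tracebacks the crash is `chat += msg` in ai.py with `msg` being `None`, i.e. an API response whose content field was empty. A guard sketch (hypothetical helper around that pattern, not the project's actual code):

```python
def append_message(chat: str, msg) -> str:
    """Guard the `chat += msg` concatenation: a response with no
    content arrives as None, which raises TypeError when added to
    a str. Returning chat unchanged (or raising a descriptive
    error) keeps the failure understandable."""
    if msg is None:
        # Alternatively: raise RuntimeError("model returned empty content")
        return chat
    return chat + msg

print(append_message("so far: ", "next chunk"))  # so far: next chunk
```

The deeper question is why the model returned no content at all (bad key, filtered response, wrong model name), so logging the full raw response at this point would also help.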

Local models

Is it going to support local AI models in the future?

Vector embeddings for different languages?

Hello! I am trying to learn a little more about gpt-migrate. Does gpt-migrate access a vector database with documentation for various languages, or is this pure gpt capability?

Thank you for this cool tool!

Allow request for dependency function

The LLM should be allowed to "request" to see a dependency function it cannot see (in another file etc). This can apply to either source or target.

Issue with GPT-3.5 for Python to PHP

Created Docker environment for php project in directory '/home/genealogia/projects/php-dna'.
✅ Identifying external dependencies for __init__.py...
✅ Identifying internal dependencies for __init__.py...
✅ Identifying external dependencies for io/reader.py...
✅ Identifying internal dependencies for io/reader.py...
✅ Identifying external dependencies for io/__init__.py...
✅ Identifying internal dependencies for io/__init__.py...
✅ Creating migration file for io/__init__.py...
Created file_name.ext at /home/genealogia/projects/php-dna
Traceback (most recent call last):

  File "/home/genealogia/projects/gpt-migrate/gpt_migrate/main.py", line 126, in <module>
    app()

  File "/home/genealogia/projects/gpt-migrate/gpt_migrate/main.py", line 99, in main
    migrate(sourceentry, globals)

  File "/home/genealogia/projects/gpt-migrate/gpt_migrate/main.py", line 96, in migrate
    migrate(dependency, globals)

  File "/home/genealogia/projects/gpt-migrate/gpt_migrate/main.py", line 97, in migrate
    write_migration(sourcefile, external_deps_list, globals)

  File "/home/genealogia/projects/gpt-migrate/gpt_migrate/steps/migrate.py", line 69, in write_migration
    llm_write_file(prompt,

  File "/home/genealogia/projects/gpt-migrate/gpt_migrate/utils.py", line 51, in llm_write_file
    file_name,language,file_content = globals.ai.write_code(prompt)[0]

IndexError: list index out of range
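The `IndexError` means `write_code(prompt)` returned an empty list: the model's reply contained no parseable code block, and indexing `[0]` then fails. A guard sketch (hypothetical helper; the triple shape mirrors the `file_name,language,file_content` unpacking in the traceback):

```python
def first_code_block(blocks: list):
    """Return the first (file_name, language, content) triple parsed
    from model output, failing with a descriptive error instead of
    IndexError when no code block was found."""
    if not blocks:
        raise ValueError(
            "Model returned no parseable code block; "
            "consider retrying or inspecting the raw response."
        )
    return blocks[0]

print(first_code_block([("db.php", "php", "<?php // ...")]))
```

As with the JSONDecodeError issue above, a retry with a firmer output-format instruction is the usual companion to this guard.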

Confused about dependency handling

Why are both poetry and pip used? Shouldn't dependencies be handled by poetry, so that after poetry install you run poetry run python main.py?

When I try that, I get an error that litellm is missing, but when I try to add it, I get:

Because no versions of litellm match >1.7.12,<2.0.0
 and litellm (1.7.12) depends on openai (>=1.0.0), litellm (>=1.7.12,<2.0.0) requires openai (>=1.0.0).
So, because gpt-migrate depends on both openai (^0.27.8) and litellm (^1.7.12), version solving failed.

When I do use pip for installing the dependencies, I get the following error:

Traceback (most recent call last):
  File "/Users/koeng/py/src/github.com/joshpxyne/gpt-migrate/gpt_migrate/main.py", line 7, in <module>
    from ai import AI
  File "/Users/koeng/py/src/github.com/joshpxyne/gpt-migrate/gpt_migrate/ai.py", line 6, in <module>
    from litellm import completion
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/__init__.py", line 28, in <module>
    from .timeout import timeout
  File "/opt/homebrew/lib/python3.11/site-packages/litellm/timeout.py", line 11, in <module>
    from openai.error import Timeout
ModuleNotFoundError: No module named 'openai.error'

I solved this with pip install --upgrade litellm==1.0.0. This is a little suspicious, since litellm is one of the only frozen dependencies in requirements.txt:

typer
langchain
yaspin
openai
tree_sitter
litellm==0.1.213
pydantic==1.10.8

So, what's up with mixing poetry and pip?

The model: `gpt-4-32k` does not exist

I have gpt-4 access but when I try to use gpt-4-32k it gives me this error below, thoughts?
openai.error.InvalidRequestError: The model: gpt-4-32k does not exist
