
ajndkr / boilerplate-x


🧩 Generate your project boilerplate code auto-magically

License: MIT

Languages: Python 100.00%
Topics: langchain, chatgpt, project-boilerplate, boilerplate-x

boilerplate-x's Introduction

Boilerplate-X - Create GitHub Project Boilerplate in Minutes!

Boilerplate-X is a ChatGPT-powered tool that creates GitHub project boilerplate for any programming language in just a few minutes. Have an idea? Turn it into a fully functional repository with Boilerplate-X!

DISCLAIMER! This project is highly experimental and is not ready for production use. Use at your own risk!

Why do I need this?

Starting a new project can be challenging, especially when it comes to writing basic, repetitive code. Many cookiecutter packages help create an outline for your project, but they aren't always tailored to your specific needs. Boilerplate-X uses ChatGPT to generate not only the project skeleton, like a cookiecutter, but also actual starter code, letting you focus on developing unique features instead of spending hours setting up your repository.



🚀 Features

  • Powered by ChatGPT and LangChain: Boilerplate-X uses OpenAI's ChatGPT API and the LangChain framework to generate your project template.
  • Create boilerplate for any programming language: Whether it's Python, JavaScript, Go, or any other language, Boilerplate-X has got you covered!
  • Easy to use: Create a template with a single CLI command.
  • Fast: Create boilerplate in minutes, not hours.
  • Customizable: Boilerplate-X allows you to customize your template with available options, such as adding unit tests, CI/CD, and more.
  • Open source: Boilerplate-X is open source and always will be. Contribute on GitHub!
  • GitHub integration: Boilerplate-X integrates with GitHub to create a new repository for your project.

Boilerplate-X has a collection of example boilerplates. You can find them in the examples folder.

📖 Table of Contents

  • 💾 Installation
  • 🎯 Quickstart
  • 🤝 Contributing
  • ⚖️ License

💾 Installation

Boilerplate-X is available on PyPI and can be installed via pip:

pip install boilerplate-x

🎯 Quickstart

Creating a GitHub project boilerplate with Boilerplate-X is as simple as running the following CLI command:

boilerplate-x -p "your project idea" -o "path/to/project"

Now, you'll have a new folder at path/to/project containing your GitHub project template, which includes a .gitignore, LICENSE, README.md, and more!

For more CLI options, run boilerplate-x --help.

๐Ÿค Contributing


Contributions are more than welcome! If you have an idea for a new feature or want to help improve Boilerplate-X, please create an issue or submit a pull request on GitHub.

See CONTRIBUTING.md for more information.

โš–๏ธ License

Boilerplate-X is released under the MIT License.

boilerplate-x's People

Contributors

ajndkr


boilerplate-x's Issues

Repeated timeout when creating package.json.

This is a great idea for a project BTW. Thanks!

I think it simply might take the model more than 60s to create the package.json in my case. Full run (with prompt):

โฏ boilerplate-x -p "VSCode extension to make it easier to work with nbdev, a library that makes it easier to Write, test, document, and distribute software packages and technical articles from Jupyter notebooks" -o "./nbdev-vscode-extension-boilerplate"
[04/01/23 11:29:38] INFO     INFO:boilerplate_x.cli:Welcome to Boilerplate-X CLI!                                              cli.py:72
                    INFO     INFO:boilerplate_x.cli:Generating project template...                                             cli.py:31
                    INFO     INFO:boilerplate_x.generator:Generating project structure...                                generator.py:60
[04/01/23 11:30:38] WARNING  WARNING:/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchai before_sleep.py:65
                             n/chat_models/openai.py:Retrying
                             langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_
                             retry in 4.0 seconds as it raised Timeout: Request timed out:
                             HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).
[04/01/23 11:30:53] INFO     INFO:boilerplate_x.generator:Generating file content: .gitignore...                         generator.py:70
[04/01/23 11:31:02] INFO     INFO:boilerplate_x.generator:Generating file content: README.md...                          generator.py:70
[04/01/23 11:31:23] INFO     INFO:boilerplate_x.generator:Generating file content: LICENSE...                            generator.py:70
[04/01/23 11:31:37] INFO     INFO:boilerplate_x.generator:Generating file content: package.json...                       generator.py:70
[04/01/23 11:32:37] WARNING  WARNING:/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchai before_sleep.py:65
                             n/chat_models/openai.py:Retrying
                             langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_
                             retry in 4.0 seconds as it raised Timeout: Request timed out:
                             HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).
[04/01/23 11:33:41] WARNING  WARNING:/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchai before_sleep.py:65
                             n/chat_models/openai.py:Retrying
                             langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_
                             retry in 4.0 seconds as it raised Timeout: Request timed out:
                             HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).
[04/01/23 11:34:46] WARNING  WARNING:/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchai before_sleep.py:65
                             n/chat_models/openai.py:Retrying
                             langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_
                             retry in 4.0 seconds as it raised Timeout: Request timed out:
                             HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).
[04/01/23 11:35:50] WARNING  WARNING:/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchai before_sleep.py:65
                             n/chat_models/openai.py:Retrying
                             langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_
                             retry in 8.0 seconds as it raised Timeout: Request timed out:
                             HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).
[04/01/23 11:36:58] WARNING  WARNING:/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchai before_sleep.py:65
                             n/chat_models/openai.py:Retrying
                             langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_
                             retry in 10.0 seconds as it raised Timeout: Request timed out:
                             HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60).
Traceback (most recent call last):
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/http/client.py", line 1374, in getresponse
    response.begin()
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/http/client.py", line 279, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/socket.py", line 705, in readinto
    return self._sock.recv_into(b)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/ssl.py", line 1274, in recv_into
    return self.read(nbytes, buffer)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/ssl.py", line 1130, in read
    return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
    resp = conn.urlopen(
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/urllib3/connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/urllib3/connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/openai/api_requestor.py", line 516, in request_raw
    result = _thread_context.session.request(
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/requests/adapters.py", line 578, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/taytay/miniconda3/envs/boilerplate-x/bin/boilerplate-x", line 8, in <module>
    sys.exit(main())
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/boilerplate_x/cli.py", line 128, in main
    run(prompt, output_path, verbose, customisation_kwargs, github_repo_creator_kwargs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/boilerplate_x/cli.py", line 32, in run
    generator.generate_template()
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/boilerplate_x/generator.py", line 47, in generate_template
    self._generate_project_files(project_structure)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/boilerplate_x/generator.py", line 71, in _generate_project_files
    file_content = self.project_file_chain.predict(
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chains/llm.py", line 151, in predict
    return self(kwargs)[self.output_key]
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chains/llm.py", line 57, in _call
    return self.apply([inputs])[0]
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chains/llm.py", line 118, in apply
    response = self.generate(input_list)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chains/llm.py", line 62, in generate
    return self.llm.generate_prompt(prompts, stop)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chat_models/base.py", line 79, in generate_prompt
    raise e
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chat_models/base.py", line 76, in generate_prompt
    output = self.generate(prompt_messages, stop=stop)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chat_models/base.py", line 53, in generate
    results = [self._generate(m, stop=stop) for m in messages]
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chat_models/base.py", line 53, in <listcomp>
    results = [self._generate(m, stop=stop) for m in messages]
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 254, in _generate
    response = self.completion_with_retry(messages=message_dicts, **params)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 216, in completion_with_retry
    return _completion_with_retry(**kwargs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 214, in _completion_with_retry
    return self.client.create(**kwargs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/openai/api_requestor.py", line 216, in request
    result = self.request_raw(
  File "/home/taytay/miniconda3/envs/boilerplate-x/lib/python3.10/site-packages/openai/api_requestor.py", line 526, in request_raw
    raise error.Timeout("Request timed out: {}".format(e)) from e
openai.error.Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60)
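
One possible mitigation (not a confirmed fix, and boilerplate-x does not currently expose a timeout option): LangChain's ChatOpenAI accepts a request_timeout argument, so a configurable timeout would let slow generations finish instead of retrying forever. A minimal sketch, assuming the generator's LLM construction were made configurable:

from langchain.chat_models import ChatOpenAI

# Sketch: raise the read timeout above the 60 s default so long generations
# (such as a large package.json) are not cut off mid-request.
llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    request_timeout=180,  # seconds; the default that produced the errors above is 60
    max_retries=6,
)

Treating this as a feature request: a CLI flag that forwards such a value to the generator would cover cases like this one.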

Binary files have text descriptions instead of the file contents

This makes "sense", but was initially surprising. :)

Here's the repo it made for me:
https://github.com/Taytay/boilerplate-x-vscode-extension

From this prompt: "VSCode extension that adds a single command to restart and run all cells in a Jupyter notebook. Should include github actions to build the extension, create a release that include the built vsix file, and publish the extension."

The PNG file is actually this text file: https://github.com/Taytay/boilerplate-x-vscode-extension/blob/main/images/icon.png
This file contains the icon image for the VSCode extension. The icon should be a recognizable image that represents the functionality of the extension. The recommended size for the icon is 128x128 pixels. The icon should be saved in PNG format to ensure compatibility with all platforms

Of course we "know" that this should be an AI-generated png file, right? ;)
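
One way to address this (a sketch only; the helper names are hypothetical, not the project's API) is to detect binary extensions up front and write an empty placeholder instead of asking the LLM for file content:

import os

# Hypothetical helper: extensions whose contents should not be LLM-generated text.
BINARY_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".ico", ".vsix", ".zip"}

def is_binary_path(path: str) -> bool:
    return os.path.splitext(path)[1].lower() in BINARY_EXTENSIONS

def write_placeholder(path: str) -> None:
    # Create an empty file and leave the real asset for the user to supply.
    open(path, "wb").close()

The generator could then skip its file-content chain whenever is_binary_path(...) is true, saving an LLM call as well.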

feat(speed): Reduce redundant LLM calls in case of re-runs

Description

If a user runs boilerplate-x and hits an issue half-way, it's likely that boilerplate-x has already generated a few files. Upon re-run, the current implementation will re-create the file structure as well as the files which were created in the previous run.

Related #5

Acceptance Criteria

  • update ProjectGenerator class to first check for existing files before executing LLM chains (see the sketch below)
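
A minimal sketch of that check (illustrative only; the function name and the generate callable stand in for the real ProjectGenerator internals):

import os

def generate_if_missing(file_path: str, generate) -> None:
    # Reuse output from a previous run instead of spending another LLM call.
    if os.path.exists(file_path):
        return
    content = generate(file_path)  # e.g. the existing project_file_chain.predict call
    with open(file_path, "w") as f:
        f.write(content)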

bug(output): fix project structure chain output

Description

Sometimes the project_structure_chain output does not follow the yaml format. For example,

prompt: a full stack fastapi application with postgresql, single `/chat` endpoint

Project structure:
- Dockerfile
- docker-compose.yml
- requirements.txt
- app/ - __init__.py - main.py - database.py - models.py - schemas.py - crud.py -
  routers/ - __init__.py - chat.py
- migrations/ - alembic.ini - env.py - README - script.py.mako - versions/
- tests/ - __init__.py - test_chat.py
- README.md

Acceptance Criteria

  • add an output parser/guardrail to fix bad output (see the sketch below)
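
A sketch of one such guardrail (illustrative; the retry callable stands in for re-running the chain with a stricter format instruction): parse the output as YAML and re-ask on failure.

import yaml

def parse_project_structure(raw: str, retry, max_attempts: int = 3):
    # Accept the output only if it is a valid YAML list; otherwise re-ask the model.
    for _ in range(max_attempts):
        try:
            parsed = yaml.safe_load(raw)
            if isinstance(parsed, list):
                return parsed
        except yaml.YAMLError:
            pass
        raw = retry(raw)  # e.g. re-run project_structure_chain with a format reminder
    raise ValueError("project_structure_chain did not return a valid YAML list")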

feat(core): Add planner/executor logic flow

boilerplate-x is good at generating the project structure but fails to generate coherent file contents due to lack of contextual information. LangChain recently added a planner/executor agent which can be used here to generate better file content.
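
For reference, a minimal sketch of wiring up LangChain's experimental plan-and-execute agent (import paths vary by LangChain version; the empty tools list and the prompt are placeholders, and how this plugs into the generator is an open design question):

from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

llm = ChatOpenAI(temperature=0)
planner = load_chat_planner(llm)  # decomposes the request into ordered steps
executor = load_agent_executor(llm, tools=[], verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor)
# agent.run("Generate coherent file contents for the planned project structure")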
