ferrislucas / promptr

Promptr is a CLI tool that lets you use plain English to instruct GPT3 or GPT4 to make changes to your codebase.

License: MIT License

JavaScript 100.00%
ai cli command-line chatgpt gpt-3 gpt-35-turbo gpt-4 gpt3 gpt4 prompt-engineering

promptr's Introduction

Promptr

Promptr is a CLI tool that lets you use plain English to instruct OpenAI LLM models to make changes to your codebase. Changes are applied directly to the files that you reference from your prompt.

Usage

promptr [options] -p "your instructions" <file1> <file2> <file3> ...

I've found this to be a good workflow:

  • Commit any changes, so you have a clean working area.
  • Author your prompt in a file. The prompt should contain specific, clear instructions.
  • Make sure your prompt contains the relative paths of any files that are relevant to your instructions.
  • Use Promptr to execute your prompt. Provide the path to your prompt file using the -p option: promptr -p my_prompt.txt

Promptr will apply the model's code directly to your files. Use your favorite git UI to inspect the results.
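That workflow might look like the following sketch (the file names, prompt contents, and paths are hypothetical):

```shell
# 1. Commit so you have a clean working area
git add -A && git commit -m "checkpoint before promptr run"

# 2. Author the prompt in a file, referencing relevant files by relative path
cat > my_prompt.txt <<'EOF'
Rename the fetchData function in src/api.js to fetchUserData,
and update every call site in src/app.js.
EOF

# 3. Preview the prompt that will be sent (no files are changed) ...
promptr --dry-run -p my_prompt.txt src/api.js src/app.js

# 4. ... then execute it and inspect the results with git
promptr -p my_prompt.txt src/api.js src/app.js
git diff
```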





Examples

The PRs below are good examples of what can be accomplished using Promptr. You can find links to the individual commits and the prompts that created them in the PR descriptions.

Templating

Promptr supports templating using liquidjs, which allows users to incorporate templating commands within their prompt files. This feature enhances the flexibility and reusability of prompts, especially when working on larger projects with repetitive patterns or standards.

Using Includes

Projects can have one or more "includes"—reusable snippets of code or instructions—that can be included from a prompt file. These includes may contain project-specific standards, instructions, or code patterns, enabling users to maintain consistency across their codebase.

For example, you might have an include file named _project.liquid with the following content:

This project uses Node version 18.
Use yarn for dependency management.
Use import not require in JavaScript.
Don't include `module.exports` at the bottom of JavaScript classes.
Alphabetize method names and variable declarations.

In your prompt file, you can use the render tag from liquidjs to pull this include into the prompt you're working on:

{% render '_project.liquid' %}
// your prompt here

This approach allows for the development of reusable include files that can be shared across multiple projects or within different parts of the same project.
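As a concrete sketch (the file names and prompt contents are illustrative), the include and a prompt that consumes it could be created like this:

```shell
# A reusable include holding project-wide conventions
cat > _project.liquid <<'EOF'
This project uses Node version 18.
Use yarn for dependency management.
EOF

# A prompt file that pulls the include in via liquidjs's render tag
cat > cleanup_prompt.liquid <<'EOF'
{% render '_project.liquid' %}
Remove unused imports from src/app.js.
EOF
```

Running `promptr -p cleanup_prompt.liquid src/app.js` would then expand the render tag before the prompt is sent to the model.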

Example Use Cases

  • Project-Wide Coding Standards: Create an include file outlining coding standards, and reference it from every prompt so that new code follows them.

  • Boilerplate Code: Develop a set of boilerplate code snippets for different parts of the application (e.g., model definitions, API endpoints) and include them as needed.

  • Shared Instructions: Maintain a set of instructions or guidelines for specific tasks (e.g., how to document functions) and include them in relevant prompt files.

By leveraging the templating feature, prompt engineers can significantly reduce redundancy and ensure consistency in prompt creation, leading to more efficient and standardized modifications to the codebase.



Options

-p, --prompt <prompt>
    Specifies the prompt to use in non-interactive mode. A path or a URL can also be given; in that case, the content at the path or URL is used as the prompt. The prompt can leverage the liquidjs templating system.

-m, --model <model>
    Optional flag to set the model; defaults to gpt-4o. The value "gpt3" selects the gpt-3.5-turbo model.

-d, --dry-run
    Optional boolean flag that runs the tool in dry-run mode: only the prompt that will be sent to the model is displayed. No changes are made to your filesystem when this option is used.

-i, --interactive
    Optional boolean flag that enables interactive mode, in which the user provides input interactively. If this flag is not set, the tool runs in non-interactive mode.

-t, --template <templateName | templatePath>
    Optional flag that specifies a built-in template name or the path to a custom template file.

-x
    Optional boolean flag. With the default template, Promptr parses the model's response and applies the resulting operations to your file system. You only need to pass -x if you've created your own template and want Promptr to parse and apply its output the same way the built-in "refactor" template output is parsed and applied.

-o, --output-path <outputPath>
    Optional string flag that specifies the path to the output file. If this flag is not set, the output is printed to stdout.

-v, --verbose
    Optional boolean flag that enables verbose output, providing more detailed information during execution.

-dac, --disable-auto-context
    Prevents files referenced in the prompt from being automatically included in the context sent to the model.

--version
    Display the version and exit.

Additional parameters can specify the paths to files that will be included as context in the prompt. The parameters should be separated by a space.
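For example, a hypothetical run that passes two files as context for the instructions to operate on:

```shell
# Both paths are sent as context; the instructions refer to them implicitly
promptr -p "Extract the duplicated validation logic in these files into a shared helper" src/signup.js src/login.js
```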



Requirements

  • Node 18
  • An API key from OpenAI: https://beta.openai.com/account/api-keys
  • Billing setup in OpenAI: https://platform.openai.com/account/billing/overview

Installation

With yarn

yarn global add @ifnotnowwhen/promptr

With npm

npm install -g @ifnotnowwhen/promptr

With the release binaries

You can install Promptr by copying the binary for the current release to your path. Only macOS binaries are available right now.

Set OpenAI API Key

An environment variable called OPENAI_API_KEY is expected to contain your OpenAI API secret key.
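For example, in your shell profile (the key shown is a placeholder, not a real key):

```shell
# ~/.bashrc or ~/.zshrc
export OPENAI_API_KEY="sk-your-key-here"
```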

Build Binaries using PKG

npm run bundle
npm run build:<platform>    # platform: win | macos | linux
npm run test-binary

License

Promptr is released under the MIT License.

promptr's People

Contributors

dependabot[bot], eiriksm, ferrislucas, keiththomps, umungobungo


promptr's Issues

It is not running after install

install:

# npm install -g @ifnotnowwhen/promptr
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE   package: '[email protected]',
npm WARN EBADENGINE   required: { node: '>=14' },
npm WARN EBADENGINE   current: { node: 'v12.22.12', npm: '7.5.2' }
npm WARN EBADENGINE }
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE   package: '[email protected]',
npm WARN EBADENGINE   required: { node: '>=14' },
npm WARN EBADENGINE   current: { node: 'v12.22.12', npm: '7.5.2' }
npm WARN EBADENGINE }

added 16 packages, and audited 17 packages in 2s

2 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

$ lsb_release -a

No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 11 (bullseye)
Release:        11
Codename:       bullseye

$ promptr

file:///usr/local/lib/node_modules/@ifnotnowwhen/promptr/bin/index.js:3
await MainService.call()
^^^^^

SyntaxError: Unexpected reserved word
    at Loader.moduleStrategy (internal/modules/esm/translators.js:133:18)
    at async link (internal/modules/esm/module_job.js:42:21)
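The install log above shows Node v12.22.12, but the top-level `await` in bin/index.js requires a newer runtime (top-level await landed in Node 14.8, and this project calls for Node 18). Upgrading Node, e.g. with nvm, should resolve it:

```shell
# Install and switch to Node 18, then reinstall promptr
nvm install 18
nvm use 18
npm install -g @ifnotnowwhen/promptr
```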

Don't send files referenced from inside liquid template comment tags

See the example below where the file /abc/test.txt is referenced from inside a liquid comment block. The contents of the file are sent to the LLM, but I wouldn't expect them to be because the file reference is not part of the prompt.

Example:

Instructions here

{% comment %}
// this file shouldn't be included in the prompt sent to the LLM
See file /abc/test.txt
{% endcomment %}

Error using GPT 4

I followed the command in the readme:

$ promptr -m gpt4 -p "Cleanup the code in these files" app/index.js

and get the below error:

data: {
      error: {
        message: 'The model: `gpt-4` does not exist',
        type: 'invalid_request_error',
        param: null,
        code: 'model_not_found'
      }
    }

Response from GPT4 is sometimes invalid JSON

It seems that sometimes GPT-4 doesn't return valid JSON, using """ as some kind of string delimiter:

      "fileContents": """
import os
import json
... etc ...
"""

Does it need a stricter prompt?
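The failure is easy to reproduce outside promptr: `"""` is not a valid JSON string delimiter, so `JSON.parse` throws as soon as it hits the extra quotes. A minimal sketch (not promptr's actual parsing code):

```shell
node -e '
const bad  = `{"fileContents": """import os"""}`;  // shape the model returned
const good = `{"fileContents": "import os"}`;      // what valid JSON requires
try { JSON.parse(bad); } catch (e) { console.log("bad:", e.name); }
console.log("good:", JSON.parse(good).fileContents);
'
```

which prints `bad: SyntaxError` followed by `good: import os`.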

Here's a full failed run:

promptr my-script.py -p "fix all the FIX comments in this Python script"
Execution time: 23615ms
There was an error parsing the model's output:
{
  "operations": [
    {
      "crudOperation": "update",
      "filePath": "my-script.py",
      "fileContents": """
import os
import json
...
"""
    }
  ]
}
Attemping to extract json:
{
  "operations": [
    {
      "crudOperation": "update",
      "filePath": "my-script.py",
      "fileContents": """
import os
import json
...
"""
    }
  ]
}
SyntaxError: Unexpected string in JSON at position 147
    at JSON.parse (<anonymous>)
    at extractOperationsFromOutput (file:///home/myuser/.nvm/versions/node/v16.19.0/lib/node_modules/@ifnotnowwhen/promptr/src/services/ExtractOperationsFromOutput.js:14:29)
    at Function.call (file:///home/myuser/.nvm/versions/node/v16.19.0/lib/node_modules/@ifnotnowwhen/promptr/src/services/PromptrService.js:57:26)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Function.call (file:///home/myuser/.nvm/versions/node/v16.19.0/lib/node_modules/@ifnotnowwhen/promptr/src/Main.js:38:12)
    at async file:///home/myuser/.nvm/versions/node/v16.19.0/lib/node_modules/@ifnotnowwhen/promptr/bin/index.js:4:18
file:///home/myuser/.nvm/versions/node/v16.19.0/lib/node_modules/@ifnotnowwhen/promptr/src/services/RefactorResultProcessor.js:6
    data.operations.forEach((operation) => {
         ^

TypeError: Cannot read properties of null (reading 'operations')
    at Function.call (file:///home/myuser/.nvm/versions/node/v16.19.0/lib/node_modules/@ifnotnowwhen/promptr/src/services/RefactorResultProcessor.js:6:10)
    at Function.call (file:///home/myuser/.nvm/versions/node/v16.19.0/lib/node_modules/@ifnotnowwhen/promptr/src/services/PromptrService.js:62:37)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Function.call (file:///home/myuser/.nvm/versions/node/v16.19.0/lib/node_modules/@ifnotnowwhen/promptr/src/Main.js:38:12)
    at async file:///home/myuser/.nvm/versions/node/v16.19.0/lib/node_modules/@ifnotnowwhen/promptr/bin/index.js:4:18

It would be better if the ‘context’ object available in templates was an object with child properties

Right now, ‘context’ is a hash with keys that are file paths.

  • it would be easier to add things to context later if context was an object that could have properties added to it over time.
  • the file paths for hash keys mechanism in place now probably doesn’t work right for some paths.
  • It would be great if ‘context.files’ was an array of hashes, each hash having keys for file path, file name, file contents, etc.

404 response

I have GPT4 access, but I'm getting a 404 response, without any clues as to where or why, even with -v.

OPENAI_API_KEY=[key here] promptr -m gpt4 -x -t refactor $(git ls-tree -r --name-only HEAD | grep ".js" | tr '\n' ' ') -p "Remove any unused methods -v"

Prompt token count: 322002
(node:58364) UnhandledPromiseRejectionWarning: Error: Request failed with status code 404
    at createError (/Users/glenn/.nvm/versions/node/v14.19.0/lib/node_modules/@ifnotnowwhen/promptr/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/Users/glenn/.nvm/versions/node/v14.19.0/lib/node_modules/@ifnotnowwhen/promptr/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/Users/glenn/.nvm/versions/node/v14.19.0/lib/node_modules/@ifnotnowwhen/promptr/node_modules/axios/lib/adapters/http.js:322:11)
    at IncomingMessage.emit (events.js:412:35)
    at endReadableNT (internal/streams/readable.js:1334:12)
    at processTicksAndRejections (internal/process/task_queues.js:82:21)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:58364) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)
(node:58364) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Getting an error when trying to run gpt4

When running promptr -m gpt4 -p "Clean up the code in this file" foo.js

I get:

data: {
      error: {
        message: 'The model: `gpt-4` does not exist',
        type: 'invalid_request_error',
        param: null,
        code: 'model_not_found'
      }
    }

Gateway timeout

Is there any way to fix the gateway timeout error?

  data: {
      error: {
        code: 524,
        message: 'Gateway timeout.',
        param: null,
        type: 'cf_gateway_timeout'
      }
    }
  },
  isAxiosError: true,
  toJSON: [Function: toJSON]
}

Display an error when the prompt can't be found

The -p option takes a string, a path, or a URL. If the value is a path or URL and the prompt isn't found (because the path or URL doesn't exist), the user should see a friendly error message.

Accept wildcard paths

It’s not super convenient having to specify each file to send separated by a space. It would be more ergonomic to be able to pass paths with wildcards.

Example:
Something like this would pass all files that end in .rb or .js. The ‘-r’ would make it recursive:
promptr *.rb *.js -r -p “summarize how this code works”
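In the meantime, the shell (or find) can do the expansion before promptr sees the arguments; a workaround sketch, with illustrative patterns:

```shell
# Non-recursive: the shell expands the globs into individual paths
promptr *.rb *.js -p "summarize how this code works"

# Recursive: build the file list with find(1)
promptr $(find . -name '*.rb' -o -name '*.js') -p "summarize how this code works"
```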

Error when any prompt ran

Whenever I run promptr I get an error along the lines of

Execution time: 7194ms
undefined:1
--BEGIN-FILE: test.rb
 ^

SyntaxError: No number after minus sign in JSON at position 1

SyntaxError: Unexpected token c in JSON

Hi, thanks for your repo. I don't know why the following doesn't work.

Here is minimal code to reproduce it:

package main

import (
	"crypto/md5"
	"encoding/hex"
)

type Input struct {
	Types     string `json:"types"` // comments!
	Json      string `json:"json"`
	Check_str string `json:"check_str"`
}

func validateInput(input Input) bool {
	expected := getCheckStr(input)

	return expected == input.Check_str
}

func getCheckStr(input Input) string {
	str := input.Types + input.Json
	hash := md5.Sum([]byte(str))
	expected := hex.EncodeToString(hash[:])[0:10]
	return expected
}

prompt: Make sample.go more efficient.

(node:19804) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `promptr --trace-warnings ...` to show where the warning was created)
Execution time: 14615ms
There was an error parsing the model's output:
{
  "operations": [
    {
      "crudOperation": "update",
      "filePath": "sample.go",
      "fileContents": "package main\n\nimport (\n\t"crypto/md5"\n\t"encoding/hex"\n)\n\ntype Input struct {\n\tTypes     string `json:\"types\"` // comments!\n\tJson      string `json:\"json\"`\n\tCheck_str string `json:\"check_str\"`\n}\n\nfunc validateInput(input Input) bool {\n\texpected := getCheckStr(input)\n\n\treturn expected == input.Check_str\n}\n\nfunc getCheckStr(input Input) string {\n\tstr := input.Types + input.Json\n\thash := md5.Sum([]byte(str))\n\texpected := hex.EncodeToString(hash[:])[0:10]\n\treturn expected\n}\n"
    }
  ]
}
Attemping to extract json:
{
  "operations": [
    {
      "crudOperation": "update",
      "filePath": "sample.go",
      "fileContents": "package main\n\nimport (\n\t"crypto/md5"\n\t"encoding/hex"\n)\n\ntype Input struct {\n\tTypes     string `json:\"types\"` // comments!\n\tJson      string `json:\"json\"`\n\tCheck_str string `json:\"check_str\"`\n}\n\nfunc validateInput(input Input) bool {\n\texpected := getCheckStr(input)\n\n\treturn expected == input.Check_str\n}\n\nfunc getCheckStr(input Input) string {\n\tstr := input.Types + input.Json\n\thash := md5.Sum([]byte(str))\n\texpected := hex.EncodeToString(hash[:])[0:10]\n\treturn expected\n}\n"
    }
  ]
}
SyntaxError: Unexpected token c in JSON at position 142
    at JSON.parse (<anonymous>)
    at /snapshot/promptr/dist/index.js
    at w.call (/snapshot/promptr/dist/index.js)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async v.call (/snapshot/promptr/dist/index.js)
    at async /snapshot/promptr/dist/index.js
node:internal/process/task_queues:95
    runMicrotasks();
    ^

TypeError: Cannot read properties of null (reading 'operations')
    at h.call (/snapshot/promptr/dist/index.js)
    at w.call (/snapshot/promptr/dist/index.js)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async v.call (/snapshot/promptr/dist/index.js)
    at async /snapshot/promptr/dist/index.js

Node.js v18.5.0

Btw, I read [a post today](https://github.com/minimaxir/simpleaichat/blob/main/examples/notebooks/simpleaichat_coding.ipynb). Maybe we can use tricks from it, like `"stop": ["``` ", "```\n"]` etc.

Cut-off outputs, unreliable

Hey!

Great project, but I've found that a lot of the time the output is cut off by the model:

[screenshot of truncated output omitted]

Moreover, GPT-4 sometimes fails and simply doesn't return a response.
