pythagora-io / gpt-pilot
The first real AI developer
License: Other
When you ask for a change, it will write
// ... the rest of your code remains unchanged
or
// ... other existing code
When gpt-pilot tries to save a file without an extension (e.g. Dockerfile), the following error occurs:
Traceback (most recent call last):
File ".../gpt-pilot/pilot/main.py", line 35, in
project.start()
File ".../gpt-pilot/pilot/helpers/Project.py", line 81, in start
self.developer.start_coding()
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 32, in start_coding
self.implement_task()
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 53, in implement_task
self.execute_task(convo_dev_task, task_steps, continue_development=True)
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 87, in execute_task
self.project.save_file(data)
File ".../gpt-pilot/pilot/helpers/Project.py", line 119, in save_file
data['name'] = data['path'].rsplit('/', 1)[1]
IndexError: list index out of range
This should be an easy fix, depending on why the second condition was added to the if statement in the line above 👽
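A minimal sketch of one possible fix, assuming the intent of that line is simply to extract the file name from the path: `os.path.basename` already handles paths that contain no separator at all, which is exactly the case that makes `rsplit('/', 1)[1]` raise IndexError. The helper name is hypothetical, not from the gpt-pilot codebase:

```python
import os

def file_name_from_path(path):
    """Return the file name portion of a path.

    os.path.basename handles paths with no separator
    (e.g. "Dockerfile"), the case that makes
    rsplit('/', 1)[1] raise IndexError.
    """
    # Normalise Windows-style separators first so both kinds work.
    return os.path.basename(path.replace("\\", "/"))

print(file_name_from_path("Dockerfile"))      # Dockerfile
print(file_name_from_path("app/Dockerfile"))  # Dockerfile
print(file_name_from_path("src/main.py"))     # main.py
```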
tl;dr: either update database.py to accept "postgres", or change the README to say DATABASE_TYPE=postgresql.
The README states:
PostgreSQL database info to the .env file:
DATABASE_TYPE=postgres
However, in database.py:
if DATABASE_TYPE == "postgresql":
    sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}" CASCADE'
elif DATABASE_TYPE == "sqlite":
    sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
else:
    raise ValueError(f"Unsupported DATABASE_TYPE: {DATABASE_TYPE}")
database.execute_sql(sql)
Basically, every time gpt-pilot starts generating code, it runs into an "Unterminated string" error.
Often this is after it has been building some of the files.
If you rerun the prompt you run into the same issue.
It has done this on gpt-4 and gpt-3.5 16k on multiple different project attempts.
When re-running gpt-pilot on a project with all steps "DONE" I'm prompted with:
How did GPT Pilot do? Were you able to create any app that works? Please write any feedback you have or just press ENTER to exit:
I would prefer to be able to interact further with the AI:
Every time it executes a command the app prints
Saving file /some/file.txt
Saving file /another/file.py
...
for all of the files, whether they've been updated or not. It should calculate a hash/checksum each time it wants to update a file and skip the save when nothing has changed.
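To illustrate the suggestion, here is a hedged sketch of a checksum guard; the function and cache names are made up for illustration and are not from the gpt-pilot codebase:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest of the file content."""
    return hashlib.sha256(data).hexdigest()

def should_save(path: str, new_content: bytes, cache: dict) -> bool:
    """Skip the write (and the 'Saving file' log line) when unchanged."""
    digest = content_hash(new_content)
    if cache.get(path) == digest:
        return False  # content identical to last save
    cache[path] = digest
    return True

cache = {}
print(should_save("a.txt", b"hello", cache))  # True  (first save)
print(should_save("a.txt", b"hello", cache))  # False (unchanged)
print(should_save("a.txt", b"bye", cache))    # True  (changed)
```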
When setting this project up and following the documentation under the "How to start using gpt-pilot?" section, step 8 mentions setting up the environment variables in the .env file.
It says the following:
to change from SQLite to PostgreSQL in your .env just set DATABASE_TYPE=postgres
But this results in an error:
But this results in an error:
Traceback (most recent call last):
File "/home/ramkrishna/Documents/experiments/gpt-pilot/pilot/db_init.py", line 5, in <module>
drop_tables()
File "/home/ramkrishna/Documents/experiments/gpt-pilot/pilot/database/database.py", line 385, in drop_tables
raise ValueError(f"Unsupported DATABASE_TYPE: {DATABASE_TYPE}")
ValueError: Unsupported DATABASE_TYPE: postgres
The error seems to be due to this check in pilot/database/database.py:380
if DATABASE_TYPE == "postgresql":
    sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
elif DATABASE_TYPE == "sqlite":
    sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
else:
    raise ValueError(f"Unsupported DATABASE_TYPE: {DATABASE_TYPE}")
The correct check should be:
if DATABASE_TYPE == "postgres":
    sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
elif DATABASE_TYPE == "sqlite":
    sql = f'DROP TABLE IF EXISTS "{table._meta.table_name}"'
else:
    raise ValueError(f"Unsupported DATABASE_TYPE: {DATABASE_TYPE}")
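Either fix works. A slightly more forgiving sketch would normalise the configured value once, so both the README's "postgres" and the code's "postgresql" are accepted; the names below are illustrative, not the project's actual API:

```python
# Accept both spellings and map them to one canonical value.
ALIASES = {"postgres": "postgres", "postgresql": "postgres", "sqlite": "sqlite"}

def normalize_database_type(value: str) -> str:
    """Return the canonical DATABASE_TYPE, or raise on unknown values."""
    try:
        return ALIASES[value.strip().lower()]
    except KeyError:
        raise ValueError(f"Unsupported DATABASE_TYPE: {value}") from None

print(normalize_database_type("postgresql"))  # postgres
print(normalize_database_type("postgres"))    # postgres
print(normalize_database_type("sqlite"))      # sqlite
```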
I got the insufficient funds error.
I wrote out my project needs and answered step by step, but after GPT-Pilot confirmed the User Task, it did not respond any more.
? What is the project name? file_manage_sys
? Describe your app in as many details as possible. "Please write a file upload and download management system with a front-end interface. It should also support automatic
file cleaning when the system's storage capacity exceeds 80% usage. The system should automatically delete the oldest files. Additionally, this system needs to support user
? Understood. Let's start with the task of getting additional answers for the Web App "file_manage_sys".
1. Do you have any specific requirements for the front-end interface of the file management system?
EVERYTHING_CLEAR EVERYTHING_CLEAR
Great! Now that everything is clear, let's move on to the next task: breaking down user stories.
? Great! Now that everything is clear, let's move on to the next task: breaking down user stories.
Based on the description of the "file_manage_sys" Web App, here are a few user stories:
1. As a user, I want to be able to upload files to the system.
2. As a user, I want to be able to download files from the system.
3. As a user, I want the system to automatically clean up files when storage capacity exceeds 80%.
4. As a user, I want to be able to login to the system.
5. As a user, I want to be able to manage user accounts.
Do you have any additional user stories or any modifications to the existing ones?
Please provide your response in the format: "USER_STORIES <your user stories>" or "USER_STORIES_CLEAR" if everything is clear.
**IMPORTANT**
Remember to break down user stories based on the description of the "file_manage_sys" Web App. USER_STORIES_CLEAR
Great! Now let's move on to the next task: breaking down user tasks.
Based on the description of the "file_manage_sys" Web App and the user stories we have identified, here are a few user tasks:
1. User Task: Upload Files
- User needs to select files from their local system to upload to the file management system.
- User should be able to provide a name or description for the uploaded files.
- The system should validate the file format, size, and other relevant criteria.
- The uploaded files should be stored in the system's storage.
2. User Task: Download Files
- User needs to search for files in the system and select the desired files to download.
- User should have the option to download individual files or multiple files simultaneously.
- The system should ensure the security and integrity of the downloaded files.
3. User Task: Automatic File Cleaning
- The system should monitor the storage capacity and check if it exceeds 80% usage.
- If the storage capacity exceeds 80%, the system should automatically identify and delete the oldest files to free up space.
- The system should have a mechanism to track file upload and modification timestamps for accurate deletion.
4. User Task: User Login
? Great! Now let's move on to the next task: breaking down user tasks.
Based on the description of the "file_manage_sys" Web App and the user stories we have identified, here are a few user tasks:
1. User Task: Upload Files
- User needs to select files from their local system to upload to the file management system.
- User should be able to provide a name or description for the uploaded files.
- The system should validate the file format, size, and other relevant criteria.
- The uploaded files should be stored in the system's storage.
2. User Task: Download Files
- User needs to search for files in the system and select the desired files to download.
- User should have the option to download individual files or multiple files simultaneously.
- The system should ensure the security and integrity of the downloaded files.
3. User Task: Automatic File Cleaning
Remember to break down user tasks based on the description of the "file_manage_sys" Web App and the user stories we have identified. TASK_CLEAR
? Fantastic! Now that we have a clear understanding of the requirements for the "file_manage_sys" Web App, we can proceed with the development process. I will take the user
Throughout the development process, I will regularly communicate with you to provide updates, gather feedback, and address any questions or concerns. We will work closely t
Is there anything else you would like to discuss before we begin the development process for the "file_manage_sys" Web App?
? Great! Since there is nothing else to discuss at the moment, we can move forward with the development process for the "file_manage_sys" Web App. I will update you regular
Thank you for your cooperation, and I look forward to working with you to bring your vision to life!
? Thank you for your confirmation. I will now proceed with the development process for the "file_manage_sys" Web App. I will keep you updated on the progress and reach out
If you have any additional questions or concerns during the development process, please don't hesitate to let me know. I'm here to assist you.
? Great! I'm excited to start working on the development of the "file_manage_sys" Web App. I will keep you updated on the progress and reach out if any questions or clarifi
Once the development is complete, I will provide you with a demo of the Web App and gather your feedback for any necessary adjustments or modifications.
Thank you for entrusting us with your project. We will do our best to deliver a high-quality and user-friendly Web App that meets your expectations.
Let's get started, and I'll be in touch soon with updates on the development progress of the "file_manage_sys" Web App.
I've started working on benchmarking here:
https://github.com/nalbion/gpt-pilot/tree/feature/agbenchmark/agbenchmark
I need to figure out how to get the benchmark script to provide responses - probably using another LLM.
Response message: Where should the .txt file containing the word 'Washington' be stored or saved?
https://lablab.ai/event/autogpt-arena-hacks
Dear gpt-pilot developer,
Greetings! I am vansinhu, a community developer and volunteer at InternLM. Your work has been immensely beneficial to me, and I believe it can be effectively utilized in InternLM as well. Feel free to join our Discord: https://discord.gg/gF9ezcmtM3 . I hope to get in touch with you.
Best regards,
vansinhu
When creating a project, it asks 4 or 5 questions, then says everything is clear...
Then immediately I get the following error message:
There was a problem with request to openai API:
API responded with status code: 429.
Response text: {
"error": {
"message": "Rate limit reached for 10KTPM-200RPM in organization org-XXXXXXXXXXXX on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.",
"type": "tokens",
"param": null,
"code": "rate_limit_exceeded"
}
}
I have access to GPT-4, is this just OpenAI currently having issues?
When it tries to run the command, the following error occurs:
Traceback (most recent call last):
File "main.py", line 35, in
project.start()
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\Project.py", line 81, in start
self.developer.start_coding()
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\agents\Developer.py", line 32, in start_coding
self.implement_task()
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\agents\Developer.py", line 53, in implement_task
self.execute_task(convo_dev_task, task_steps, continue_development=True)
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\agents\Developer.py", line 71, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\cli.py", line 188, in run_command_until_success
cli_response = execute_command(convo.agent.project, command, timeout, force)
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\cli.py", line 73, in execute_command
process = run_command(command, project.root_path, q, q_stderr, pid_container)
File "C:\Users\Samsung1\Downloads\gpt-pilot-main\gpt-pilot-main\pilot\helpers\cli.py", line 32, in run_command
preexec_fn=os.setsid,
AttributeError: module 'os' has no attribute 'setsid'
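`os.setsid` only exists on POSIX systems, which is why this fails on Windows. A hedged sketch of a cross-platform wrapper (not the project's actual code) that uses the Windows equivalent, `CREATE_NEW_PROCESS_GROUP`, when `preexec_fn=os.setsid` is unavailable:

```python
import os
import subprocess
import sys

def popen_in_new_group(command, cwd=None):
    """Start `command` in its own process group.

    On POSIX, preexec_fn=os.setsid gives the child a new session so the
    whole group can be signalled later. os.setsid does not exist on
    Windows; CREATE_NEW_PROCESS_GROUP is the rough equivalent there.
    """
    kwargs = {}
    if sys.platform == "win32":
        kwargs["creationflags"] = subprocess.CREATE_NEW_PROCESS_GROUP
    else:
        kwargs["preexec_fn"] = os.setsid
    return subprocess.Popen(
        command, cwd=cwd,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)

proc = popen_in_new_group(["echo", "hello"])
out, _ = proc.communicate()
print(out.decode().strip())  # hello
```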
GPT Pilot may be eligible for the AutoGPT Arena Hacks if it implements the Agent Protocol
There's a Python SDK, and client SDKs are coming; for now, Python seems to be the best supported.
There are other reasons why it would be good to adopt a common interface.
A Task denotes one specific goal for the agent. It can be specific, like:
Create a file named hello.txt and write 'World' to it.
or very broad, like:
Book a flight from Berlin to New York next week, optimize for price and duration.
POST /agent/tasks - for creating tasks
{
  "input": "As a user I want to see 'Hello World' so that I know the app is working",
  "additional_input": { "app_id": "my-app" }
}
additional_input can be any object; for GPT Pilot it might look like:
{
  "app_id": "my-app",
  "user_id": "user"
}
Response: a new task with a generated task_id and empty artifacts:
{
  "task_id": "my-app-1",
  "input": "As a user I want to see 'Hello World' so that I know the app is working",
  "artifacts": []
}
The AgentProtocol task_id would need to be prefixed by the GPT Pilot app_id (and user_id?).
POST /agent/tasks/{id}/steps - for triggering the next step of the task
{
  "input": "step input prompt",
  "additional_input": { }
}
The response is a Step object (see the Step definition below).
GET /agent/tasks - current_page, page_size
GET /agent/tasks/{task_id}
GET /agent/tasks/{task_id}/steps
POST /agent/tasks/{task_id}/steps
GET /agent/tasks/{task_id}/steps/{step_id}
GET /agent/tasks/{task_id}/artifacts
POST /agent/tasks/{task_id}/artifacts
GET /agent/tasks/{task_id}/artifacts/{artifact_id}
Task object:
{
  task_id: str,
  input: str,
  additional_input: {},
  steps: [Step, ...],
  artifacts: [{
    artifact_id: str,
    file_name: str,
    relative_path: str,
  }, ...]
}
Step object:
{
  task_id: str,
  step_id: str,
  input: str,
  additional_input: str,
  name: str,
  status: 'created' | 'completed',
  output: '',
  additional_output: {},
  artifacts: [{ as in Task above }, ...],
  is_last: boolean,
}
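For reference, the Task and Step schemas above could be modelled in Python roughly as follows. The field defaults, and using a dict for Step.additional_input, are my assumptions; the protocol sketch above is ambiguous on that point:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Artifact:
    artifact_id: str
    file_name: str
    relative_path: str

@dataclass
class Step:
    task_id: str
    step_id: str
    input: str
    name: str = ""
    status: str = "created"  # 'created' | 'completed'
    output: str = ""
    additional_input: dict = field(default_factory=dict)
    additional_output: dict = field(default_factory=dict)
    artifacts: List[Artifact] = field(default_factory=list)
    is_last: bool = False

@dataclass
class Task:
    task_id: str
    input: str
    additional_input: dict = field(default_factory=dict)
    steps: List[Step] = field(default_factory=list)
    artifacts: List[Artifact] = field(default_factory=list)

task = Task(task_id="my-app-1",
            input="As a user I want to see 'Hello World' ...")
print(task.task_id, len(task.steps), task.artifacts)  # my-app-1 0 []
```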
I think that Developer.implement_task() is trying to do too much, too quickly.
I've got so many TODOs here because I don't understand what/why it is this way.
def implement_task(self):
    convo_dev_task = AgentConvo(self)
    # TODO: why "This should be a simple version of the app so you don't need to aim to provide a production ready code"?
    # TODO: why `no_microservices`? Is that even applicable?
    task_description = convo_dev_task.send_message('development/task/breakdown.prompt', {
        "name": self.project.args['name'],
        "app_type": self.project.args['app_type'],
        "app_summary": self.project.project_description,
        "clarification": [],
        # TODO: why all stories at once?
        "user_stories": self.project.user_stories,
        # "user_tasks": self.project.user_tasks,
        # TODO: "I'm currently in an empty folder" may not always be true?
        "technologies": self.project.architecture,
        # TODO: `array_of_objects_to_string` does not seem to be used by the prompt template?
        "array_of_objects_to_string": array_of_objects_to_string,
        # TODO: prompt lists `files` if `current_task_index` != 0
        "directory_tree": self.project.get_directory_tree(True),
    })
    task_steps = convo_dev_task.send_message('development/parse_task.prompt', {}, IMPLEMENT_TASK)
    convo_dev_task.remove_last_x_messages(2)
    self.execute_task(convo_dev_task, task_steps, continue_development=True)
(I'm also getting errors about "maximum context length is 8192 tokens" when sending dev_ops/ran_command.prompt - there are a lot of them.)
Changes that I'd like to see:
- Take each of the user_stories and flesh out the body with BDD scenarios using Given/When/Then steps. This would probably be done one at a time.

This command:
pip install -r requirements.txt
throws an error:
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
/data/gpt-pilot/pilot-env/lib/python3.11/site-packages/setuptools/config/setupcfg.py:515: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
running egg_info
creating /tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info
writing /tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info/dependency_links.txt
writing top-level names to /tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info/top_level.txt
writing manifest file '/tmp/pip-pip-egg-info-x2xgedwt/psycopg2.egg-info/SOURCES.txt'
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I keep hitting the limits of my GPT-4 API access, and I'd love to be able to switch to GPT-3.5.
Has any testing been done with that? Is it not delivering what is needed, or would it be possible to allow swapping models via a config item, perhaps? For some things I might need the power and complexity of GPT-4, but I suspect we could make do with GPT-3.5 in some situations.
If you are looking for a powerful and affordable platform for text generation, I highly recommend Vertex AI.
Vertex AI offers a variety of generative models, such as Text Bison, Chat Bison, Code Generation, Code Chat, and Code Completion. These models are fine-tuned for code generation, code chat, and code completion.
The pricing for Vertex AI generative models is very reasonable compared to OpenAI. You only pay for the input and output characters that you use, and the price per 1,000 characters is $0.0005 for most models (see Vertex AI Pricing).
Hi,
Your article is quite interesting. I was wondering if it would be relatively simple to plug in another LLM on the market (like Claude or LLaMA 2)?
If I understood your files properly, https://github.com/Pythagora-io/gpt-pilot/blob/main/pilot/utils/llm_connection.py is in charge of managing the connection. Are there other parts that take care of the communication with the LLM model?
Getting immediate:
API responded with status code: 429. Response text: {
"error": {
"message": "Rate limit reached for 10KTPM-200RPM in organization org-WyXXXXXXXXX on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.",
"type": "tokens",
"param": null,
"code": "rate_limit_exceeded"
}
}
This is my first attempt to access the OpenAI API today, and I am already getting this error. I am running other applications that generate Python code and I am not getting this error there.
I think that execute_step(matching_step, current_step) should be renamed to should_execute_step(arg_step, current_step).
Also, this new test_no_step_arg() that I've written fails - am I misunderstanding the intention?
class TestExecuteStep:
    def test_no_step_arg(self):
        assert execute_step(None, 'project_description') is True
        assert execute_step(None, 'architecture') is True
        assert execute_step(None, 'coding') is True

    def test_skip_step(self):
        assert execute_step('architecture', 'project_description') is False
        assert execute_step('architecture', 'architecture') is True
        assert execute_step('architecture', 'coding') is True

    def test_unknown_step(self):
        assert execute_step('architecture', 'unknown') is False
        assert execute_step('unknown', 'project_description') is False
        assert execute_step('unknown', None) is False
        assert execute_step(None, None) is False
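For what it's worth, here is one implementation sketch under which all of the assertions above (including test_no_step_arg) would pass. The real function in gpt-pilot may be intended to behave differently:

```python
# Ordered pipeline steps; this list is an assumption for the sketch.
STEPS = ['project_description', 'architecture', 'coding']

def should_execute_step(arg_step, current_step):
    """True when current_step is at or after arg_step in the pipeline.

    With no --step argument (arg_step is None), every known step runs.
    Unknown values on either side mean "don't execute".
    """
    if current_step not in STEPS:
        return False
    if arg_step is None:
        return True
    if arg_step not in STEPS:
        return False
    return STEPS.index(current_step) >= STEPS.index(arg_step)
```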
As a user
I want to start GPT Pilot with various types of initial prompts
So that I can build a new app, modify an existing app or debug an issue.
See also #73
A simple chat app with real time communication
Write the word 'Washington' to a .txt file
Issue 89 is done
I created a new API key and added it to my .env file. Do
There was a problem with request to openai API:
API responded with status code: 404. Response text: {
"error": {
"message": "The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.",
"type": "invalid_request_error",
"param": null,
"code": "model_not_found"
}
}
Running docker-compose results in:
> [ 7/10] RUN python -m venv pilot-env:
#0 0.289 Error: [Errno 2] No such file or directory: '/usr/src/app/pilot-env/bin/python'
------
failed to solve: executor failed running [/bin/sh -c python -m venv pilot-env]: exit code: 1
Documentation for setting it up manually also doesn't work, as there is no env example to copy
While GPT Pilot was creating code and attempting to run app.py, an error was displayed indicating that the boolean True was written in lower case (the same happened with False). The code writer would try to debug the error, but it continued to write true or false in lower case. It attempted to correct this several times until I killed the process. Here is a sample of one of the lines where it occurred: app.run(debug=true). It also occurred in DB queries. This was a simple Python/Flask app.
It would be helpful if the program asked whether to continue debugging rather than continuing in a loop; at that prompt we could give input on how to correct the error.
Context:
Traceback (most recent call last):
File ".../gpt-pilot/pilot/main.py", line 35, in
project.start()
File ".../gpt-pilot/pilot/helpers/Project.py", line 81, in start
self.developer.start_coding()
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 32, in start_coding
self.implement_task()
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 53, in implement_task
self.execute_task(convo_dev_task, task_steps, continue_development=True)
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 71, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File ".../gpt-pilot/pilot/helpers/cli.py", line 222, in run_command_until_success
debug(convo, {'command': command, 'timeout': timeout})
File ".../gpt-pilot/pilot/helpers/cli.py", line 242, in debug
success = convo.agent.project.developer.execute_task(
File ".../gpt-pilot/pilot/helpers/agents/Developer.py", line 71, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File ".../gpt-pilot/pilot/helpers/cli.py", line 222, in run_command_until_success
debug(convo, {'command': command, 'timeout': timeout})
File ".../gpt-pilot/pilot/helpers/cli.py", line 237, in debug
debugging_plan = convo.send_message('dev_ops/debug.prompt',
File ".../gpt-pilot/pilot/helpers/AgentConvo.py", line 59, in send_message
raise Exception("OpenAI API error happened.")
Exception: OpenAI API error happened.
Trying to resume gpt-pilot project leads to the same error and quitting.
PS C:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot> & "c:/Users/yensi/Documents/CODING AND DEV/VISUAL STUDIO CODE/gpt-pilot/pilot-env/Scripts/Activate.ps1"
(pilot-env) PS C:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot> & "c:/Users/yensi/Documents/CODING AND DEV/VISUAL STUDIO CODE/gpt-pilot/pilot-env/Scripts/python.exe" "c:/Users/yensi/Documents/CODING AND DEV/VISUAL STUDIO CODE/gpt-pilot/pilot/main.py"
Traceback (most recent call last):
File "c:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot\pilot\main.py", line 31, in
args = init()
^^^^^^
File "c:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot\pilot\main.py", line 17, in init
create_database()
File "c:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot\pilot\database\database.py", line 396, in create_database
conn = psycopg2.connect(
^^^^^^^^^^^^^^^^^
File "C:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot\pilot-env\Lib\site-packages\psycopg2\__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: Connection refused (0x0000274D/10061)
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused (0x0000274D/10061)
Is the server running on that host and accepting TCP/IP connections?
(pilot-env) PS C:\Users\yensi\Documents\CODING AND DEV\VISUAL STUDIO CODE\gpt-pilot>
I don't have any experience with it, but LocalAI might be more attractive for people working in environments where sending source code out to the interwebs is frowned upon.
LocalAI is a drop-in replacement REST API that’s compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs (and not only) locally or on-prem with consumer grade hardware, supporting multiple model families that are compatible with the ggml format. Does not require GPU.
gpt-pilot/pilot/utils/llm_connection.py, line 179 in 538f2e0
When I try to run python db_init.py, the terminal throws this error:
Traceback (most recent call last):
  File "/home/ces-user/Desktop/gpt-pilot/pilot/db_init.py", line 3, in <module>
    from database.database import create_tables, drop_tables
  File "/home/ces-user/Desktop/gpt-pilot/pilot/database/database.py", line 1, in <module>
    from playhouse.shortcuts import model_to_dict
ModuleNotFoundError: No module named 'playhouse'
(playhouse ships with the peewee package, so this usually means the project's requirements weren't installed into the active environment.)
I see an area that needs to be addressed.
Larger applications are characterized by a complex structure. The designer's problem is that such a complex structure must be maintained either in one's head (which is difficult) or in some external tool. This means that without some visualization mode, it will be difficult for the person issuing the commands to write such an application - unless an analyst makes an application model and the programmer then commissions tasks according to that model.
Maybe I'm missing something obvious, but I'm constantly losing the app_id
In instances where GPT Pilot prompts the user if they would like to install a module or other item, if the user types no, it is ignored and the program continues with the install process.
Hi,
I can tell from the code that this is something to be worked on, but I just thought I'd mention that it is the thing that keeps on causing all the apps I've tried to create to fail.
if step['type'] == 'command': TypeError: string indices must be integers
I'm not entirely sure what the issue is, otherwise I'd have a bash at it - I'm not clever, just stubborn! I've tried to print the step variable to the console, but got nothing, so I'm not much use I'm afraid.
Awesome work on gpt-pilot!
I am trying to have it create a Node/JS project. A few things I've observed:
1. With npm install, the timeout of 2000ms is too short for dependencies to install, causing the step to error out and debugging to start.
Example of 1:
--------- EXECUTE COMMAND ----------
Can i execute the command: `npm install --save-dev okta-oidc-js @okta/jwt-verifier @okta/okta-react` with 30000ms timeout?
Restoring user input id 15:
t: 10695ms : CLI ERROR:npm ERR! code E404
t: 10697ms : CLI ERROR:npm ERR! 404 Not Found - GET https://registry.npmjs.org/okta-oidc-js - Not found
t: 10697ms : CLI ERROR:npm ERR! 404
t: 10697ms : CLI ERROR:npm ERR! 404 'okta-oidc-js@*' is not in this registry.
t: 10697ms : CLI ERROR:npm ERR! 404
t: 10697ms : CLI ERROR:npm ERR! 404 Note that you can also install from a
t: 10697ms : CLI ERROR:npm ERR! 404 tarball, folder, http url, or git url.
t: 10697ms : CLI ERROR:
t: 10697ms : CLI ERROR:npm ERR! A complete log of this run can be found in: /Users/tom/.npm/_logs/2023-
Saving file /package.json
Dev step 18
NEEDS_DEBUGGING
Example of 2
Can i execute the command: `npm install --save-dev okta-oidc-js @okta/jwt-verifier @okta/okta-react` with 2000ms timeout?
Restoring user input id 11:
t: 2000ms :
Saving file /package.json
Dev step 11
NEEDS_DEBUGGING
Got incorrect CLI response:
stdout:
It might be good to allow the timeout to be user-configurable?
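A minimal sketch of what a user-configurable timeout could look like, using an assumed COMMAND_TIMEOUT_MS environment variable; this is not an existing gpt-pilot setting:

```python
import os

DEFAULT_TIMEOUT_MS = 2000

def command_timeout(requested_ms=None):
    """Pick a timeout: an explicit value wins, then an env override,
    then the default. COMMAND_TIMEOUT_MS is an assumed variable name."""
    if requested_ms is not None:
        return requested_ms
    return int(os.environ.get("COMMAND_TIMEOUT_MS", DEFAULT_TIMEOUT_MS))

print(command_timeout())       # 2000 when COMMAND_TIMEOUT_MS is unset
print(command_timeout(30000))  # 30000
```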
Windows
(gpt-pilot) D:\AI\gpt-pilot\pilot>python main.py
Traceback (most recent call last):
File "D:\AI\gpt-pilot\pilot\main.py", line 31, in <module>
args = init()
^^^^^^
File "D:\AI\gpt-pilot\pilot\main.py", line 17, in init
create_database()
File "D:\AI\gpt-pilot\pilot\database\database.py", line 396, in create_database
conn = psycopg2.connect(
^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\psycopg2\__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: Connection refused (0x0000274D/10061)
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused (0x0000274D/10061)
Is the server running on that host and accepting TCP/IP connections?
(gpt-pilot) D:\AI\gpt-pilot\pilot>
Full error for context when trying to create a GPT4 connection: -
File "/Users/vijayashok/code/gpt-pilot/pilot/utils/llm_connection.py", line 94, in create_gpt_chat_completion
raise ValueError(f'Too many tokens in messages: {tokens_in_messages}. Please try a different test.')
I believe this error happens fairly deep into development - for me it happened after 90 dev tasks.
Question: each time we make a GPT-4 request, are we sending all previous conversation turns to GPT-4? If that's the case, each subsequent request will have more tokens than the previous one, which will exhaust our GPT quota pretty fast.
Folks, Let me know if any specific details are needed
Thanks!!!
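On the question of growing context: if the whole conversation really is resent on each request, one common mitigation is to trim the oldest non-system messages once a rough token estimate exceeds the model's window. The ~4 characters per token heuristic and the function below are illustrative, not gpt-pilot's actual behaviour:

```python
def estimate_tokens(text: str) -> int:
    # Very rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_conversation(messages, max_tokens=8192, keep_first=1):
    """Drop the oldest messages (after the first `keep_first`,
    usually the system prompt) until the estimate fits."""
    msgs = list(messages)
    def total():
        return sum(estimate_tokens(m["content"]) for m in msgs)
    while total() > max_tokens and len(msgs) > keep_first + 1:
        del msgs[keep_first]  # remove the oldest non-system message
    return msgs

convo = [{"role": "system", "content": "You are a developer."}] + \
        [{"role": "user", "content": "x" * 4000} for _ in range(20)]
trimmed = trim_conversation(convo, max_tokens=8192)
print(len(convo), len(trimmed))  # 21 9
```

A real implementation would count tokens with the model's tokenizer and might summarise the dropped turns rather than discard them.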
Wondering if I can make a feature request? Since there is a database in play, would it be possible to:
This way, a project could be developed from simple to complex in phases.
Also, for the existing main.py: for "Do you want to try make the same request again? If yes, just press ENTER. Otherwise, type 'no'.", perhaps add a flag to 'continuously retry until success' instead of prompting the user to hit ENTER every time.
While building a Flutter app using gpt-pilot, an error occurs when GPT-Pilot attempts to save generated PNG files.
38d5627
flutter create <project_name>
The command executes successfully and the generated files are saved in the database.
An error occurs when trying to save a PNG file.
Saving file \test_flutter\android\app\src\main\res\mipmap-hdpi\ic_launcher.png/ic_launcher.png
--- Logging error ---
...
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3246, in execute_sql
cursor.execute(sql, params or ())
ValueError: A string literal cannot contain NUL (0x00) characters.
(Note: Truncated for brevity.)
Saving file \test_flutter\android\app\src\main\res\mipmap-hdpi\ic_launcher.png/ic_launcher.png
--- Logging error ---
Traceback (most recent call last):
  File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 7117, in get
    return clone.execute(database)[0]
           ~~~~~~~~~~~~~~~~~~~~~~~^^^
  File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 4481, in __getitem__
    return self.row_cache[item]
           ~~~~~~~~~~~~~~^^^^^^
IndexError: list index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6702, in get_or_create
return query.get(), False
^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 7120, in get
raise self.model.DoesNotExist('%s instance matching query does '
database.models.file_snapshot.FileSnapshotDoesNotExist: <Model: FileSnapshot> instance matching query does not exist:
SQL: SELECT "t1"."id", "t1"."created_at", "t1"."updated_at", "t1"."app_id", "t1"."development_step_id", "t1"."file_id", "t1"."content" FROM "file_snapshot" AS "t1" WHERE ((("t1"."app_id" = %s) AND ("t1"."development_step_id" = %s)) AND ("t1"."file_id" = %s)) LIMIT %s OFFSET %s
Params: ['72dda6189f5a4272992f7a9f465369f0', 8, 56, 1, 0]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python311\Lib\logging\__init__.py", line 1113, in emit
stream.write(msg + self.terminator)
File "C:\Python311\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\u03ae' in position 956: character maps to <undefined>
Call stack:
File "C:\Users\nenup\tools\gpt-pilot\pilot\main.py", line 35, in <module>
project.start()
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\Project.py", line 97, in start
self.developer.start_coding()
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 34, in start_coding
self.implement_task(i, dev_task)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 60, in implement_task
self.execute_task(convo_dev_task, task_steps, development_task=development_task, continue_development=True)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 78, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\cli.py", line 266, in run_command_until_success
response = convo.send_message('dev_ops/ran_command.prompt',
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\AgentConvo.py", line 70, in send_message
development_step = save_development_step(self.agent.project, prompt_path, prompt_data, self.messages, response)
File "C:\Users\nenup\tools\gpt-pilot\pilot\database\database.py", line 222, in save_development_step
project.save_files_snapshot(development_step.id)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\Project.py", line 214, in save_files_snapshot
file_snapshot, created = FileSnapshot.get_or_create(
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6708, in get_or_create
return cls.create(**kwargs), True
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6577, in create
inst.save(force_insert=True)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6787, in save
pk = self.insert(**field_dict).execute()
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 1966, in inner
return method(self, database, *args, **kwargs)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2037, in execute
return self._execute(database)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2842, in _execute
return super(Insert, self)._execute(database)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2553, in _execute
cursor = self.execute_returning(database)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2560, in execute_returning
cursor = database.execute(self)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3254, in execute
return self.execute_sql(sql, params)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3243, in execute_sql
logger.debug((sql, params))
Message: ('INSERT INTO "file_snapshot" ("id", "created_at", "updated_at", "app_id", "development_step_id", "file_id", "content") VALUES (%s, %s, %s, %s, %s, %s, %s) RETURNING "file_snapshot"."id"', ['0c054be44f8948b49d0ab5dcec7add7a', datetime.datetime(2023, 9, 5, 11, 10, 50, 724756), datetime.datetime(2023, 9, 5, 11, 10, 50, 724756), '72dda6189f5a4272992f7a9f465369f0', 8, 56, 'PNG\n\x1a\n\x00\x00\x00\nIHDR\x00\x00\x00H\x00\x00\x00H\x08\x03\x00\x00\x00b3Cu\x00\x00\x00\x19tEXtSoftware\x00Adobe ImageReadyqe<\x00\x00\x00PLTE\x00\x00\x00\x01N\x01W)T)T\x01V\x01W)FTT\x01V\x01W\x18v)T=\x002Y\x008d\x01+K\x010S\x01>n\x01>o\x01G\x7f\x01I\x01L\x01N\x01N\x01N\x01Q\x01R\x01R\x01S\x01U\x01U\x01V\x01V\x01W\x01W\x02;g\x02@p\x02Cv\x02I\x03M\x03O\x03P\x16h\x17o\x19x\x1a~\x1b\x1b\x1c)DT\x1a=\x00\x00\x00\x13tRNS\x00\x10\x10\x10\x10PP`````\x19\x10\x00\x00\x00IDATX\x0e@\x10@QT\uf28c\x1d+?f\x08\x0bK"ή\x0fƹ79BILC\x0e9C\x0e9T/p!~\t0Nsx\t\x04%\\\'Jn;8\x16\'Ep^\tJ\x1cG\n8~)8";LIK\x12w\x1cI0N!\x1c%Q>Ȑ.;\x1d2\x12vf\x04\x19\x0b\x10Ԇ\x04ò3lHH2N\x7f\x03\x08JVo\x06dj\x1cpRy5
()V\x04#K$=\x00$#\x04;n\x00\x00\x00\x00IENDB`'])
Arguments: ()
Traceback (most recent call last):
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 7117, in get
return clone.execute(database)[0]
~~~~~~~~~~~~~~~~~~~~~~~^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 4481, in __getitem__
return self.row_cache[item]
~~~~~~~~~~~~~~^^^^^^
IndexError: list index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6702, in get_or_create
return query.get(), False
^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 7120, in get
raise self.model.DoesNotExist('%s instance matching query does '
database.models.file_snapshot.FileSnapshotDoesNotExist: <Model: FileSnapshot> instance matching query does not exist:
SQL: SELECT "t1"."id", "t1"."created_at", "t1"."updated_at", "t1"."app_id", "t1"."development_step_id", "t1"."file_id", "t1"."content" FROM "file_snapshot" AS "t1" WHERE ((("t1"."app_id" = %s) AND ("t1"."development_step_id" = %s)) AND ("t1"."file_id" = %s)) LIMIT %s OFFSET %s
Params: ['72dda6189f5a4272992f7a9f465369f0', 8, 56, 1, 0]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\nenup\tools\gpt-pilot\pilot\main.py", line 35, in <module>
project.start()
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\Project.py", line 97, in start
self.developer.start_coding()
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 34, in start_coding
self.implement_task(i, dev_task)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 60, in implement_task
self.execute_task(convo_dev_task, task_steps, development_task=development_task, continue_development=True)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\agents\Developer.py", line 78, in execute_task
run_command_until_success(data['command'], data['timeout'], convo, additional_message=additional_message)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\cli.py", line 266, in run_command_until_success
response = convo.send_message('dev_ops/ran_command.prompt',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\AgentConvo.py", line 70, in send_message
development_step = save_development_step(self.agent.project, prompt_path, prompt_data, self.messages, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot\database\database.py", line 222, in save_development_step
project.save_files_snapshot(development_step.id)
File "C:\Users\nenup\tools\gpt-pilot\pilot\helpers\Project.py", line 214, in save_files_snapshot
file_snapshot, created = FileSnapshot.get_or_create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6708, in get_or_create
return cls.create(**kwargs), True
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6577, in create
inst.save(force_insert=True)
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 6787, in save
pk = self.insert(**field_dict).execute()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 1966, in inner
return method(self, database, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2037, in execute
return self._execute(database)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2842, in _execute
return super(Insert, self)._execute(database)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2553, in _execute
cursor = self.execute_returning(database)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 2560, in execute_returning
cursor = database.execute(self)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3254, in execute
return self.execute_sql(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\nenup\tools\gpt-pilot\pilot-env\Lib\site-packages\peewee.py", line 3246, in execute_sql
cursor.execute(sql, params or ())
ValueError: A string literal cannot contain NUL (0x00) characters.
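The root cause is that the PNG's raw bytes are written into a text column, and PostgreSQL text literals cannot contain NUL (0x00) bytes. One possible workaround, assuming the schema keeps a TEXT content column rather than switching to a proper BYTEA/BlobField, is to detect binary content and base64-encode it before saving. The helper names and the "base64:" prefix are my invention, not project code:

```python
import base64


def is_binary(content: bytes) -> bool:
    """Heuristic: NUL bytes never appear in valid text files."""
    return b"\x00" in content


def encode_for_text_column(content: bytes) -> str:
    """Base64-encode binary files so they survive a TEXT column;
    plain text is stored as-is (assumes UTF-8)."""
    if is_binary(content):
        return "base64:" + base64.b64encode(content).decode("ascii")
    return content.decode("utf-8")


def decode_from_text_column(stored: str) -> bytes:
    """Reverse encode_for_text_column()."""
    if stored.startswith("base64:"):
        return base64.b64decode(stored[len("base64:"):])
    return stored.encode("utf-8")
```

The cleaner long-term fix would be a peewee BlobField for file snapshots, but the encoding shim avoids a schema migration.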
I get this error when running python main.py in the terminal:
------------------ STARTING NEW PROJECT ----------------------
If you wish to continue with this project in future run:
python main.py app_id=8337ff83-bde9-413b-af89-d141dae36b4f
--------------------------------------------------------------
What is the project name? New Project
Describe your app in as many details as possible. New Sample Project
There was a problem with request to openai API:
API responded with status code: 404. Response text: {
"error": {
"message": "The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.",
"type": "invalid_request_error",
"param": null,
"code": "model_not_found"
}
}
Do you want to try make the same request again? If yes, just press ENTER. Otherwise, type 'no'.
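When the key has no gpt-4 access, one approach is to query GET https://api.openai.com/v1/models for the models the key can actually use, then fall back down a preference list. A sketch of the fallback selection only (the preference order and function name are assumptions, not project code):

```python
def pick_model(available: set[str],
               preferred: tuple[str, ...] = ("gpt-4", "gpt-3.5-turbo-16k", "gpt-3.5-turbo")) -> str:
    """Return the first preferred model the account can actually use.

    `available` would come from the `id` fields returned by
    GET https://api.openai.com/v1/models for this API key.
    """
    for model in preferred:
        if model in available:
            return model
    raise ValueError(f"None of {preferred} are available; "
                     f"set MODEL_NAME to one of {sorted(available)}")
```

Doing this once at startup would turn the opaque 404 into an immediate, actionable message.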
#OPENAI or AZURE
ENDPOINT=OPENAI
OPENAI_API_KEY=xyz
AZURE_API_KEY=
AZURE_ENDPOINT=
MODEL_NAME=gpt-3.5-turbo
MAX_TOKENS=8192
DB_NAME=gpt-pilot
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=postgres
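For reference, KEY=VALUE lines like the .env above are read into a flat key/value mapping. A minimal sketch of such a parser (real projects typically use the python-dotenv package; this is illustrative only):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines; '#' comment lines and blanks are skipped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```

Note that MAX_TOKENS=8192 only makes sense for a model with an 8k context; plain gpt-3.5-turbo (as set in MODEL_NAME here) had a 4,096-token window.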
Operating system - Windows
python --version - Python 3.11.4
Collecting psycopg2==2.9.6 (from -r requirements.txt (line 9))
Using cached psycopg2-2.9.6.tar.gz (383 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
running egg_info
creating /private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info
writing /private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info/PKG-INFO
writing dependency_links to /private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info/dependency_links.txt
writing top-level names to /private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info/top_level.txt
writing manifest file '/private/var/folders/8_/91fb2xx1191gcq8jtvctckk00000gn/T/pip-pip-egg-info-jp1es49t/psycopg2.egg-info/SOURCES.txt'
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I'm running this in a fresh Conda environment on a MacBook Pro (Apple Silicon M2 Max).
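The pip output itself points at two fixes: install the prebuilt psycopg2-binary wheel, or install PostgreSQL so pg_config is on the PATH and the source build succeeds. On an Apple Silicon Mac that might look like this (Homebrew is an assumption):

```shell
# Option 1: use the prebuilt wheel instead of building from source
pip install psycopg2-binary

# Option 2: install PostgreSQL so pg_config is available, then retry
brew install postgresql
pip install -r requirements.txt
```

If requirements.txt pins psycopg2==2.9.6, Option 1 also requires swapping that line for psycopg2-binary==2.9.6.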
Remove function calling so that we can properly test other LLMs
Can gpt-pilot run on Colab?
Please share a Colab link. Thank you.
I found an "advanced" mode, which is not mentioned in arguments.py but is checked for in Architect.py.
I asked it to build a simple "Hello World" script in JavaScript, and it proposed the architecture:
As expected, get_additional_info_from_user() prompted:
Please check this message and say what needs to be changed. If everything is ok just press ENTER
for Node.js, which I accepted, and then again for MongoDB, to which I said "no database is required". It then started treating me as the LLM:
Please check this message and say what needs to be changed. If everything is ok just press ENTER
? You are an experienced software architect. Your expertise is in creating an architecture for an MVP (minimum viable products) that can be developed as fast as possible by using as many ready-made technologies as possible. The technologies that you prefer using when other technologies are not explicitly sp
**Scripts**: You prefer using Node.js for writing scripts that are meant to be ran just with the CLI.
**Backend**: You prefer using Node.js. As no database is required for the specific project, you won't be using any ORM like Mongoose or PeeWee.
**Testing**: To create unit and integration tests, you prefer using Jest for Node.js projects and pytest for Python projects. To create end-to-end tests, you prefer using Cypress.
**Frontend**: You prefer using Bootstrap for creating HTML and CSS while you use plain (vanilla) Javascript.
**Other**: From other technologies, if they are needed for the project, you prefer using cronjob (for making automated tasks), Socket.io for web sockets.
...actually, that was the LLM generating the response.
create_gpt_chat_completion() returns { "text": llmResponse }, but get_additional_info_from_user() adds this object to the updated_messages list, which usually contains strings when the user just presses ENTER to accept.
updated_messages then looks like:
[
'Node.js',
'You are an experienced software architect... **Backend**: You prefer using Node.js. As no database is required for the specific project, you won't be using any ORM like Mongoose or PeeWee.',
'You are an experienced software architect...',
'Bootstrap',
'**Frontend**: You prefer using Bootstrap for creating HTML and CSS while you use TypeScript instead of plain (vanilla) Javascript.'
]
...okay, so now I can see that it is actually updating the original prompt from system_messages/architect.prompt, but the UX is a bit off. Ideally, the user should just see something like:
Okay, as no database is required for this project, I won't be using any ORM like Mongoose or PeeWee.
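A minimal sketch of a fix for the type mismatch: normalize whatever comes back (the {"text": ...} dict from create_gpt_chat_completion() or a plain string the user typed) into a string before appending it to updated_messages. The helper name is mine, not the project's:

```python
def normalize_response(response) -> str:
    """Accept either the raw {'text': ...} dict from the LLM wrapper
    or a plain string typed by the user, and always return a string."""
    if isinstance(response, dict):
        return response.get("text", "")
    return str(response)
```

With every entry guaranteed to be a string, the prompt-rebuilding code can stop special-casing the dict shape, and the user-facing echo can show just the text.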
I'm getting a GPT-4 error because I don't have GPT-4 access. How can I use this with a gpt-3.5 API key?
I've not completed a project yet, as it always crashes after hitting a token limit.
When I restart the app I get the following:
Restoring development step with id 28
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Dev step 28
DONE
Restoring development step with id 29
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Dev step 29
NO
Restoring development step with id 30
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Dev step 30
npm run start
Can you check if the app works?
If you want to run the app, just type "r" and press ENTER
Restoring user input id 23: I want it to figure that out for itself
Restoring development step with id 31
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Dev step 31
NEEDS_DEBUGGING
Restoring development step with id 32
Updated file /Users//SDP/gpt-pilot/workspace/My_App/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/package.json
Updated file /Users//SDP/gpt-pilot/workspace/My_App/models/Trade.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.html
Updated file /Users//SDP/gpt-pilot/workspace/My_App/public/index.js
Updated file /Users//SDP/gpt-pilot/workspace/My_App/routes/index.js
Traceback (most recent call last):
File "/Users//SDP/gpt-pilot/pilot/main.py", line 35, in <module>
project.start()
File "/Users//SDP/gpt-pilot/pilot/helpers/Project.py", line 78, in start
self.developer.start_coding()
File "/Users//SDP/gpt-pilot/pilot/helpers/agents/Developer.py", line 32, in start_coding
self.implement_task()
File "/Users//SDP/gpt-pilot/pilot/helpers/agents/Developer.py", line 53, in implement_task
self.execute_task(convo_dev_task, task_steps, continue_development=True)
File "/Users//SDP/gpt-pilot/pilot/helpers/agents/Developer.py", line 112, in execute_task
self.continue_development(convo)
File "/Users//SDP/gpt-pilot/pilot/helpers/agents/Developer.py", line 140, in continue_development
task_steps = iteration_convo.send_message('development/parse_task.prompt', {}, IMPLEMENT_TASK)
File "/Users//SDP/gpt-pilot/pilot/helpers/AgentConvo.py", line 60, in send_message
response = self.postprocess_response(response, function_calls)
File "/Users//SDP/gpt-pilot/pilot/helpers/AgentConvo.py", line 117, in postprocess_response
response = function_calls['functions'][response['function_calls']['name']]
KeyError: 'start_debugging'
Not sure where to go from here.
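The KeyError means the model asked for a function name ('start_debugging') that isn't registered in function_calls['functions']. A defensive lookup that degrades gracefully instead of crashing might look like this (the dict shapes mirror the traceback, but the fallback behavior is my assumption):

```python
def dispatch_function_call(function_calls: dict, response: dict):
    """Look up the function the LLM asked for; return an error payload
    instead of raising KeyError when the name is unknown."""
    name = response.get("function_calls", {}).get("name")
    handlers = function_calls.get("functions", {})
    handler = handlers.get(name)
    if handler is None:
        # Could also re-prompt the LLM here with the list of valid names.
        return {"error": f"unknown function: {name!r}", "known": sorted(handlers)}
    return handler(**response["function_calls"].get("arguments", {}))
```

Feeding the "known" list back to the model in a retry prompt usually gets it to pick a valid function on the next attempt.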
FWIW, a great upgrade would be a text file logging the app IDs alongside their project names. When the terminal loses its scrollback, it's a massive pain to dig through the debug output and hope you've got the right one!
Absolutely loving what I'm seeing so far. I really want to get to a point where I can see what it outputs, because this is slicker than an oilfield!
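That registry could be as small as one tab-separated line appended per project at startup. A hypothetical sketch (the file name and format are my choice, not anything gpt-pilot does):

```python
from pathlib import Path


def record_app(registry: Path, app_id: str, project_name: str) -> None:
    """Append one 'app_id<TAB>project_name' line per project so app IDs
    survive terminal scrollback loss and can be grepped later."""
    with registry.open("a", encoding="utf-8") as f:
        f.write(f"{app_id}\t{project_name}\n")
```

Resuming then becomes `grep "My_App" apps.log` followed by `python main.py app_id=...`.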
Currently the GPT Pilot Workflow is linear and does not resemble how a real-world project is run:
I'd like to see integration with issue and project management tools such as Jira and GitHub Issues/Projects.
I'd also like to be able to make simple edits, or add new features. I think the flow would look something like this:
(edit: see updated architectural plan in #91)
It would be great to be able to leverage Azure OpenAI Service to get access to the gpt-4-32k model.
Implementation should be relatively easy; if a developer can build this, I can supply testing credentials to help with the integration.
Right now I've tried several projects, and all have errored out because too many tokens were requested.