kreneskyp / ix

Autonomous GPT-4 agent platform

License: MIT License

Shell 0.05% Dockerfile 0.21% Makefile 0.71% JavaScript 25.77% Python 73.11% HTML 0.08% CSS 0.05% HCL 0.02%
ai gpt-4 openai python

ix's Introduction

iX - Autonomous GPT-4 Agent Platform


midjourney prompt: The ninth planet around the sun





Amidst the swirling sands of the cosmos, Ix stands as an enigmatic jewel, where the brilliance of human ingenuity dances on the edge of forbidden knowledge, casting a shadow of intrigue over the galaxy.

- Atreides Scribe, The Chronicles of Ixian Innovation






🌌 About

IX is a platform for designing and deploying autonomous and [semi]-autonomous LLM-powered agents and workflows. IX provides a flexible and scalable solution for delegating tasks to AI-powered agents. Agents created with the platform can automate a wide variety of tasks while running in parallel and communicating with each other.
Build AI-powered workflows:
  • QA chat bots
  • Code generation
  • Data extraction
  • Data analysis
  • Data augmentation
  • Research assistants

Key Features

🧠 Models

  • OpenAI
  • Google PaLM (Experimental)
  • Anthropic (Experimental)
  • Llama (Experimental)

โš’๏ธ No-code Agent Editor

A no-code editor for creating and testing agents. The editor provides an interface to drag and connect nodes into a graph representing the cognitive logic of an agent. Chat is embedded in the editor to allow for rapid testing and debugging.

(demo video: MetaphorCreateShort_V2.mp4)

💬 Multi-Agent Chat Interface

Create your own teams of agents and interact with them through a single interface. Chat rooms support multiple agents. By default, a chat includes the IX moderator agent, which delegates tasks to other agents. You can @mention specific agents to assign tasks to them.

(demo video: fizzbuzz.mkv_V2.mp4)

💡 Smart Input

The smart input bar auto-completes agent @mentions and file & data {artifacts} created by tasks.

(demo video: smart_input.mkv_V2.mp4)
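Conceptually, the completion logic boils down to prefix-matching against known agent names and artifact keys, depending on the trigger character. The sketch below is an illustration only (the actual frontend is a rich-text JavaScript component, not this Python function):

```python
def complete(fragment: str, agents: list, artifacts: list) -> list:
    """Suggest completions for a partially typed @mention or {artifact}.

    '@' triggers agent-name completion, '{' triggers artifact completion.
    Simplified, hypothetical sketch of the smart input bar's behavior.
    """
    if fragment.startswith("@"):
        prefix, pool = fragment[1:], agents
    elif fragment.startswith("{"):
        prefix, pool = fragment[1:], artifacts
    else:
        return []  # no trigger character, nothing to complete
    return [name for name in pool if name.startswith(prefix)]
```

For example, typing `@co` in a chat with a coder and a moderator agent would suggest only the coder.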

⚡ Message-Queue-Driven Agent Workers

The agent runner backend is dockerized and triggered via a Celery message queue. This allows the backend to scale horizontally to support a fleet of agents running in parallel.

(demo video: WorkerScalingTest_V3)

โš™๏ธ Component Config Layer

IX implements a component config layer that maps LangChain components to the configuration graph. The config layer powers a number of other systems and features. For example, component field and connector definitions are used to render nodes and forms dynamically in the no-code editor.
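As an illustration only (hypothetical field names, not IX's actual schema), a component definition in such a config layer might pair a LangChain class path with field and connector metadata that the editor uses to render a node:

```python
# Hypothetical sketch of a component definition in a config layer.
# Field and connector names are illustrative, not IX's actual schema.
LLM_CHAIN_NODE = {
    "class_path": "langchain.chains.LLMChain",
    "type": "chain",
    # "fields" drive the form rendered in the no-code editor
    "fields": [
        {"name": "verbose", "type": "boolean", "default": False},
    ],
    # "connectors" drive the node's input sockets in the graph
    "connectors": [
        {"key": "llm", "type": "source", "source_type": "llm"},
        {"key": "prompt", "type": "source", "source_type": "prompt"},
    ],
}

def connector_keys(node_def: dict) -> list:
    """Return the connector keys the editor would render as sockets."""
    return [c["key"] for c in node_def["connectors"]]
```

A single definition like this can feed both the editor's node rendering and the loader that instantiates the LangChain component at runtime.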

๐Ÿ› ๏ธ Getting Started

Prerequisites

Windows Subsystem for Linux (Windows only)
  1. Open PowerShell
  2. Run `wsl --install` to install and/or activate WSL

Docker
Install Docker Desktop for your OS:
https://www.docker.com/products/docker-desktop/

Python
Python 3.8 or higher is required for the CLI. The app's Python version is managed by the image.

Agent-IX CLI

The quickest way to start IX is with the agent-ix CLI. The CLI starts a preconfigured docker cluster with docker-compose. It downloads the required images automatically and starts the app cluster.

pip install agent-ix
ix up

Scale agent workers with the scale command. Each worker runs agent processes in parallel. There is no hard limit on the number of workers; it is bounded by available memory and CPU capacity.

ix scale 5

The client may start a specific version, including the unstable dev image built from the master branch.

ix up --version dev

How does it work?

Basic Usage

You chat with an agent that uses your direction to investigate, plan, and complete tasks. Agents are capable of searching the web, writing code, creating images, and interacting with other APIs and services. If it can be coded, it's within the realm of possibility that an agent can be built to assist you.

  1. Setup the server and visit http://0.0.0.0:8000, a new chat will be created automatically with the default agents.

  2. Enter a request and the IX moderator will delegate the task to the agent best suited for the response. Or @mention an agent to request a specific agent to complete the task.

  3. Customized agents may be added or removed from the chat as needed to process your tasks.

Creating Custom Agents and Chains

IX provides the moderator agent IX, a coder agent, and other example agents. Custom agents may be built using the chain editor or the python API.

Chain Editor

  1. Navigate to the chain editor
  2. Click on the root connector to open the component search
  3. Drag agents, chains, tools, and other components into the editor
  4. Connect the components to create a chain
  5. Open the test chat to try it out!

Python API

Chains python API docs

🧙 Development setup

1. Prerequisites

Before getting started, ensure you have the following software installed on your system:

Windows Subsystem for Linux (Windows only)
  1. Open PowerShell
  2. Run `wsl --install` to install and/or activate WSL

Docker
Install Docker Desktop for your OS:
https://www.docker.com/products/docker-desktop/

Git & Make
  • Mac: brew install git make
  • Linux: apt install git make
  • Windows (WSL): apt install git make

2. Clone the repository

git clone https://github.com/kreneskyp/ix.git
cd ix

3. Setup env

Copy the config template to .env and set your OpenAI key in it:

cp .env.template .env

# in .env
OPENAI_API_KEY=YOUR_KEY_HERE

4. Build & Initialize the IX cluster.

The image will build automatically when needed in most cases. Set NO_IMAGE_BUILD=1 to skip rebuilding the image.

Use the image target to build and start the IX images. The dev_setup target will build the frontend and initialize the database. See the developer tool section for more commands to manage the dev environment.

make dev_setup

5. Run the IX cluster

The IX cluster runs using docker-compose. It will start containers for the web server, app server, agent workers, database, redis, and other supporting services.

make cluster

6. View logs

Web and app container logs

make server

Agent worker container logs

make worker

7. Open User Interface

Visit http://0.0.0.0:8000 to access the user interface. From there you may create and edit agents and chains. The platform will automatically spawn agent processes to complete tasks as needed.

Scaling workers

Adjust the number of active agent workers with the scale target. The default is 1 agent worker to handle tasks. There is no hard limit on agents, but the number of workers is limited by available memory and CPU capacity.

make scale N=5

Developer Tools

Here are some helpful commands for developers to set up and manage the development environment:

Running:

  • make up / make cluster: Start the application in development mode at http://0.0.0.0:8000.
  • make server: watch logs for web and app containers.
  • make worker: watch logs for agent worker containers.

Building:

  • make image: Build the Docker image.
  • make frontend: Rebuild the front end (GraphQL, relay, webpack).
  • make webpack: Rebuild JavaScript only.
  • make webpack-watch: Rebuild JavaScript on file changes.
  • make dev_setup: Builds the frontend and initializes the database.
  • make node_types_fixture: Builds database fixture for component type definitions.

Database

  • make migrate: Run Django database migrations.
  • make migrations: Generate new Django database migration files.

Utility

  • make bash: Open a bash shell in the Docker container.
  • make shell: Open a Django shell_plus session.

Agent Fixtures

Dump fixtures with the dump_agent Django command. This command gathers and dumps the agent and chain, including the component graph.

  1. make bash
    
  2. ./manage.py dump_agent -a alias

ix's People

Contributors

billschumacher, eltociear, gwc4github, kreneskyp, sebastienfi


ix's Issues

Convert Chat API to FastAPI

The chains API and frontend need to be converted to Pydantic + FastAPI.

  • Pydantic Types & FastAPI endpoints: #153
    - Create, update, delete chats
    - send_input
  • UI updates
    • conversion to FastAPI
      • ChatInput (send inputs to agent) (in progress)
      • ChatMessages (loading complete messages at page load) (in progress)
      • chats can be deleted #157
    • Chat view should not create chat until a message is sent. Currently created when loading view and generates lots of spam and empty chats. #171

Tasks & Artifacts do not refresh

Tasks and artifacts do not refresh when created by the agent.

Need to implement:

  • graphql subscription class
  • react hook/component for receiving updates

see useChatMessageSubscription for reference

Gracefully handle quota issues

3d-b129-d4d6d8311c50 message_id=bd76ec63-538b-46c2-bec3-a306b0decd27 error_type=RateLimitError
2023-12-22 14:24:45 [2023-12-22 20:24:45,459: ERROR/MainProcess] @@@@ EXECUTE ERROR Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
2023-12-22 14:24:45 [2023-12-22 20:24:45,477: ERROR/MainProcess] File "/var/app/ix/utils/asyncio.py", line 25, in wrapper

This should probably be displayed in the chat.
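One way to surface this in chat (a sketch only; how the formatted message is delivered is left to IX's messaging layer, and the payload shape is taken from the log above) is to map the provider's error payload to a human-readable message instead of only logging it:

```python
def format_quota_error(error: dict) -> str:
    """Turn an OpenAI-style error payload into a chat-friendly message.

    Sketch: the payload shape matches the 429 response logged above.
    """
    err = error.get("error", {})
    if err.get("code") == "insufficient_quota":
        return ("The model provider rejected the request: quota exceeded. "
                "Check your plan and billing details.")
    return f"Model provider error: {err.get('message', 'unknown error')}"

# Payload shape from the logged 429 response (message shortened)
payload = {"error": {"message": "You exceeded your current quota...",
                     "type": "insufficient_quota", "param": None,
                     "code": "insufficient_quota"}}
friendly = format_quota_error(payload)
```

The worker's error handler could then post `friendly` to the chat rather than leaving the raw traceback in the Celery logs.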

cannot start a task

[17/Apr/2023 15:59:58] "POST /graphql HTTP/1.1" 400 289
Bad Request: /graphql
2023-04-17 16:00:14,677 WARNING Bad Request: /graphql
[17/Apr/2023 16:00:14] "POST /graphql HTTP/1.1" 400 289
Bad Request: /graphql
2023-04-17 16:00:15,487 WARNING Bad Request: /graphql
[17/Apr/2023 16:00:15] "POST /graphql HTTP/1.1" 400 289
Bad Request: /graphql
2023-04-17 16:00:15,700 WARNING Bad Request: /graphql
[17/Apr/2023 16:00:15] "POST /graphql HTTP/1.1" 400 289
Bad Request: /graphql
2023-04-17 16:00:15,843 WARNING Bad Request: /graphql
[17/Apr/2023 16:00:15] "POST /graphql HTTP/1.1" 400 289
Screenshot from 2023-04-17 19-02-39

Implicit chain doesn't form from the root

Chains are supposed to be grouped into a SequentialChain whenever they are connected input to output. That's not working when the chain starts at the Chat Input root.


Workaround.

Workaround for now is to use an explicit Sequence node.


Notes

Haven't looked into this yet but I suspect the code that detects when to start forming an implicit sequence is only considering when the root is a property connection.

Examples Not Working

When running the examples, most of them throw an error. It seems the Chat Input is not being parsed correctly. Looking at the example Ingest Url chain, it appears to have an issue passing the Chat Input to the Web Loader. I was able to fix it by using a JSON Transform, but in the example videos it seems to work without one. I built with the latest code on the master branch.

With the JSON Transform, it works.

Is there something else that needs to be set so that the web loader properly reads the chat input?

Don't create chat until a message is sent

New chat workflow currently creates a new chat whenever the "new chat" button is clicked. This was an easy way to start chats but it results in a lot of clutter.

Requirements:

  • new Chat instance should not be created until chat message is entered
  • also handles agents being added / removed from chat before message
  • chat name is updated from first message
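The requirements above can be sketched as follows (a hypothetical Python illustration; the real implementation spans the React frontend and Django backend):

```python
class LazyChatSession:
    """Sketch of a chat session that defers persistence until first message.

    Hypothetical illustration of the requirements above, not IX code.
    """

    def __init__(self):
        self.chat = None            # no Chat row created yet
        self.pending_agents = []    # agents added/removed before creation

    def add_agent(self, agent: str):
        # Agent changes are buffered until the chat actually exists
        if self.chat is None:
            self.pending_agents.append(agent)
        else:
            self.chat["agents"].append(agent)

    def send_message(self, text: str) -> dict:
        if self.chat is None:
            # Create the chat only now, named from the first message
            self.chat = {"name": text[:50],
                         "agents": list(self.pending_agents),
                         "messages": []}
        self.chat["messages"].append(text)
        return self.chat
```

Until `send_message` is called, nothing is persisted, so abandoned "new chat" clicks leave no empty rows behind.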

Feature Request - Better Documentation for using the Python API

I installed ix via its pip package.

I would like to implement my own chains/tools in ix to build my own agents. I do not understand how I am supposed to load my code into ix. The documentation is very sparse on it:

Chains may be generated through the visual editor or a python code run as a management command or via shell_plus. JSON config import is not supported yet.

I tried to run the CLI command ix import path_to_pythonfile with the provided example code, but this throws an error. But I do not think this is what is meant by "management command or via shell_plus" anyway?

x ~/P/ix-devs [1]> ix import ./chains/greeting.py
...
  File "x/.pyenv/versions/3.10.6/lib/python3.10/site-packages/agent_ix/cli.py", line 206, in import_chain
    f"{args.file_path}:/var/fixture.json",
AttributeError: 'Namespace' object has no attribute 'file_path'

Questions

  1. How do I load my own code into ix?
  2. Is it possible to write my own tools with the ix Python API? The web UI states that one can transform a chain into a tool, but not how.
  3. Do I need to use the dev version of ix, or the package?
  4. Is the folder "workdir" only used for artifacts, or should my codebase live in this folder?

Better Documentation

Make the Root/User Input node findable in the component search: When I first tried to create a new agent, I looked up the node graph of one of the existing agents. They all start with the Chat Input node. I tried to search for it in Add Node -> search components, but I could not find it using the keywords input, chat, or user. I could only find it by searching for "root". This is very counterintuitive.

A detailed example of how to use the Python API:

  • How to load a custom chain using the CLI ...other ways
  • How to write a custom tool/chain/skill and make it accessible to ix in the web UI

btw ix looks awesome and very polished, good work

exec /usr/bin/ix/celery.sh: no such file or directory

PS E:\ix2\ix> make worker
docker-compose run --rm web celery.sh
[+] Building 0.0s (0/0)
[+] Creating 2/0
✔ Container ix-redis-1 Running 0.0s
✔ Container ix-db-1 Running 0.0s
[+] Building 0.0s (0/0)
exec /usr/bin/ix/celery.sh: no such file or directory
make: *** [worker] Error 1

pytest pipeline is not mocking OpenAI for all tests

Need to mock OpenAI usage in VectorMemory tests. It's using OpenAI embeddings. Works locally with a key in the environment.

______________________________ test_delete_vector ______________________________

mock_memory = <ix.memory.tests.mock_vector_memory.MockMemory object at 0x7f6d8f43f6d0>

    def test_delete_vector(mock_memory):
>       mock_memory.add_vector(key="key1", text="text1")

ix/memory/tests/test_mock_vector_memory.py:54: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
ix/memory/plugin.py:71: in add_vector
    vector = get_embeddings(text)
ix/memory/plugin.py:14: in get_embeddings
    response = openai.Embedding.create(input=[text], model="text-embedding-ada-002")
/usr/local/lib/python3.11/site-packages/openai/api_resources/embedding.py:33: in create
    response = super().create(*args, **kwargs)
/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py:149: in create
    ) = cls.__prepare_create_request(
/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py:106: in __prepare_create_request
    requestor = api_requestor.APIRequestor(
/usr/local/lib/python3.11/site-packages/openai/api_requestor.py:130: in __init__
    self.api_key = key or util.default_api_key()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    def default_api_key() -> str:
        if openai.api_key_path:
            with open(openai.api_key_path, "rt") as k:
                api_key = k.read().strip()
                if not api_key.startswith("sk-"):
                    raise ValueError(f"Malformed API key in {openai.api_key_path}.")
                return api_key
        elif openai.api_key is not None:
            return openai.api_key
        else:
>           raise openai.error.AuthenticationError(
                "No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://onboard.openai.com/ for details, or email [email protected] if you have any questions."
            )
E           openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://onboard.openai.com/ for details, or email [email protected] if you have any questions.

/usr/local/lib/python3.11/site-packages/openai/util.py:186: AuthenticationError
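The usual fix (sketched here with the stdlib `unittest.mock`; the stand-in functions below are illustrative, the real patch target would be `ix.memory.plugin.get_embeddings` from the traceback above) is to patch the embedding call at the point where the memory plugin looks it up, so tests never reach the OpenAI client:

```python
from unittest.mock import patch

# Stand-in for ix.memory.plugin.get_embeddings, which calls OpenAI.
def get_embeddings(text: str) -> list:
    raise RuntimeError("would hit the OpenAI API")

# Stand-in for the add_vector path that embeds the text before storing it.
def add_vector(key: str, text: str) -> tuple:
    return key, get_embeddings(text)

# In the real test suite this would be:
#   patch("ix.memory.plugin.get_embeddings", return_value=[...])
with patch(f"{__name__}.get_embeddings", return_value=[0.0, 0.1]):
    result = add_vector("key1", "text1")
```

Because `add_vector` resolves `get_embeddings` at call time, patching the module attribute is enough; no API key is needed in CI.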

make for windows?

Very excited to try this package! One question: you list the link for docker containers as the third dependency, but you don't have a link for "make" under windows. Do you have a suggested version of make for Windows that works with your project?

Thanks

Error when run make dev_setup

Hi, thank you for the awesome project.

I encounter this issue when trying to run
make dev_setup

Creating ix_web_run ... done
/usr/bin/env: 'python\r': No such file or directory
/usr/bin/env: use -[v]S to pass options in shebang lines
ERROR: 127
make: *** [Makefile:109: graphene_to_graphql] Error 127

Thank you for your effort.

ImportError Could not import sentence_transformers python package. Please install it with `pip install sentence-transformers`.

Hi, I bootstrapped ix with the Docker solution and tried to build a RAG agent with a HuggingFace embedding, which needs the sentence-transformers Python lib.

But it seems that sentence-transformers is not included in the Docker image, and I got the reply "ImportError: Could not import sentence_transformers python package. Please install it with pip install sentence-transformers."

Am I missing anything here?

Below is the flow under construction:


Support Local LLMs & custom API-Endpoints

I just found out about the impressive work which has been done here.

Is there a plan to support local LLMs and custom API endpoints, or something like openrouter.ai, or even better litellm.ai, the latter being an API proxy/load balancer/router/cache?

THINK/THOUGHT langchain refactor

The introduction of chains introduced several issues with THINK / THOUGHT messages:

  • usage is not reported in responses from langchain chains.
  • a chain can have multiple THINK/THOUGHT messages since there are multiple LLM calls

Plan:

  • update IxCallbackManager to record THINK / THOUGHT for all LLM calls, including usage
  • likely need to update TaskLogMessage to have root and parent
    • root for querying message root message stream
    • parent to build message stream tree
    • IxCallbackManager needs to support building this tree
  • add a RUN_START/END message to represent the bounds of a chain run.
    • Update messages reference parents accordingly
    • Update UI accordingly

Artifact Memory

Implement a langchain memory class + config loader for artifacts.

Artifacts referenced in chat messages should be added to the context when prompting the agent. This ticket is for a basic version with a simple lookup mechanism.

Longer term version should use a similarity search to resolve artifacts. That requires additional work to enable pg_vector and store embeddings for artifacts.
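The basic lookup version might look like this (a plain-Python sketch that mirrors the shape of LangChain's memory interface without importing it; the class and key names are illustrative, not IX code):

```python
import re

class ArtifactMemory:
    """Sketch of a simple artifact lookup memory.

    Resolves {artifact} references in the incoming message against a
    store of artifacts and exposes their content as prompt context.
    A real version would subclass LangChain's memory base class and,
    longer term, use pg_vector similarity search instead of exact keys.
    """

    def __init__(self, artifacts: dict):
        self.artifacts = artifacts  # artifact key -> content

    def load_memory_variables(self, inputs: dict) -> dict:
        # Find {artifact} references in the user input
        keys = re.findall(r"\{(\w+)\}", inputs.get("user_input", ""))
        context = [self.artifacts[k] for k in keys if k in self.artifacts]
        return {"related_artifacts": "\n".join(context)}

memory = ArtifactMemory({"report": "Q3 revenue grew 12%."})
context = memory.load_memory_variables({"user_input": "summarize {report}"})
```

The returned dict would be merged into the prompt inputs when the agent is invoked.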

Add test chat to Chain Editor

The chain editor should have a chat window so a chain can be tested outside a chat.

Notes:
  • should be able to re-use components:
    • useChatStream provides access to message stream over websocket.
    • ChatMessages component encapsulates chat message rendering.
    • ChatInput component encapsulates the rich input bar
    • API provides access to inputs and complete messages
  • places where customization might be needed:
    • may need to create fake Agent and Chat model instances, but otherwise the agent worker should just work.
    • Need to think if/how artifacts are available within editor for testing.
    • chat history doesn't need to persist so may consider excluding this from history somehow.
  • might make sense to tackle this issue after chat APIs are converted to FastAPI #151

Langflow support?

First off, great work! Love the chain builder, it'd be great to see more support for things like Retrieval QA and Doc Loaders and text splitters. Have you considered adding support for Langflow?

WSL manage.py shebang error

/usr/bin/env: 'python\r': No such file or directory
/usr/bin/env: use -[v]S to pass options in shebang lines
make: *** [Makefile:148: graphene_to_graphql] Error 127
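The 'python\r' in the error means the script was checked out with Windows (CRLF) line endings, so the shebang line ends in a carriage return. A likely fix (standard Git and sed usage, not an IX-specific command; shown here on a scratch file) is to strip the carriage returns and stop Git from converting line endings:

```shell
# Demonstrate on a scratch copy: a script whose shebang ends in CRLF
printf '#!/usr/bin/env python\r\nprint("hi")\r\n' > /tmp/manage_crlf.py

# Strip the carriage returns (dos2unix also works); for the real repo,
# run this against manage.py and any other affected scripts
sed -i 's/\r$//' /tmp/manage_crlf.py

# Then prevent Git from re-introducing CRLF on this clone:
#   git config core.autocrlf input
```

After stripping, the kernel sees `#!/usr/bin/env python` rather than `python\r` and the container can execute the script.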

User input sent in quick succession raises an error.

When sending a Korean word to the input, this error is printed.

I think LangChain doesn't support multiple languages.

`
Unexpected Application Error!
Cannot resolve a DOM point from Slate point: {"path":[0,0],"offset":2}
Error: Cannot resolve a DOM point from Slate point: {"path":[0,0],"offset":2}
at Object.toDOMPoint (webpack-internal:///./node_modules/slate-react/dist/index.es.js:736:13)
at Object.toDOMRange (webpack-internal:///./node_modules/slate-react/dist/index.es.js:747:33)
at setDomSelection (webpack-internal:///./node_modules/slate-react/dist/index.es.js:3366:50)
at eval (webpack-internal:///./node_modules/slate-react/dist/index.es.js:3383:23)
at commitHookEffectListMount (webpack-internal:///./node_modules/react-dom/cjs/react-dom.development.js:23145:26)
at commitLayoutEffectOnFiber (webpack-internal:///./node_modules/react-dom/cjs/react-dom.development.js:23268:15)
at commitLayoutMountEffects_complete (webpack-internal:///./node_modules/react-dom/cjs/react-dom.development.js:24683:9)
at commitLayoutEffects_begin (webpack-internal:///./node_modules/react-dom/cjs/react-dom.development.js:24669:7)
at commitLayoutEffects_begin (webpack-internal:///./node_modules/react-dom/cjs/react-dom.development.js:24651:11)
at commitLayoutEffects (webpack-internal:///./node_modules/react-dom/cjs/react-dom.development.js:24607:3)
💿 Hey developer 👋

You can provide a way better UX than this when your app throws errors by providing your own ErrorBoundary or errorElement prop on your route.`

Following is my terminal monitor output:

ix-worker-1 | - Consider the CHAT_HISTORY in your decisions.
ix-worker-1 |
ix-worker-1 | Human: 안녕
ix-worker-1 | [2023-07-07 02:46:54,897: INFO/MainProcess] message='OpenAI API response' path=https://api.openai.com/v1/chat/completions processing_ms=2269 request_id=d2a391363dd202fde2b89b90e455dd01 response_code=200
ix-worker-1 | [2023-07-07 02:46:54,901: WARNING/MainProcess]
ix-worker-1 | > Finished chain.
ix-worker-1 | 2023-07-07 02:46:54 DEBUG Moderator returned response=안녕하세요! 어떻게 도와드릴까요?
ix-worker-1 | [2023-07-07 02:46:54,903: DEBUG/MainProcess] Moderator returned response=안녕하세요! 어떻게 도와드릴까요?
ix-worker-1 | 2023-07-07 02:46:54 DEBUG Response from model, task_id=6c053c94-eccc-4703-a655-80148c7bebb2 response=None
ix-worker-1 | [2023-07-07 02:46:54,946: DEBUG/MainProcess] Response from model, task_id=6c053c94-eccc-4703-a655-80148c7bebb2 response=Non

Streaming Messages

Tracking ticket for streaming responses from AI models.

  • Implement LangChain callbacks (#130)
  • define stream protocol (message objects and streams) #142
  • django channels / graphql subscription (worker connection partially implemented in #130) #142
  • frontend updates #142

JSONDecodeError

Sometimes when I ask it to generate detailed, fully functional code, it gives me
"JSONDecodeError: Invalid control character at: line 4 column 88 (char 574)"

Here is the full stacktrace:

[2023-08-16 08:25:17,286: ERROR/MainProcess] @@@@ EXECUTE ERROR logged as id=cd00c283-ee6e-4dec-b45f-620a7ed9f607 message_id=d6b1d35a-722f-4dab-82b3-6306dd0a71ec error_type=JSONDecodeError
[2023-08-16 08:25:17,286: ERROR/MainProcess] @@@@ EXECUTE ERROR Invalid control character at: line 4 column 88 (char 574)
[2023-08-16 08:25:17,288: ERROR/MainProcess] File "/var/app/ix/utils/asyncio.py", line 25, in wrapper
    return asyncio.get_event_loop().run_until_complete(f(*args, **kwargs))
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 640, in run_until_complete
    self.run_forever()
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
    self._run_once()
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once
    handle._run()
File "/usr/local/lib/python3.11/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
File "/var/app/ix/task_log/tasks/agent_runner.py", line 42, in start_agent_loop
    return await process.start(inputs)
File "/var/app/ix/agents/process.py", line 41, in start
    response = await self.chat_with_ai(inputs)
File "/var/app/ix/agents/process.py", line 67, in chat_with_ai
    await handler.send_error_msg(e)
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 349, in main_wrap
    raise exc_info[1]
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 534, in thread_handler
    raise exc_info[1]
File "/var/app/ix/agents/process.py", line 64, in chat_with_ai
    return await chain.arun(callbacks=[handler], **extra_kwargs, **user_input)
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 556, in arun
    await self.acall(
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 349, in acall
    raise e
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 343, in acall
    await self._acall(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.11/site-packages/langchain/chains/sequential.py", line 118, in _acall
    outputs = await chain.acall(
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 349, in acall
    raise e
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 343, in acall
    await self._acall(inputs, run_manager=run_manager)
File "/var/app/ix/chains/routing.py", line 144, in _acall
    iteration_outputs = await self.chain.arun(
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 556, in arun
    await self.acall(
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 349, in acall
    raise e
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 343, in acall
    await self._acall(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.11/site-packages/langchain/chains/sequential.py", line 118, in _acall
    outputs = await chain.acall(
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 349, in acall
    raise e
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 343, in acall
    await self._acall(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 239, in _acall
    return self.create_outputs(response)[0]
File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 221, in create_outputs
    result = [
File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 224, in <listcomp>
    self.output_key: self.output_parser.parse_result(generation),
File "/var/app/ix/chains/functions.py", line 43, in parse_result
    function_call["arguments"] = json.loads(function_call["arguments"])
File "/usr/local/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.11/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
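The error happens because the model emitted a raw control character (typically a literal newline) inside a JSON string value, which `json.loads` rejects in strict mode. A small demonstration of the failure and the tolerant parse (standard `json` module behavior, not IX-specific code):

```python
import json

# Function-call arguments with a literal newline inside a string value,
# as a model often produces when asked for multi-line code:
raw = '{"code": "line one\nline two"}'

try:
    json.loads(raw)           # strict mode rejects raw control characters
    strict_ok = True
except json.JSONDecodeError:
    strict_ok = False

# strict=False permits control characters inside strings
parsed = json.loads(raw, strict=False)
```

Passing `strict=False` where the function-call arguments are decoded (the `json.loads` call in `ix/chains/functions.py` per the traceback) would be one way to tolerate such output.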

make dev_setup not working

every time I run make dev_setup I get this

docker tag ghcr.io/ix/sandbox:351deee50d1620b5f0d4551ce88cba95 ghcr.io/ix/sandbox:latest
touch .sentinel/image
docker-compose run --rm sandbox ./manage.py graphql_schema --out ./frontend/schema.graphql
Dev Containers CLI: RPC pipe not configured. Message: {"args":["docker-credential-helper","get"],"stdin":"https://index.docker.io/v1/"}
Dev Containers CLI: RPC pipe not configured. Message: {"args":["docker-credential-helper","get"],"stdin":"https://index.docker.io/v1/"}
[+] Running 0/0
⠋ redis Pulling 0.0s
⠋ db Pulling 0.0s
error getting credentials - err: exit status 255, out: ``
make: *** [Makefile:73: graphene_to_graphql] Error 1

I tried again from inside VScode and got a little bit different error

Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "./manage.py": stat ./manage.py: no such file or directory: unknown
make: *** [Makefile:73: graphene_to_graphql] Error 1

I found this issue in manage.py as well

Import "django.core.management" could not be resolved from source Pylance(reportMissingModuleSource) [Ln 11, Col 14|

Help with installation

I have followed the instructions step-by-step but I am still running into errors. I can get the docker cluster to run: db-1, nginx-1, redis-1, web-1, but worker-1 will not stay up, with the following error: exec /usr/bin/ix/celery.sh: no such file or directory

Additionally, when I try to go to the web interface I get just a blank white screen and an error saying /static/js/main.js throws "ERR_CONNECTION_REFUSED", with nginx logging: [error] 29#29: *10 open() "/var/static/js/main.js" failed (2: No such file or directory), client: 172.26.0.1, server: 0.0.0.0, request: "GET /static/js/main.js HTTP/1.1", host: "localhost:8000", referrer: "http://localhost:8000/". nginx is running on 8000, and web-1 doesn't show any errors and is running on 8001.

Any tips?

No command returned by gpt can lead to the worker failing to carry on

In the event that the first task doesn't return a command for some reason (e.g. the prompt was too complex and just required a discussion on designing something), we can end up causing an error in the celery worker as follows:


See complete stack trace below:

[2023-04-15 16:11:59,948: INFO/MainProcess] Task ix.task_log.tasks.agent_runner.start_agent_loop[a1473004-be68-4afd-b9a4-b31d6d3c92a5] received
[2023-04-15 16:11:59,951: INFO/ForkPoolWorker-4] AgentProcess initializing task_id=4
[2023-04-15 16:12:00,091: INFO/ForkPoolWorker-4] intialized command registry
[2023-04-15 16:12:00,106: INFO/ForkPoolWorker-4] AgentProcess loaded n=0 chat messages from persistence
[2023-04-15 16:12:00,119: INFO/ForkPoolWorker-4] AgentProcess initialized
[2023-04-15 16:12:00,119: INFO/ForkPoolWorker-4] starting process loop task_id=4
[2023-04-15 16:12:00,121: INFO/ForkPoolWorker-4] first tick for task_id=4
[2023-04-15 16:12:00,122: INFO/ForkPoolWorker-4] ticking task_id=4
[2023-04-15 16:12:17,475: WARNING/ForkPoolWorker-4] --- Logging error ---
[2023-04-15 16:12:17,483: WARNING/ForkPoolWorker-4] Traceback (most recent call last):
[2023-04-15 16:12:17,484: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 451, in trace_task
    R = retval = fun(*args, **kwargs)
                 ^^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,484: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 734, in __protected_call__
    return self.run(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,485: WARNING/ForkPoolWorker-4]   File "/var/app/ix/task_log/tasks/agent_runner.py", line 13, in start_agent_loop
    return process.start()
           ^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,486: WARNING/ForkPoolWorker-4]   File "/var/app/ix/agents/process.py", line 188, in start
    self.loop(n=authorized_for, tick_input=tick_input)
[2023-04-15 16:12:17,486: WARNING/ForkPoolWorker-4]   File "/var/app/ix/agents/process.py", line 193, in loop
    self.tick(execute=execute)
[2023-04-15 16:12:17,486: WARNING/ForkPoolWorker-4]   File "/var/app/ix/agents/process.py", line 213, in tick
    command = self.command_registry.get(command_name)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,486: WARNING/ForkPoolWorker-4]   File "/var/app/ix/commands/registry.py", line 73, in get
    return self.commands[name]
           ~~~~~~~~~~~~~^^^^^^
[2023-04-15 16:12:17,487: WARNING/ForkPoolWorker-4] KeyError: ''
[2023-04-15 16:12:17,487: WARNING/ForkPoolWorker-4]
During handling of the above exception, another exception occurred:
[2023-04-15 16:12:17,487: WARNING/ForkPoolWorker-4] Traceback (most recent call last):
[2023-04-15 16:12:17,487: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/logging/__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,487: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/logging/__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,488: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/utils/log.py", line 146, in format
    msg = super().format(record)
          ^^^^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,488: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/logging/__init__.py", line 695, in format
    record.exc_text = self.formatException(record.exc_info)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,488: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/utils/log.py", line 142, in formatException
    r = super().formatException(ei)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,488: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/logging/__init__.py", line 645, in formatException
    traceback.print_exception(ei[0], ei[1], tb, None, sio)
[2023-04-15 16:12:17,488: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/traceback.py", line 124, in print_exception
    te = TracebackException(type(value), value, tb, limit=limit, compact=True)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,488: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/traceback.py", line 690, in __init__
    self.stack = StackSummary._extract_from_extended_frame_gen(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,488: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/traceback.py", line 416, in _extract_from_extended_frame_gen
    for f, (lineno, end_lineno, colno, end_colno) in frame_gen:
[2023-04-15 16:12:17,488: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/traceback.py", line 353, in _walk_tb_with_full_positions
    positions = _get_code_position(tb.tb_frame.f_code, tb.tb_lasti)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,488: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/traceback.py", line 366, in _get_code_position
    positions_gen = code.co_positions()
                    ^^^^^^^^^^^^^^^^^
[2023-04-15 16:12:17,489: WARNING/ForkPoolWorker-4] AttributeError: '_Code' object has no attribute 'co_positions'
[2023-04-15 16:12:17,489: WARNING/ForkPoolWorker-4] Call stack:
[2023-04-15 16:12:17,491: WARNING/ForkPoolWorker-4]   File "/usr/local/bin/celery", line 8, in <module>
    sys.exit(main())
[2023-04-15 16:12:17,491: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/__main__.py", line 15, in main
    sys.exit(_main())
[2023-04-15 16:12:17,491: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/bin/celery.py", line 217, in main
    return celery(auto_envvar_prefix="CELERY")
[2023-04-15 16:12:17,491: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/bin/base.py", line 134, in caller
    return f(ctx, *args, **kwargs)
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/bin/worker.py", line 351, in worker
    worker.start()
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/worker/worker.py", line 203, in start
    self.blueprint.start(self)
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/bootsteps.py", line 116, in start
    step.start(parent)
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/bootsteps.py", line 365, in start
    return self.obj.start()
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/concurrency/base.py", line 129, in start
    self.on_start()
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/concurrency/prefork.py", line 109, in on_start
    P = self._pool = Pool(processes=self.limit,
[2023-04-15 16:12:17,492: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/concurrency/asynpool.py", line 463, in __init__
    super().__init__(processes, *args, **kwargs)
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/pool.py", line 1046, in __init__
    self._create_worker_process(i)
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/concurrency/asynpool.py", line 480, in _create_worker_process
    return super()._create_worker_process(i)
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/pool.py", line 1158, in _create_worker_process
    w.start()
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/process.py", line 124, in start
    self._popen = self._Popen(self)
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/context.py", line 333, in _Popen
    return Popen(process_obj)
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/popen_fork.py", line 24, in __init__
    self._launch(process_obj)
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/popen_fork.py", line 79, in _launch
    code = process_obj._bootstrap()
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/process.py", line 327, in _bootstrap
    self.run()
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
[2023-04-15 16:12:17,493: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/pool.py", line 292, in __call__
    sys.exit(self.workloop(pid=pid))
[2023-04-15 16:12:17,494: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/billiard/pool.py", line 362, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
[2023-04-15 16:12:17,494: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 649, in fast_trace_task
    R, I, T, Rstr = tasks[task].__trace__(
[2023-04-15 16:12:17,494: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 468, in trace_task
    I, R, state, retval = on_error(task_request, exc, uuid)
[2023-04-15 16:12:17,494: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 379, in on_error
    R = I.handle_error_state(
[2023-04-15 16:12:17,494: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 178, in handle_error_state
    return {
[2023-04-15 16:12:17,494: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 237, in handle_failure
    self._log_error(task, req, einfo)
[2023-04-15 16:12:17,495: WARNING/ForkPoolWorker-4]   File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 265, in _log_error
    logger.log(policy.severity, policy.format.strip(), context,
[2023-04-15 16:12:17,495: WARNING/ForkPoolWorker-4] Message: 'Task %(name)s[%(id)s] %(description)s: %(exc)s'
Arguments: {'hostname': 'celery@576889d96cb7', 'id': 'a1473004-be68-4afd-b9a4-b31d6d3c92a5', 'name': 'ix.task_log.tasks.agent_runner.start_agent_loop', 'exc': "KeyError('')", 'traceback': 'Traceback (most recent call last):\n  File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 451, in trace_task\n    R = retval = fun(*args, **kwargs)\n                 ^^^^^^^^^^^^^^^^^^^^\n  File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 734, in __protected_call__\n    return self.run(*args, **kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^\n  File "/var/app/ix/task_log/tasks/agent_runner.py", line 13, in start_agent_loop\n    return process.start()\n           ^^^^^^^^^^^^^^^\n  File "/var/app/ix/agents/process.py", line 188, in start\n    self.loop(n=authorized_for, tick_input=tick_input)\n  File "/var/app/ix/agents/process.py", line 193, in loop\n    self.tick(execute=execute)\n  File "/var/app/ix/agents/process.py", line 213, in tick\n    command = self.command_registry.get(command_name)\n              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File "/var/app/ix/commands/registry.py", line 73, in get\n    return self.commands[name]\n           ~~~~~~~~~~~~~^^^^^^\nKeyError: \'\'\n', 'args': '[]', 'kwargs': "{'task_id': 4}", 'description': 'raised unexpected', 'internal': False}
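The root cause in the trace above is `self.commands[name]` raising `KeyError('')` when the model returns an empty command name. A minimal defensive sketch, under the assumption of a registry shaped like the one in `ix/commands/registry.py` (class and method names here are illustrative, not the project's actual code), would treat a missing or empty name as "no command" rather than crashing the worker:

```python
# Hypothetical sketch: tolerate an empty or unknown command name instead of
# letting KeyError('') propagate and kill the agent loop.
class CommandRegistry:
    def __init__(self):
        self.commands = {}

    def register(self, name, func):
        self.commands[name] = func

    def get(self, name):
        # Return None for empty/unknown names so the agent loop can skip
        # the tick (or re-prompt the model) instead of crashing the worker.
        if not name:
            return None
        return self.commands.get(name)


registry = CommandRegistry()
registry.register("write_file", lambda path, text: len(text))
missing = registry.get("")  # the model returned no command
```

The caller in the agent loop would then need a `None` check before executing the command, e.g. re-prompting the model or logging a warning instead of ticking.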

:bug: Unexpected Application Error! agent.chain is null

Outcome

  • I met error same as the title.

Expected outcome

  • I expected to see default agents or be able to remove the agent causing the error.

To duplicate

  1. make dev_setup
  2. make server
  3. make worker
  4. Create an agent without "Chain"
  5. Access agent list

Screenshots

This is when I created an agent,
Screenshot from 2023-06-03 00-17-52

And this is when I accessed agent list.
Screenshot from 2023-06-03 00-14-49

I couldn't remove the agent from the UI. It might be possible to remove it directly in the database, but I haven't looked into that yet.

There is a problem displaying the site

Hello, I have a problem displaying the site. Judging by the error, it can't find the JS file. Can anyone suggest what this is related to?
ix-nginx-1 | 2023/05/28 17:07:01 [error] 23#23: *4 open() "/var/static/js/main.js" failed (2: No such file or directory), client: 172.18.0.1,

Websockets for frontend

The frontend currently polls for updates. The ideal end solution here is adding a GraphQL subscription over websockets.

preliminary work

  • #21 includes all the setup for nginx + uvicorn to provide asgi, websockets, and http2
  • I started working on a websocket handler for graphene, but all the libraries for this are very out of date. I opened a PR (jaydenwindle/graphene-subscriptions#43) with one of the projects, but it isn't complete yet. There are other library options, but they are of similar age.

make-frontend not working: relay-compiler: not found

When I try to run make-frontend or make dev_setup, the process stops at the following step:

docker-compose run --rm sandbox ./manage.py graphql_schema --out ./frontend/schema.graphql
Creating ix_sandbox_run ... done
Successfully dumped GraphQL schema to ./frontend/schema.graphql
docker-compose run --rm sandbox npm run relay
Creating ix_sandbox_run ... done

> [email protected] relay
> relay-compiler

sh: 1: relay-compiler: not found
ERROR: 127
make: *** [compile_relay] Error 127

I'm using docker-compose on macOS 12.6 (Intel i9).

Docker version 20.10.17, build 100c701
docker-compose version 1.29.2, build 5becea4c

Currently on master at commit: "310ff930f1af315562a1df9f3a6efbea79fa3875"

Chat message memory

Implement a memory component that adds TaskLogMessage (chat messages) to the context. It should be implemented as a LangChain memory class and be loadable via config.

There is existing code from the IX v0 agent that queries and loads messages. It may be a partial solution.
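A minimal duck-typed sketch of what such a memory class could look like (all names here are assumptions for illustration; the real component would subclass LangChain's memory base class and query TaskLogMessage rows from the database):

```python
# Hypothetical sketch: a chat-message memory exposing prior messages as a
# "history" context variable, in the spirit of a LangChain memory class.
class TaskLogChatMemory:
    def __init__(self, messages=None, memory_key="history"):
        # In IX this would be loaded from TaskLogMessage; here we just hold
        # an in-memory list of (role, content) pairs.
        self.messages = list(messages or [])
        self.memory_key = memory_key

    @property
    def memory_variables(self):
        return [self.memory_key]

    def load_memory_variables(self, inputs):
        # Render prior messages into a single string for the prompt context.
        history = "\n".join(f"{role}: {content}" for role, content in self.messages)
        return {self.memory_key: history}

    def save_context(self, inputs, outputs):
        # Record one user/assistant exchange.
        self.messages.append(("user", inputs.get("input", "")))
        self.messages.append(("assistant", outputs.get("output", "")))


memory = TaskLogChatMemory()
memory.save_context({"input": "hello"}, {"output": "hi there"})
```

Making it loadable via config would then mostly be a matter of registering the class and its `memory_key` in the chain configuration.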

Database error on dev_setup

Hey! The project looks really promising!

However, I'm getting the following error every time I try to follow the instructions in the README:

django.db.utils.IntegrityError: Problem installing fixtures: insert or update on table "chains_chainnode" violates foreign key constraint "chains_chainnode_node_type_id_506db102_fk_chains_nodetype_id"
DETAIL:  Key (node_type_id)=(7fff77c2-7ccc-4493-9f2b-54244c3d15e0) is not present in table "chains_nodetype".

node_type_id is the same on every attempt

Memory issue

I am trying to add memory to a chain. I want to use ConversationSummaryBufferMemory because the generated information is bound to get large.

Here is my configuration:
image

If I don't add the prompt I get this error:

UnboundLocalError: cannot access local variable 'response' where it is not associated with a value

When I add the prompt as shown I get:

ValidationError
1 validation error for ConversationSummaryBufferMemory root Got unexpected prompt input variables. The prompt expects ['history', 'input'], but it should have {'new_lines', 'summary'}. (type=value_error)

You can see that the config has different field names than 'new_lines' and 'summary':
image

So then I tried the following prompt:
image
And was back to the error:
UnboundLocalError: cannot access local variable 'response' where it is not associated with a value

The documentation seems to expect dict input somewhere, so I am not sure how to use that info:
https://github.com/kreneskyp/ix/blob/master/docs/chains/memory.rst
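For reference, the validation error above says the summarizer prompt must use the input variables `summary` and `new_lines`, not `history`/`input`. A minimal template with those variable names, sketched with plain string formatting rather than the library's PromptTemplate class:

```python
# Sketch of a summarizer template using the variable names the validation
# error asks for: {summary} and {new_lines}.
SUMMARY_TEMPLATE = (
    "Progressively summarize the conversation, adding to the previous summary.\n\n"
    "Current summary:\n{summary}\n\n"
    "New lines of conversation:\n{new_lines}\n\n"
    "New summary:"
)

prompt_text = SUMMARY_TEMPLATE.format(
    summary="The user greeted the assistant.",
    new_lines="Human: What is IX?\nAI: An agent platform.",
)
```

In the node editor, the equivalent would be a prompt whose input variables are exactly `summary` and `new_lines`, wired to the memory's summarizer rather than to the chat prompt.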

main.js permission errors in Nginx

Occasionally, I'm seeing these errors in the nginx container. The app is unusable once these start popping up.

ix-nginx-1 | 2023/08/01 19:46:21 [error] 27#27: *520 open() "/var/static/js/main.js" failed (13: Permission denied), client: 172.18.0.1, server: 0.0.0.0, request: "GET /static/js/main.js HTTP/1.1", host: "localhost:8000", referrer: "http://localhost:8000/"

Bad Request

I got

ix-web-1    | INFO:     172.18.0.5:48972 - "GET / HTTP/1.0" 200 OK
ix-nginx-1  | 172.18.0.1 - - [16/Aug/2023:05:54:30 +0000] "GET / HTTP/1.1" 200 174 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36"
ix-web-1    | INFO:     ('172.18.0.5', 48978) - "WebSocket /graphql-ws/" [accepted]
ix-web-1    | INFO:     connection open
ix-web-1    | Bad Request: /graphql
ix-web-1    | INFO:     172.18.0.5:48982 - "POST /graphql HTTP/1.0" 400 Bad Request
ix-nginx-1  | 172.18.0.1 - - [16/Aug/2023:05:54:30 +0000] "POST /graphql HTTP/1.1" 400 118 "http://localhost:8000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36"
ix-web-1    | INFO:     172.18.0.5:48984 - "GET /favicon.ico HTTP/1.0" 200 OK
ix-nginx-1  | 172.18.0.1 - - [16/Aug/2023:05:54:30 +0000] "GET /favicon.ico HTTP/1.1" 200 174 "http://localhost:8000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36"

When trying to access the app from "http://localhost:8000"

The loading circle just keeps spinning...

ix backend utils does not contain a count_tokens module

The PromptBuilder class uses the num_tokens_from_messages method imported from ix.utils.count_tokens at the top of the file.

However, this file doesn't actually exist on the master branch. The change was made as part of a 'deprecations' commit pushed straight to master. I would advise separate PRs for each change, for an easier audit trail, particularly for deprecations that move from one method to another. That would also have let us see a failing test in the pipeline.

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/graphene_django/settings.py", line 78, in import_from_string
    module = importlib.import_module(module_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/var/app/ix/schema/__init__.py", line 5, in <module>
    from ix.schema.mutations.chat import TaskFeedbackMutation, AuthorizeCommandMutation
  File "/var/app/ix/schema/mutations/chat.py", line 7, in <module>
    from ix.task_log.tasks.agent_runner import (
  File "/var/app/ix/task_log/tasks/__init__.py", line 1, in <module>
    from ix.task_log.tasks.agent_runner import *
  File "/var/app/ix/task_log/tasks/agent_runner.py", line 3, in <module>
    from ix.agents.process import AgentProcess
  File "/var/app/ix/agents/process.py", line 7, in <module>
    from ix.agents.prompt_builder import PromptBuilder
  File "/var/app/ix/agents/prompt_builder.py", line 4, in <module>
    from ix.utils.count_tokens import num_tokens_from_messages
ModuleNotFoundError: No module named 'ix.utils.count_tokens'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/app/./manage.py", line 22, in <module>
    main()
  File "/var/app/./manage.py", line 18, in main
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 404, in run_from_argv
    parser = self.create_parser(argv[0], argv[1])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/django/core/management/base.py", line 367, in create_parser
    self.add_arguments(parser)
  File "/usr/local/lib/python3.11/site-packages/graphene_django/management/commands/graphql_schema.py", line 19, in add_arguments
    default=graphene_settings.SCHEMA,
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/graphene_django/settings.py", line 125, in __getattr__
    val = perform_import(val, attr)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/graphene_django/settings.py", line 64, in perform_import
    return import_from_string(val, setting_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/graphene_django/settings.py", line 87, in import_from_string
    raise ImportError(msg)
ImportError: Could not import 'ix.schema.schema' for Graphene setting 'SCHEMA'. ModuleNotFoundError: No module named 'ix.utils.count_tokens'.
make: *** [graphene_to_graphql] Error 1
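As a local stopgap while the module is missing, a rough shim for `ix/utils/count_tokens.py` could be dropped in so imports resolve. This is purely a hypothetical approximation (roughly 4 characters per token, a common heuristic for English text), not the real tokenizer-based count:

```python
# Hypothetical stopgap for the missing ix.utils.count_tokens module.
# Approximates tokens as ~4 characters each; good enough to unblock
# imports, not for precise budget enforcement.
def num_tokens_from_messages(messages, model="gpt-4"):
    total = 0
    for message in messages:
        # Small fixed overhead per message for role/formatting tokens.
        total += 4
        for value in message.values():
            total += max(1, len(str(value)) // 4)
    return total


count = num_tokens_from_messages(
    [{"role": "user", "content": "Hello, agent!"}]
)
```

The proper fix is restoring the real module (or its replacement) on master so PromptBuilder's import works again.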

celery.sh issue

Unsure of what to try next...

shaunjohann@DESKTOP:/mnt/c/repo/ix$ make worker
docker-compose run --rm web celery.sh
Creating ix_web_run ... done
exec /usr/bin/ix/celery.sh: no such file or directory
ERROR: 1
make: *** [Makefile:136: worker] Error 1
shaunjohann@DESKTOP-6QNCSD0:/mnt/c/repo/ix$

shaunjohann@DESKTOP:/mnt/c/repo/ix$ make bash
docker-compose run --rm web /bin/bash
Creating ix_web_run ... done
root@56bfb0a309b0:/var/app# cd ..
root@56bfb0a309b0:/var# cd ..
root@56bfb0a309b0:/# cd usr/bin/ix
root@56bfb0a309b0:/usr/bin/ix# ls
celery.sh install_node.sh ix_asgi.sh nvm_init.sh webpack.sh
root@56bfb0a309b0:/usr/bin/ix#

Help me Implement Custom Agent for Backlog Optimization

📢 Calling All Problem Solvers! 📢

Given just 3 days, I reverse-engineered a backlog for an aging monolithic app that had none. Using a multi-step manual prompting method, I extracted around 500 user stories. These stories are "isolated", since my method wasn't aware of previously created stories. The goal now is a coherent backlog for a total app rewrite using Domain-Driven Design and microservices. And yes, my deadline to deliver the backlog is 26 October :)

The Challenge:

The backlog has redundant stories, misplaced user stories, and other non-optimal structures. The mission? Process it and create a streamlined version.

  • Clarity: Each User Story should clearly describe the desired outcome from a user's perspective. The "who", "what", and "why" should be evident.
  • Value: The User Stories should clearly deliver value to the end user or the business.
  • Independence: Ideally, User Stories should be independent of each other so they can be developed, tested, and released in any sequence.
  • Epic Granularity: An Epic should represent a larger chunk of work that encompasses multiple related User Stories. It should be thematic and provide a clear narrative of the work.
  • Feasibility: Check if there are any technical challenges or dependencies in the User Stories that need to be addressed.
  • Consistency: Ensure there are no overlapping or contradictory User Stories.
  • Related Components: This column should help in identifying technical components or modules that would be affected. Ensure they are correctly identified.
  • Epic ID: Ensure that the User Stories are correctly mapped to their respective Epics.

Key Steps:

  1. CSV Processing: Load a CSV with ~500 entries consisting of Epics and User Stories. The CSV format is ID, Description, Tags, ParentId. Tags is a comma-separated list categorizing the user story. Description is a typical one-liner about the task: "As..., I need to ..., in order to ...".
  2. Domain Clarification: Post-loading, the agent should discern the software's domain. The agent should ask the user a series of questions to clarify the domain.
  3. Epic Refinement: The agent should propose a systematic approach to restructure epics, ensuring thematic coherence. This step might involve regrouping epics under new or existing thematics.
  4. User Story Refinement: The agent should consider ways to reassign and potentially merge user stories for maximum clarity and relevance.
  5. Tagging Insight: The agent should suggest potential tags for each user story based on its content and its epic. User should have the option to approve, modify, or retry these tag suggestions. Once approved, the agent should apply the tags to the user stories.
  6. Output: The agent's final task is to save this polished backlog as a CSV.
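Step 1 could be sketched roughly as follows, under the stated CSV format (ID, Description, Tags, ParentId; the function name and grouping scheme are assumptions for illustration):

```python
import csv
import io
from collections import defaultdict

# Sketch of step 1: load the backlog CSV and group user stories under
# their parent epics via ParentId.
def load_backlog(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    by_parent = defaultdict(list)
    for row in rows:
        # Tags is a comma-separated list; normalize to a Python list.
        row["Tags"] = [t.strip() for t in row["Tags"].split(",") if t.strip()]
        # Top-level items (epics) have no ParentId and land under None.
        by_parent[row["ParentId"] or None].append(row)
    return rows, by_parent


sample = (
    "ID,Description,Tags,ParentId\n"
    "1,Epic: user accounts,auth,\n"
    '2,"As a user, I need to log in, in order to access my data","auth,login",1\n'
)
rows, by_parent = load_backlog(sample)
```

From there, each epic's grouped stories could be fed to an agent prompt for steps 2 through 5, and the refined rows written back out with `csv.DictWriter` for step 6.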

How would you approach this? Can you envision an agent design flow, or do you have questions to shape the direction? Your insights could be a game-changer, as I am new to IX and don't have a clear vision of the components! 🚀
