
jina-ai / langchain-serve


⚡ Langchain apps in production using Jina & FastAPI

Home Page: https://cloud.jina.ai

License: Apache License 2.0

Python 97.88% Dockerfile 0.50% Shell 1.62%
Topics: gpt langchain autonomous-agents fastapi production python autogpt babyagi llm chatbot

langchain-serve's Introduction

⚠️ IMPORTANT NOTICE: This repository is no longer maintained.

⚡ LangChain Apps on Production with Jina & FastAPI 🚀


Jina is an open-source framework for building scalable multimodal AI apps in production. LangChain is another open-source framework for building applications powered by LLMs.

langchain-serve helps you deploy your LangChain apps on Jina AI Cloud in a matter of seconds. You can benefit from the scalability and serverless architecture of the cloud without sacrificing the ease and convenience of local development. And if you prefer, you can also deploy your LangChain apps on your own infrastructure to ensure data privacy. With langchain-serve, you can craft REST/Websocket APIs, spin up LLM-powered conversational Slack bots, or wrap your LangChain apps into FastAPI packages on cloud or on-premises.

Give us a ⭐ and tell us what more you'd like to see!

☁️ LLM Apps as-a-service

langchain-serve currently wraps the following apps as services that can be deployed on Jina AI Cloud with one command.

🔮 AutoGPT-as-a-service

AutoGPT is an "AI agent" that, given a goal in natural language, will attempt to achieve it by breaking it into sub-tasks and using the internet and other tools in an automated loop.

  • Deploy autogpt on Jina AI Cloud with one command

    lc-serve deploy autogpt
    ╭──────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────╮
    │ App ID       │                                           autogpt-6cbd489454                                           │
    ├──────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┤
    │ Phase        │                                                Serving                                                 │
    ├──────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┤
    │ Endpoint     │                                 wss://autogpt-6cbd489454.wolf.jina.ai                                  │
    ├──────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┤
    │ App logs     │                                        dashboards.wolf.jina.ai                                         │
    ├──────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┤
    │ Swagger UI   │                              https://autogpt-6cbd489454.wolf.jina.ai/docs                              │
    ├──────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┤
    │ OpenAPI JSON │                          https://autogpt-6cbd489454.wolf.jina.ai/openapi.json                          │
    ╰──────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────╯
    
  • Integrate autogpt with external services using APIs. Get a flavor of the integration on your CLI with

    lc-serve playground autogpt

🧠 Babyagi-as-a-service

Babyagi is a task-driven autonomous agent that uses LLMs to create, prioritize, and execute tasks. It is a general-purpose AI agent that can be used to automate a wide variety of tasks.

  • Deploy babyagi on Jina AI Cloud with one command

    lc-serve deploy babyagi
  • Integrate babyagi with external services using our Websocket API. Get a flavor of the integration on your CLI with

    lc-serve playground babyagi

🐼 pandas-ai-as-a-service

pandas-ai integrates LLM capabilities into Pandas, to make dataframes conversational in Python code. Thanks to langchain-serve, we can now expose pandas-ai APIs on Jina AI Cloud in just a matter of seconds.

  • Deploy pandas-ai on Jina AI Cloud

    lc-serve deploy pandas-ai
    ╭──────────────┬─────────────────────────────────────────────────────────────────────────────────╮
    │ App ID       │                               pandasai-06879349ca                               │
    ├──────────────┼─────────────────────────────────────────────────────────────────────────────────┤
    │ Phase        │                                     Serving                                     │
    ├──────────────┼─────────────────────────────────────────────────────────────────────────────────┤
    │ Endpoint     │                     wss://pandasai-06879349ca.wolf.jina.ai                      │
    ├──────────────┼─────────────────────────────────────────────────────────────────────────────────┤
    │ App logs     │                             dashboards.wolf.jina.ai                             │
    ├──────────────┼─────────────────────────────────────────────────────────────────────────────────┤
    │ Swagger UI   │                  https://pandasai-06879349ca.wolf.jina.ai/docs                  │
    ├──────────────┼─────────────────────────────────────────────────────────────────────────────────┤
    │ OpenAPI JSON │              https://pandasai-06879349ca.wolf.jina.ai/openapi.json              │
    ╰──────────────┴─────────────────────────────────────────────────────────────────────────────────╯
    
  • Upload your DataFrame to Jina AI Cloud (Optional - you can also use a publicly available CSV)

    • Define your DataFrame in a Python file

      # dataframe.py
      import pandas as pd
      df = pd.DataFrame(some_data)
    • Upload your DataFrame to Jina AI Cloud using <module>:<variable> syntax

      lc-serve util upload-df dataframe:df
  • Conversationalize your DataFrame using pandas-ai APIs. Get a flavor of the integration with a local playground on your CLI with

    lc-serve playground pandas-ai <host>

💬 Question Answer Bot on PDFs

pdf-qna is a simple question-answering bot that uses LLMs to answer questions about PDF documents, showcasing how easy it is to integrate langchain apps on Jina AI Cloud.

  • Deploy pdf-qna on Jina AI Cloud with one command

    lc-serve deploy pdf-qna
  • Get a flavor of the integration with a Streamlit playground on your CLI with

    lc-serve playground pdf-qna
  • Expand the Q&A bot to multiple languages, different document types & integrate with external services using simple REST APIs.

    from typing import List, Union

    from langchain.llms import OpenAI
    from lcserve import serving

    @serving
    def ask(urls: Union[List[str], str], question: str) -> str:
        content = load_pdf_content(urls)      # helper defined in the pdf-qna app
        chain = get_qna_chain(OpenAI())       # helper defined in the pdf-qna app
        return chain.run(input_document=content, question=question)

💪 Features

🎉 LLM Apps in production

🔥 Secure, Scalable, Serverless, Streaming REST/Websocket APIs on Jina AI Cloud.

  • 🌎 Globally available REST/Websocket APIs with automatic TLS certs.
  • 🌊 Stream LLM interactions in real-time with Websockets.
  • 👥 Enable human in the loop for your agents.
  • 💬 Build, deploy & distribute Slack bots built with langchain.
  • 🔑 Protect your APIs with API authorization using Bearer tokens.
  • 📄 Swagger UI, and OpenAPI spec included with your APIs.
  • ⚡️ Serverless, autoscaling apps that scale automatically with your traffic.
  • 🗝️ Secure handling of secrets and environment variables.
  • 📁 Persistent storage (EFS) mounted on your app for your data.
  • ⏱️ Trigger one-time jobs to run asynchronously, allowing for non-blocking execution.
  • 📊 Builtin logging, monitoring, and traces for your APIs.
  • 🤖 No need to change your code, manage Dockerfiles, or worry about infrastructure!

🏠 Self-host LLM Apps with Docker Compose or Kubernetes

🧰 Usage

Let's first install langchain-serve using pip.

pip install langchain-serve

🔄 REST APIs using @serving decorator

👉 Let's go through a step-by-step guide to build, deploy and use a REST API using the @serving decorator.
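
While the full guide lives outside this README, here's a minimal sketch of what a @serving endpoint looks like, for orientation. The prompt and chain below are illustrative placeholders, not the guide's exact code.

# app.py
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

from lcserve import serving

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer concisely: {question}",
)

@serving
def ask(question: str, **kwargs) -> str:
    # Each call builds a simple chain and returns its answer as the response body.
    chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
    return chain.run(question)

Deployed with lc-serve deploy local app (or lc-serve deploy jcloud app), this exposes a POST /ask endpoint that accepts a JSON body like {"question": "...", "envs": {}}, as in the curl example in the authorization section below.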


🤖💬 Build, Deploy & Distribute Slack bots built with LangChain

langchain-serve exposes a @slackbot decorator to quickly build, deploy & distribute LLM-powered Slack bots without worrying about the infrastructure. It provides a simple interface to wrap any langchain app as a Slack bot and makes it accessible to users on a platform they're already comfortable with.
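
As a purely hypothetical sketch (the actual @slackbot handler signature isn't documented in this README, so the import path and the message/return convention below are assumptions, not the library's confirmed interface):

# slack_bot.py -- hypothetical sketch; the handler signature and import are assumptions
from langchain.llms import OpenAI

from lcserve import slackbot

@slackbot
def reply(message: str, **kwargs) -> str:
    # Assumption: the decorated function receives the incoming Slack message
    # text, and its return value is posted back to the channel.
    return OpenAI(temperature=0)(f"Reply helpfully to: {message}")

For the real interface, follow the step-by-step guides referenced below.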

✨ Ready to dive in?

  • There's a step-by-step guide in the repository to help you build your own bot for helping with reasoning.
  • Here's another step-by-step guide to help you chat over your own internal HR-related documents (like onboarding, policies, etc.) with your employees right inside your Slack workspace.

🔐 Authorize your APIs

To add an extra layer of security, we can integrate any custom API authorization by adding an auth argument to the @serving decorator.

from typing import Any

from lcserve import serving

def authorizer(token: str) -> Any:
    if not token == 'mysecrettoken':            # Change this to add your own authorization logic
        raise Exception('Unauthorized')         # Raise an exception if the request is not authorized

    return 'userid'                             # Return any user id or object

@serving(auth=authorizer)
def ask(question: str, **kwargs) -> str:
    auth_response = kwargs['auth_response']     # This will be 'userid'
    return ...

@serving(websocket=True, auth=authorizer)
async def talk(question: str, **kwargs) -> str:
    auth_response = kwargs['auth_response']     # This will be 'userid'
    return ...
🤔 Gotchas about the auth function
  • Should accept only one argument: token.
  • Should raise an Exception if the request is not authorized.
  • Can return any object, which is passed to the decorated function as auth_response under kwargs.
  • Expects Bearer token in the Authorization header of the request.
  • Sample HTTP request with curl:
    curl -X 'POST' 'http://localhost:8080/ask' -H 'Authorization: Bearer mysecrettoken' -d '{ "question": "...", "envs": {} }'
  • Sample WebSocket request with wscat:
    wscat -H "Authorization: Bearer mysecrettoken" -c ws://localhost:8080/talk

🙋‍♂️ Enable streaming & human-in-the-loop (HITL) with WebSockets

HITL for LangChain agents in production can be challenging, since the agents typically run on servers where humans don't have direct access. langchain-serve bridges this gap by enabling WebSocket APIs that allow real-time interaction and feedback between the agent and a human operator.

Check out this example to see how you can enable HITL for your agents.
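
The example above covers the full human-in-the-loop flow. As a smaller illustration of just the streaming half, here's a sketch of a WebSocket endpoint that streams tokens via the streaming_handler callback injected by langchain-serve; the chain is a placeholder, and the callback wiring mirrors the tracing_handler usage shown later in the secrets example.

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

from lcserve import serving

prompt = PromptTemplate(input_variables=["question"], template="{question}")

@serving(websocket=True)
async def talk(question: str, **kwargs) -> str:
    streaming_handler = kwargs.get("streaming_handler")
    chat = ChatOpenAI(streaming=True, temperature=0.0, callbacks=[streaming_handler])
    chain = LLMChain(llm=chat, prompt=prompt)
    # Tokens are pushed to the connected WebSocket client as they are generated;
    # the final answer is also returned once the chain finishes.
    return await chain.arun(question)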

📁 Persistent storage on Jina AI Cloud

Every app deployed on Jina AI Cloud gets persistent storage (EFS) mounted locally, which can be accessed via the workspace kwarg in the @serving function.

import aiofiles
from fastapi import WebSocket

from lcserve import serving

@serving
def store(text: str, **kwargs):
    workspace: str = kwargs.get('workspace')
    path = f'{workspace}/store.txt'
    print(f'Writing to {path}')
    with open(path, 'a') as f:
        f.writelines(text + '\n')
    return 'OK'


@serving(websocket=True)
async def stream(**kwargs):
    workspace: str = kwargs.get('workspace')
    websocket: WebSocket = kwargs.get('websocket')
    path = f'{workspace}/store.txt'
    print(f'Streaming {path}')
    async with aiofiles.open(path, 'r') as f:
        async for line in f:
            await websocket.send_text(line)
    return 'OK'

Here, we are using the workspace to store the incoming text in a file via the REST endpoint and streaming the contents of the file via the WebSocket endpoint.
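
As a quick sanity check against a local deployment (lc-serve deploy local app), the REST endpoint above can be exercised as follows; the JSON body mirroring the function's parameters plus envs is an assumption based on the curl example in the authorization section.

import requests

resp = requests.post(
    "http://localhost:8080/store",
    json={"text": "hello persistent storage", "envs": {}},
)
print(resp.status_code, resp.text)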

🚀 Bring your own FastAPI app

If you already have a FastAPI app with pre-defined endpoints, you can use lc-serve to deploy it on Jina AI Cloud.

lc-serve deploy jcloud --app filename:app 

Let's take an example of a simple FastAPI app with the following directory structure:

.
└── endpoints.py
# endpoints.py
from typing import Union

from fastapi import FastAPI

app = FastAPI()


@app.get("/status")
def read_root():
    return {"Hello": "World"}


@app.get("/items/{item_id}")
def read_item(item_id: int, q: Union[str, None] = None):
    return {"item_id": item_id, "q": q}
lc-serve deploy jcloud --app endpoints:app

🗝️ Using Secrets during Deployment

You can use secrets during app deployment by passing a secrets file to deployment with the --secrets flag. The secrets file should be a .env file containing the secrets.

lc-serve deploy jcloud app --secrets .env

Let's take an example of a simple app that uses OPENAI_API_KEY stored as secrets.

This app directory contains the following files:

.
├── main.py             # The app
├── jcloud.yml          # JCloud deployment config file
├── README.md           # This README file
├── requirements.txt    # The requirements file for the app
└── secrets.env         # The secrets file containing the app's secrets

Note that secrets.env in this directory is a dummy file. You should replace it with your own secrets, for example:

OPENAI_API_KEY=sk-xxx

main.py will look like:

# main.py
from lcserve import serving
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI

prompt = PromptTemplate(
    input_variables=["subject"],
    template="Write me a short poem about {subject}?",
)


@serving(openai_tracing=True)
def poem(subject: str, **kwargs):
    tracing_handler = kwargs.get("tracing_handler")

    chat = ChatOpenAI(temperature=0.5, callbacks=[tracing_handler])
    chain = LLMChain(llm=chat, prompt=prompt, callbacks=[tracing_handler])

    return chain.run(subject)

In the above example, the app will use OPENAI_API_KEY provided by the secrets to interact with OpenAI.

Then you can deploy using the following command and interact with the deployed endpoint.

lc-serve deploy jcloud main --secrets secrets.env

⏱️ Trigger one-time jobs to run asynchronously

Here's a step-by-step guide to trigger one-time jobs to run asynchronously using the @job decorator.
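
The guide covers the details. As a purely hypothetical sketch (the @job import path and calling convention aren't documented in this README; they are assumed here to mirror @serving):

# jobs.py -- hypothetical sketch; the @job interface shown here is an assumption
from lcserve import job

@job
def rebuild_index(source_dir: str, **kwargs):
    # Long-running, one-off work goes here; the caller is not blocked on it.
    ...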

💻 lc-serve CLI

lc-serve is a simple CLI that helps you deploy your agents on Jina AI Cloud (JCloud).

  • Deploy your app locally: lc-serve deploy local app
  • Export your app as Kubernetes YAML: lc-serve export app --kind kubernetes --path .
  • Export your app as Docker Compose YAML: lc-serve export app --kind docker-compose --path .
  • Deploy your app on JCloud: lc-serve deploy jcloud app
  • Deploy a FastAPI app on JCloud: lc-serve deploy jcloud --app <app-name>:<app-object>
  • Update an existing app on JCloud: lc-serve deploy jcloud app --app-id <app-id>
  • Get app status on JCloud: lc-serve status <app-id>
  • List all apps on JCloud: lc-serve list
  • Remove an app on JCloud: lc-serve remove <app-id>
  • Pause an app on JCloud: lc-serve pause <app-id>
  • Resume an app on JCloud: lc-serve resume <app-id>

💡 JCloud Deployment

⚙️ Configurations

For JCloud deployment, you can configure your application infrastructure by providing a YAML configuration file using the --config option. The supported configurations are:

  • Instance type (instance), as defined by Jina AI Cloud.
  • Minimum number of replicas for your application (autoscale_min). Setting it to 0 enables serverless deployment.
  • Disk size (disk_size), in GB. The default value is 1 GB.

For example:

instance: C4
autoscale_min: 0
disk_size: 1.5G

You can alternatively include a jcloud.yaml file in your application directory with the desired configurations. However, note that if the --config option is explicitly passed on the command line, the local jcloud.yaml file is disregarded; the configuration file provided on the command line takes precedence.

If you don't provide a configuration file or a specific configuration isn't specified, the following default settings will be applied:

instance: C3
autoscale_min: 1
disk_size: 1G

💰 Pricing

Applications hosted on JCloud are priced in two categories:

Base credits

  • Base credits are charged to ensure high availability for your application by maintaining at least one instance running continuously, ready to handle incoming requests. If you wish to stop the serving application, you can either remove the app completely or put it on pause; the latter allows you to resume serving based on the persisted configuration (refer to the lc-serve CLI section for more information). Both options halt the consumption of credits.
  • Actual credits charged for base credits are calculated based on the instance type as defined by Jina AI Cloud.
  • By default, instance type C3 is used with a minimum of 1 instance and Amazon EFS disk of size 1G, which means that if your application is served on JCloud, you will be charged ~10 credits per hour.
  • You can change the instance type and the minimum number of instances by providing a YAML configuration file using the --config option. For example, if you want to use instance type C4 with a minimum of 0 replicas, and 2G EFS disk, you can provide the following configuration file:
    instance: C4
    autoscale_min: 0
    disk_size: 2G

Serving credits

  • Serving credits are charged when your application is actively serving incoming requests.
  • Actual credits charged for serving credits are calculated based on the credits for the instance type multiplied by the duration for which your application serves requests.
  • You are charged for each second your application is serving requests.

Total credits charged = Base credits + Serving credits. (Jina AI Cloud defines each credit as €0.005)

Examples

Example 1

Consider an HTTP application that has served requests for 10 minutes in the last hour and uses a custom config:

instance: C4
autoscale_min: 0
disk_size: 2G

Total credits per hour charged would be 3.538. The calculation is as follows:

C4 instance has an hourly credit rate of 20.
EFS has hourly credit rate of 0.104 per GB.
Base credits = 0 + 2 * 0.104 = 0.208 (since `autoscale_min` is 0)
Serving credits = 20 * 10/60 = 3.33
Total credits per hour = 0.208 + 3.33 = 3.538
Example 2

Consider a WebSocket application that had active connections for 20 minutes in the last hour and uses the default configuration.

instance: C3
autoscale_min: 1
disk_size: 1G

Total credits per hour charged would be 13.434. The calculation is as follows:

C3 instance has an hourly credit rate of 10.
EFS has hourly credit rate of 0.104 per GB.
Base credits = 10 + 1 * 0.104 = 10.104 (since `autoscale_min` is 1)
Serving credits = 10 * 20/60 = 3.33
Total credits per hour = 10.104 + 3.33 = 13.434
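
The same arithmetic, expressed as a small helper for experimenting with other configurations. The hourly rates are the figures quoted in the examples above (C3 = 10 credits, C4 = 20 credits, EFS = 0.104 credits per GB).

EFS_RATE = 0.104  # credits per GB per hour

def credits_per_hour(instance_rate: float, autoscale_min: int,
                     disk_gb: float, serving_minutes: float) -> float:
    # Base credits cover the always-on replicas and the EFS disk;
    # serving credits cover the time spent actively handling requests.
    base = autoscale_min * instance_rate + disk_gb * EFS_RATE
    serving = instance_rate * serving_minutes / 60
    return base + serving

print(credits_per_hour(20, 0, 2, 10))   # Example 1: ~3.54
print(credits_per_hour(10, 1, 1, 20))   # Example 2: ~13.44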

❓ Frequently Asked Questions

lc-serve command not found


The lc-serve command is registered during langchain-serve installation. If you get a command not found: lc-serve error, please replace the lc-serve command with python -m lcserve and retry.

My client that connects to the JCloud hosted App gets timed-out, what should I do?


If you make long HTTP/WebSocket requests, the default timeout value (2 minutes) might not be suitable for your use case. You can provide a custom timeout value during JCloud deployment by using the --timeout argument.

Additionally, for HTTP, you may also experience timeouts due to limitations in the OSS we used in langchain-serve. While we are working to permanently address this issue, we recommend using HTTP/1.1 in your client as a temporary workaround.

For WebSocket, please note that the connection will be closed if idle for more than 5 minutes.

How to pass environment variables to the app?


We provide 2 options to pass environment variables:

  1. Use --env during app deployment to load env variables from a .env file. For example, lc-serve deploy jcloud app --env some.env will load all env variables from some.env file and pass them to the app. These env variables will be available in the app as os.environ['ENV_VAR_NAME'].

  2. You can also pass env variables while sending requests to the app both in HTTP and WebSocket. envs field in the request body is used to pass env variables. For example

    {
        "question": "What is the meaning of life?",
        "envs": {
            "ENV_VAR_NAME": "ENV_VAR_VALUE"
        }
    }
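
In either case, the variable is read the same way inside your function. Here's a minimal sketch; that per-request envs also surface via os.environ is an inference from the description above rather than something this README states explicitly.

import os

from lcserve import serving

@serving
def whoami(**kwargs) -> str:
    # Reads ENV_VAR_NAME whether it was set at deploy time (--env) or per request (envs).
    return os.environ.get('ENV_VAR_NAME', 'not set')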

JCloud deployment failed at pushing image to Jina Hubble, what should I do?


Please use --verbose and retry to get more information. If you are operating on a computer with arm64 architecture, please retry with --platform linux/amd64 so the image can be built correctly.

Debug babyagi playground request/response for external integration

  1. Start the textual console in a terminal (exclude the following groups to reduce noise in the logs):

     ```bash
     textual console -x EVENT -x SYSTEM -x DEBUG
     ```

  2. Start the playground with the --verbose flag. Start interacting and follow the logs in the console:

     lc-serve playground babyagi --verbose

📣 Reach out to us

Want to deploy your LLM apps on your own infrastructure with all capabilities of Jina AI Cloud?

  • Serverless
  • Autoscaling
  • TLS certs
  • Persistent storage
  • End to end LLM observability
  • and more on auto-pilot!

Join us on Discord and we'd be happy to hear more about your use case.

langchain-serve's People

Contributors

deepankarm, fogx, hanxiao, jina-bot, notandor, zac-li


langchain-serve's Issues

Error: Invalid value

lc-serve playground pdf-qna
Error: Invalid value: File does not exist: lcserve/playground/pdf_qna/playground.py

How to allow CORS requests?

When I publish my API to Jina using lc-serve deploy jcloud api, my client app complains that CORS is not configured on the remote side. How can this be configured? Thanks!

Add Support for Updating App and Changing Name Simultaneously with lc-serve deploy jcloud

This feature request aims to add support for updating an app and changing its name simultaneously using the lc-serve deploy jcloud command. Currently, updating the app and changing its name requires two separate steps, which can be time-consuming and inefficient for users. I propose adding an optional --name flag to the lc-serve deploy jcloud app command to streamline the process.

Proposed Usage:

lc-serve deploy jcloud app --app-id <prev-app-id> --name

By adding the --name flag to the command, users can provide the desired new name for their app while also updating its content. This change will reduce the steps needed to update and rename an app, making it more user-friendly and efficient.

DEPRECATION: langchain-serve is being installed using the legacy 'setup.py install'

I'm getting the following warning when installing the package:

DEPRECATION: langchain-serve is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559

bug: missing dependencies

Running lc-serve deploy babyagi on Python 3.10 returns the following:

Could not find module lcserve.apps.babyagi.app
Task exception was never retrieved
future: <Task finished name='Task-1' coro=<babyagi() done, defined at /mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/lcserve/__main__.py:170> exception=SystemExit(1)>
Traceback (most recent call last):
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/lcserve/flow.py", line 183, in push_app_to_hubble
    app = import_module(mod)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/lcserve/apps/babyagi/app.py", line 7, in <module>
    from babyagi import BabyAGI, CustomTool, PredefinedTools, get_tools, get_vectorstore
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/lcserve/apps/babyagi/babyagi.py", line 5, in <module>
    import faiss
ModuleNotFoundError: No module named 'faiss'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/lcserve/flow.py", line 31, in wrapper
    return asyncio.run(f(*args, **kwargs))
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/nest_asyncio.py", line 35, in run
    return loop.run_until_complete(task)
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/nest_asyncio.py", line 84, in run_until_complete
    self._run_once()
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/nest_asyncio.py", line 120, in _run_once
    handle._run()
  File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/nest_asyncio.py", line 196, in step
    step_orig(task, exc)
  File "/usr/lib/python3.10/asyncio/tasks.py", line 232, in __step
    result = coro.send(None)
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/lcserve/__main__.py", line 214, in babyagi
    await serve_babyagi_on_jcloud(
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/lcserve/__main__.py", line 76, in serve_babyagi_on_jcloud
    await serve_on_jcloud(
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/lcserve/__main__.py", line 44, in serve_on_jcloud
    gateway_id_wo_tag, is_websocket = push_app_to_hubble(
  File "/mnt/data/work/sandbox/lc-serve-test/env/lib/python3.10/site-packages/lcserve/flow.py", line 192, in push_app_to_hubble
    sys.exit(1)
SystemExit: 1

Issue with lc-serve deploy jcloud

Hello,

When running lc-serve deploy jcloud app with the example code snippet given in the readme (updated agent code app.py, the refactored one), I get the following error:
Exception: DownstreamServiceFailureError: Executor normalization failed(5000): 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte session_id: c006e4bf-e9e8-11ed-bb99-80913334fb46

The session ID is the following:
[yellow bold]c006e4bf-e9e8-11ed-bb99-80913334fb46[/]

  • I have made sure that the file is encoded with UTF-8 and I'm still getting the error
  • lc-serve deploy local app works
  • I'm using Windows 10

Thank you for the help.

Add support for streaming with chains

Would be great to support streaming for Langchain chains - e.g.

streaming_handler = kwargs.get('streaming_handler')

model = ChatOpenAI(
    model='gpt-3.5-turbo',
    temperature=0.0,
    verbose=True,
    streaming=True,  # Pass `streaming=True` to make sure the client receives the data.
    callback_manager=CallbackManager(
        [streaming_handler]
    ),
)
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

Regarding exposing unstructured document loader and langchain FAISS text splitter

Hi,

Thank you for much-needed help on exposing langchain as APIs. I don't use Python, but I need to experiment with functionalities like document loading using unstructured and FAISS, both of which I think are used in the Index module. Could you please provide some documentation, an example, or a hint on whether it's possible to implement a custom POST API that takes, say, a JSON or string input, executes custom langchain functionalities, and returns the output? Would importing the functions in a custom function and putting them under the decorator be enough? Thanks again for this repository.

Thanks

[Bug] Calling a langchain function in a separate util file causes "Error: maximum recursion depth exceeded"

I'm using the LLMSummarizationCheckerChain and put it into a separate util function. When I use this pattern of code and run lc-serve deploy local test_api, I get the error "Error: maximum recursion depth exceeded" when testing the endpoint with the example text "Mammals can lay eggs, birds can lay eggs, therefore birds are mammals" in the Swagger doc.

# test_api.py
import os
from loguru import logger
from lcserve import serving
from test_util import get_fact_check

@serving
def get_fact_check(query: str, **kwargs) -> str:
    #logger.info("Query:", query)
    check_output = get_fact_check(query)
    return check_output
# test_util.py
import os
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMSummarizationCheckerChain

def get_fact_check(query):
    openai_api_key = os.environ['OPENAI_API_KEY']
    llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0)
    checker_chain = LLMSummarizationCheckerChain.from_llm(llm, max_checks=2, verbose=True)
    check_output = checker_chain.run(query)
    return check_output

Note that if I put the logic of the get_fact_check function directly into test_api.py,

# test_api.py
import os
from loguru import logger
from lcserve import serving
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMSummarizationCheckerChain
from langchain_utils import get_fact_check

@serving
def get_fact_check(query: str, **kwargs) -> str:
    logger.info("Query:", query)
    openai_api_key = os.environ['OPENAI_API_KEY']
    llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0)
    checker_chain = LLMSummarizationCheckerChain.from_llm(llm, max_checks=2, verbose=True)
    check_output = checker_chain.run(query)
    return check_output

somehow it's working, which is more confusing...

I'm wondering whether the langchain-serve has some internal scoping conflict with LLMSummarizationCheckerChain during the iteration.

Error: (see the screenshot attached to the original issue)

ImportError: cannot import name 'StructuredTool' from 'langchain.tools'

trying on a MacBook M2

(venv_pdfGPT)  ~/pdfGPT/ [main*] lc-serve deploy local api --platform linux/amd64
Traceback (most recent call last):
File "/opt/homebrew/bin/lc-serve", line 5, in <module>
from lcserve.__main__ import serve
File "/opt/homebrew/lib/python3.11/site-packages/lcserve/__init__.py", line 16, in <module>
from .backend.slackbot import SlackBot
File "/opt/homebrew/lib/python3.11/site-packages/lcserve/backend/slackbot/__init__.py", line 1, in <module>
from .slackbot import SlackBot
File "/opt/homebrew/lib/python3.11/site-packages/lcserve/backend/slackbot/slackbot.py", line 24, in <module>
from langchain.tools import StructuredTool
ImportError: cannot import name 'StructuredTool' from 'langchain.tools' (/Users/andi/Library/Python/3.11/lib/python/site-packages/langchain/tools/__init__.py)
(venv_pdfGPT)  ~/pdfGPT/ [main*] lc-serve deploy local api
Traceback (most recent call last):
File "/opt/homebrew/bin/lc-serve", line 5, in <module>
from lcserve.__main__ import serve
File "/opt/homebrew/lib/python3.11/site-packages/lcserve/__init__.py", line 16, in <module>
from .backend.slackbot import SlackBot
File "/opt/homebrew/lib/python3.11/site-packages/lcserve/backend/slackbot/__init__.py", line 1, in <module>
from .slackbot import SlackBot
File "/opt/homebrew/lib/python3.11/site-packages/lcserve/backend/slackbot/slackbot.py", line 24, in <module>
from langchain.tools import StructuredTool
ImportError: cannot import name 'StructuredTool' from 'langchain.tools' (/Users/andi/Library/Python/3.11/lib/python/site-packages/langchain/tools/__init__.py)

my streaming_handler is None

Here is my code. When I run it in debug mode, I see that streaming_handler is None. In the examples, streaming_handler only appears in the HITL file, and I still don't know how to use it.

@app.route('/api/chatbot', methods=['GET', 'POST'])
@token_required
@serving(websocket=True)
def chatbot(**kwargs) -> str:
    streaming_handler = kwargs.get('streaming_handler')
    input_text = request.data.decode("utf-8")

`@serving` functions in separate files

Hi, I'm having an issue with inconsistent deployment behavior between deploying locally and on jCloud.

For my specific need, I have my app package, and all the @serving functions are declared in separate files in this app package like this:

├── app
│   ├── __init__.py
│   ├── app.py
│   ├── lcapp1.py
│   └── lcapp2.py

The @serving functions are declared in lcapp1.py and lcapp2.py, and if I import them in __init__.py like this:

from .lcapp1 import *

from .lcapp2 import *

__all__ = ["lcapp1function", "lcapp2function"]

And with lc-serve deploy local I will have lcapp1function and lcapp2function APIs ready. But for lc-serve deploy jcloud these APIs will not be created.

I looked up the official implementations of the apps, and they all have an app.py located in their app packages. I wonder if there's a recommended way to declare @serving functions in separate files.

Files within deployment directory are not found in cloud

I have an app.py that reads a vectorstore file vectorstore.pkl in the same directory with:

path = "vectorstore.pkl"
with open(path, "rb") as f:
    ...

While this works fine locally, when deployed to Jina cloud with lc-serve deploy jcloud app, this throws a File not found error. By running the os.listdir() command, I am able to confirm that the vectorstore.pkl is not found in the cloud working directory.

A fix should be implemented such that all files within the deployment directory are included in the cloud working directory. A related issue can be found here: #34.

Interim solution

After chatting with @deepankarm, I solved this issue by using an absolute path with the /appdir/ directory where the vectorstore file lives, i.e., using path = "/appdir/vectorstore.pkl".

HTTP timeout in 60s

Hi all, as I am testing langchain-serve with a llama index in the back-end, I noticed that, while it works fine locally, once deployed (for the first run) the timeout prevents the chain from working as expected.

This has to do with the fact that it needs to create the indexes in the first place and this requires more than 60s.

Cannot install jina because these package versions have conflicting dependencies

Macbook M1

steps:
run pip install langchain-serve

ERROR: Cannot install jina because these package versions have conflicting dependencies.

The conflict is caused by:
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.40b0 depends on opentelemetry-instrumentation==0.40b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.39b0 depends on opentelemetry-instrumentation==0.39b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.38b0 depends on opentelemetry-instrumentation==0.38b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.37b0 depends on opentelemetry-instrumentation==0.37b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.36b0 depends on opentelemetry-instrumentation==0.36b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.35b0 depends on opentelemetry-instrumentation==0.35b0
opentelemetry-instrumentation-aiohttp-client 0.33b0 depends on opentelemetry-instrumentation==0.33b0
opentelemetry-instrumentation-fastapi 0.34b0 depends on opentelemetry-instrumentation==0.34b0

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

Deploy directories through lc-serve

lc-serve deploy jcloud [filename] currently does deploy all files in that directory; however, it's not intuitive because it takes the filename, not the directory name. Ideally, we should be able to deploy the directory and have a separate file for routing / exposing the endpoints within that folder.

Installation

Will someone help me install and run the qna_pdf application locally or through Jina? I would greatly appreciate it.

cant lc serve

'lc-serve' is not recognized as an internal or external command,
operable program or batch file.

Add JSON parameters for babyagi websocket

I am trying to integrate the babyagi websocket socket that I have deployed on jcloud. It would be great to have an example and/or walkthrough integration of the WebSocket (e.g. required parameters to pass) to integrate on the client side.

Allow sharing user details from the authorizer function

Allow sharing user details from the authorizer function - right now the authorization callback function only returns a boolean value indicating whether the auth token was valid or not. We would like to additionally be able to extract the decoded ID from that header. In general, a bit more flexibility around the inputs / outputs of the auth callback would be great for supporting different use cases.

slackbot-demo issue and local deploy issue on Mac M1

Tried to run your default https://github.com/jina-ai/langchain-serve/tree/main/lcserve/apps/slackbot and faced the issue below:

❯ lc-serve deploy slackbot-demo --env .env

⠼ Pushing `/var/folders/w9/h00950gs1jb8fhvzphb53lbh0000gn/T/tmp8uwkazhr` ...🔐 You are logged in to Jina AI as USER (username: APP). To log out, use jina auth logout.

Failed on building Docker image. Potential solutions:
  - If you haven't provide a Dockerfile in the executor bundle, you may want to provide one,
    as the auto-generated one on the cloud did not work.
  - If you have provided a Dockerfile, you may want to check the validity of this Dockerfile.

Please report this session_id: [yellow bold]41eb12b4-32ee-11ee-b328-c210cfc82d17[/] to https://github.com/jina-ai/jina-hubble-sdk/issues
Traceback (most recent call last):
  File "/Users/USER/workspace/APP-chat/venv/bin/lc-serve", line 8, in <module>
    sys.exit(serve())
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/lcserve/flow.py", line 48, in wrapper
    return asyncio.run(f(*args, **kwargs))
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/nest_asyncio.py", line 35, in run
    return loop.run_until_complete(task)
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/nest_asyncio.py", line 90, in run_until_complete
    return f.result()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/futures.py", line 201, in result
    raise self._exception
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/tasks.py", line 256, in __step
    result = coro.send(None)
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/lcserve/__main__.py", line 924, in slackbot_demo
    await serve_slackbot_demo_on_jcloud(
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/lcserve/__main__.py", line 297, in serve_slackbot_demo_on_jcloud
    await serve_on_jcloud(
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/lcserve/__main__.py", line 89, in serve_on_jcloud
    gateway_id = push_app_to_hubble(
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/lcserve/flow.py", line 381, in push_app_to_hubble
    return _push_to_hubble(tmpdir, image_name, tag, platform, verbose, public)
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/lcserve/flow.py", line 351, in _push_to_hubble
    gateway_id = HubIO(args).push().get('id')
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/hubble/__init__.py", line 48, in arg_wrapper
    return func(*args, **kwargs)
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/hubble/executor/hubio.py", line 596, in push
    raise e
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/hubble/executor/hubio.py", line 579, in push
    image = self._send_push_request(
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/hubble/executor/hubio.py", line 399, in _send_push_request
    raise Exception(
Exception: 
Failed on building Docker image. Potential solutions:
  - If you haven't provide a Dockerfile in the executor bundle, you may want to provide one,
    as the auto-generated one on the cloud did not work.
  - If you have provided a Dockerfile, you may want to check the validity of this Dockerfile.
 session_id: 41eb12b4-32ee-11ee-b328-c210cfc82d17

Besides this, on my system, when I try to run the local deploy, it won't run and gives me warnings like below:

❯ lc-serve deploy local app

⠋ Waiting ... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/1 -:--:--Traceback (most recent call last):
  File "/Users/USER/workspace/APP-chat/venv/lib/python3.9/site-packages/lcserve/backend/gateway.py", line 433, in _register_mod
    app_module = import_module(mod)
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/Users/USER/workspace/APP-chat/app.py", line 12, in <module>
    import config
ModuleNotFoundError: No module named 'config'
ERROR  gateway/rep-0@8969 Unable to import module: app as No module named 'config'                                                                                                                                                                                                              [08/04/23 12:02:17]
⠙ Waiting gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/1 0:00:00objc[8969]: +[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called.
objc[8969]: +[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 🎉 Flow is ready to serve! ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
╭────────────── 🔗 Endpoint ───────────────╮
│  ⛓      Protocol                   HTTP  │
│  🏠        Local           0.0.0.0:8080  │
│  🔒      Private         10.0.0.30:8080  │
│  🌍       Public     68.144.94.140:8080  │
╰──────────────────────────────────────────╯
╭─────────── 💎 HTTP extension ────────────╮
│  💬          Swagger UI        .../docs  │
│  📚               Redoc       .../redoc  │
╰──────────────────────────────────────────╯
Do you love open source? Help us improve Jina in just 1 minute and 30 seconds by taking our survey: https://10sw1tcpld4.typeform.com/jinasurveyfeb23?utm_source=jina (Set environment variable JINA_HIDE_SURVEY=1 to hide this message.)

Is there any support for Data persistence?

Hey! We are the team of LaWGPT, and your work is so wonderful!

Now, we have a requirement: LaWGPT needs persistent storage of user question-and-answer data (data collection) for iterative project updates. Is there any support to cope with this problem?

Thank you very much!

Support websockets / SSE for langchain-serve @serving decorator

I'm working on building out a chat application that will need to support WebSockets for a person to chat live with a langchain agent. The current documentation around WebSocket support for deploying endpoints through Jina seemed fragmented, and I was told this could be supported better with the new @serving decorator. Currently that only supports REST endpoints; I would like to request support for WebSockets / SSE to allow for real-time langchain agent response streaming.

lc-serve command not found

(see the screenshot attached to the original issue)

Sorry for the lack of detail, but after creating a jina.ai account and then performing the auth, and installing the project, I still can't seem to locate lc-serve - where is this file supposed to be or where does this script live?

lc-serve deploy local - Doesn't work on Windows 10

I've tried deploying an api.py app on Windows 10 and keep getting the same error. Script works fine on WSL through PowerShell.

Here are my logs:

(myenv) C:\Users\Użytkownik\Desktop\chatpdf>lc-serve deploy local api
⠙ Waiting gateway... ---------------------------------------- 0/1 0:00:04DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html (raised from C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\hubble\executor\requirements.py:7)
DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages (raised from C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\pkg_resources\__init__.py:2871)
DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.logging')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages (raised from C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\pkg_resources\__init__.py:2871)
DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages (raised from C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\pkg_resources\__init__.py:2350)
⠹ Waiting gateway... ---------------------------------------- 0/1 0:00:04DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('mpl_toolkits')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages (raised from C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\pkg_resources\__init__.py:2871)
DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages (raised from C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\pkg_resources\__init__.py:2871)
⠹ Waiting gateway... ---------------------------------------- 0/1 0:00:08ERROR  gateway/rep-0@13296 FileNotFoundError('can not find                                           [08/03/23 19:12:18]
       C:\\Users\\Użytkownik\\AppData\\Local\\Programs\\Python\\Python39\\\nlib\\site-packages\\lcs…
       during 'GatewayRuntime' initialization
        add "--quiet-error" to suppress the exception details
       Traceback (most recent call last):
         File
       "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\serve\exe…
       line 140, in run
           runtime = AsyncNewLoopRuntime(
         File
       "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\serve\run…
       line 90, in __init__
           self._loop.run_until_complete(self.async_setup())
         File
       "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\nest_asyncio.p…
       line 99, in run_until_complete
           return f.result()
         File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\asyncio\futures.py",
       line 201, in result
           raise self._exception
         File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\asyncio\tasks.py",
       line 256, in __step
           result = coro.send(None)
         File
       "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\serve\run…
       line 270, in async_setup
           self.server = self._get_server()
         File
       "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\serve\run…
       line 168, in _get_server
           server = BaseGateway.load_config(
         File
       "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\jaml\__in…
       line 695, in load_config
           stream, s_path = parse_config_source(
         File
       "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\jaml\help…
       line 191, in parse_config_source
           PathImporter.add_modules(module_name)
         File
       "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\importer.…
       line 161, in add_modules
           _path_import(complete_path(m))
         File
       "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\jaml\help…
       line 229, in complete_path
           raise FileNotFoundError(f'can not find {path}')
       FileNotFoundError: can not find C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\
       lib\site-packages\lcserve\servinggateway_config
ERROR  Flow@24540 An exception occurred:                                                            [08/03/23 19:12:18]
ERROR  Flow@24540 Flow is aborted due to ['gateway'] can not be started.
Traceback (most recent call last):
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\Scripts\lc-serve.exe\__main__.py", line 7, in <module>
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\click\core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\lcserve\__main__.py", line 660, in local
    serve_locally(module_str=module_str, fastapi_app_str=app, port=port, env=env)
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\lcserve\__main__.py", line 51, in serve_locally
    with Flow.load_config(f_yaml) as f:
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\orchestrate\orchestrator.py", line 14, in __enter__
    return self.start()
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\orchestrate\flow\builder.py", line 33, in arg_wrapper
    return func(self, *args, **kwargs)
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\orchestrate\flow\base.py", line 1843, in start
    self._wait_until_all_ready()
  File "C:\Users\Użytkownik\AppData\Local\Programs\Python\Python39\lib\site-packages\jina\orchestrate\flow\base.py", line 2009, in _wait_until_all_ready
    raise RuntimeFailToStart
jina.excepts.RuntimeFailToStart
