
lightning-ai / litserve


Deploy AI models at scale. High-throughput serving engine for AI/ML models that uses the latest state-of-the-art model deployment techniques.

Home Page: https://lightning.ai

License: Apache License 2.0

Python 100.00%
Topics: ai, api, serving

litserve's People

Contributors

aniketmaurya, borda, dependabot[bot], lantiga, williamfalcon


litserve's Issues

use async test client for testing

๐Ÿ› Bug

Current test client with threads causing "BlockingPortal not running issue" with asyncio. Migrate the sync test client to async to fix that.
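A minimal sketch of one possible direction, using httpx's in-process ASGI transport (the `app` fixture and the /predict payload are assumptions for illustration, not litserve's actual test setup; requires httpx and pytest-asyncio):

import httpx
import pytest

@pytest.mark.asyncio
async def test_predict(app):
    # Talk to the ASGI app in-process instead of through a thread-based
    # test client, avoiding the BlockingPortal problem described above.
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.post("/predict", json={"input": 4.0})
    assert response.status_code == 200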


readme example code doesn't work

๐Ÿ› Bug

We need to guard the server.run() method under the __main__ block otherwise it leads to the following issue -

ERROR:    Traceback (most recent call last):
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/site-packages/starlette/routing.py", line 738, in lifespan
    async with self.lifespan_context(app) as maybe_state:
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/Users/aniket/Projects/github/litserve/src/litserve/server.py", line 135, in lifespan
    manager = Manager()
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/multiprocessing/context.py", line 57, in Manager
    m.start()
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/multiprocessing/managers.py", line 562, in start
    self._process.start()
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
    return Popen(process_obj)
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/multiprocessing/spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "/Users/aniket/miniconda3/envs/am/lib/python3.10/multiprocessing/spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

ERROR:    Application startup failed. Exiting.

To Reproduce

Run the following code -

# server.py
import litserve as ls

# STEP 1: DEFINE YOUR MODEL API
class SimpleLitAPI(ls.LitAPI):
    def setup(self, device):
        # Setup the model so it can be called in `predict`.
        self.model = lambda x: x**2

    def decode_request(self, request):
        # Convert the request payload to your model input.
        return request["input"]

    def predict(self, x):
        # Run the model on the input and return the output.
        return self.model(x)

    def encode_response(self, output):
        # Convert the model output to a response payload.
        return {"output": output}

# STEP 2: START THE SERVER
api = SimpleLitAPI()
server = ls.LitServer(api, accelerator="gpu")
server.run(port=8000)
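A minimal fix, as described above, is to guard the server startup under the __main__ block (sketch; replace STEP 2 of the snippet above with the following):

# STEP 2: START THE SERVER
# Guard startup so multiprocessing's "spawn" start method can safely
# re-import this module in its child processes.
if __name__ == "__main__":
    api = SimpleLitAPI()
    server = ls.LitServer(api, accelerator="gpu")
    server.run(port=8000)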


Handle streaming loop if client has disconnected

🚀 Feature

If the client disconnects while the response is streaming, we currently still run the loop to completion. We can stop the loop on BrokenPipeError.

x = lit_api.decode_request(x_enc)
y_gen = lit_api.predict(x)
y_enc_gen = lit_api.encode_response(y_gen)
for y_enc in y_enc_gen:
    ####### Detect a disconnected client and stop on BrokenPipeError #######
    try:
        pipe_s.send(y_enc)
    except BrokenPipeError:
        break


cc: @lantiga

Only enable either `predict` or `stream-predict` API based on `stream` argument to LitServer

๐Ÿ› Bug


To Reproduce

Run a server with the `stream=True` flag.


auto accelerator for JAX

Creating this issue as a reference for implementing the `auto` accelerator for JAX users.

          The thing is that we need to be opinionated about how other frameworks decide to support devices and what devices they support. So the semantics of `auto` will need to be framework-specific.

For instance, with JAX you can call `device_put` on a GPU, but there is no MPS. So we can add JAX with a different implementation of `_choose_gpu_accelerator_backend`.

We should actually have framework-specific `_choose_gpu_accelerator_backend` functions, like `_choose_gpu_accelerator_torch`, etc.

Originally posted by @lantiga in #44 (comment)
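For illustration, a hedged sketch of what a JAX-specific check could look like (the function name, the backend probing order, and the CPU fallback are assumptions, not litserve's actual code):

def _choose_gpu_accelerator_jax() -> str:
    import jax

    # jax.devices(backend) raises RuntimeError when that backend is not
    # available on this machine, so probe GPU/TPU and fall back to CPU.
    for backend in ("gpu", "tpu"):
        try:
            if jax.devices(backend):
                return backend
        except RuntimeError:
            continue
    return "cpu"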

default unbatch is always a generator

๐Ÿ› Bug

Since LitAPI.unbatch contains a yield statement, Python always treats it as a generator function, even if that line is never executed. [Reference]

This causes the server to fail silently when dynamic batching is enabled.
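A minimal illustration of the underlying Python behavior (not litserve code):

# Any function containing `yield` is compiled as a generator function,
# even when the yield branch is never reached for a given input.
def unbatch(outputs, stream=False):
    if stream:
        yield from outputs  # never executed when stream=False
    return outputs          # inside a generator, this only ends the iteration

print(unbatch([1, 2, 3]))   # <generator object unbatch at 0x...>, not [1, 2, 3]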

handle exceptions from `LitAPI` in the `run_inference` process

๐Ÿ› Bug

Need to handle user logic for exception. Right now, if the LitAPI methods raise any exception then server doesn't handle it and fails silently. The side effect is that no new request will be processed.

# Pseudo code

def inference_loop():
    while True:
        uid = get_uid()
        ######## User logic #########
        x = lit_api.decode_request(x_enc)
        y = lit_api.predict(x)
        y_enc = lit_api.encode_response(y)
        #############################
        with contextlib.suppress(BrokenPipeError):
            pipe_s.send(y_enc)
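A hedged sketch of the proposed handling, continuing the pseudo code above (the error payload format is an assumption, not litserve's actual implementation):

# Pseudo code
def inference_loop():
    while True:
        uid = get_uid()
        try:
            ######## User logic #########
            x = lit_api.decode_request(x_enc)
            y = lit_api.predict(x)
            y_enc = lit_api.encode_response(y)
            #############################
        except Exception as e:
            # Report the failure for this request instead of letting the
            # worker die silently, so subsequent requests keep being served.
            y_enc = {"error": str(e)}
        with contextlib.suppress(BrokenPipeError):
            pipe_s.send(y_enc)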


Improve canceling via Ctrl+C

Right now the server hangs after a keyboard interrupt with:

ERROR:    Traceback (most recent call last):
  File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/starlette/routing.py", line 743, in lifespan
    await receive()
  File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/uvicorn/lifespan/on.py", line 137, in receive
    return await self.receive_queue.get()
  File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/asyncio/queues.py", line 159, in get
    await getter
asyncio.exceptions.CancelledError

This may be related to cancelling async tasks.

We need to handle termination gracefully.
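One possible direction, sketched under the assumption that the fix involves terminating the inference worker processes on SIGINT before the event loop shuts down (illustrative only; litserve's internals may differ):

import multiprocessing as mp
import signal
import sys

worker_processes: list[mp.Process] = []  # illustrative stand-in for litserve's workers

def _handle_sigint(signum, frame):
    # Stop inference workers first so the parent isn't left waiting on
    # multiprocessing pipes/queues after Ctrl+C.
    for p in worker_processes:
        p.terminate()
        p.join(timeout=5)
    sys.exit(0)

signal.signal(signal.SIGINT, _handle_sigint)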

Running litserve with HTTPS

Thanks for this great project! I'm currently exploring litserve and I'm curious whether it supports HTTPS encryption for communication, similar to what uvicorn offers:

$ uvicorn main:app --ssl-keyfile=./key.pem --ssl-certfile=./cert.pem

504 gateway timeouts

Hi guys,

Using one GPU works better than using 4 GPUs. I am running it on LightningAI, and not even one request goes through at all when running on 4 devices, e.g.:

server = LitServer(SimpleLitAPI(), accelerator="cuda", devices=1, timeout=60)

vs

server = LitServer(SimpleLitAPI(), accelerator="cuda", devices=4, timeout=60)

Any hint?

Add devices="auto"

Right now we have accelerator="auto"; we can now also detect automatically how many devices to run on.
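A minimal sketch of how devices="auto" could be resolved for the CUDA case (illustrative, assuming torch is installed; not litserve's actual implementation):

import torch

def _auto_device_count(accelerator: str) -> int:
    # Use all visible GPUs when running on CUDA, otherwise a single device.
    if accelerator == "cuda" and torch.cuda.is_available():
        return torch.cuda.device_count()
    return 1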

Add a detached mode

🚀 Feature

It would be nice to add a --detach mode, similar to jekyll serve, to detach the session from the terminal.

Motivation

This could be useful for testing and general usability, since the server wouldn't block the main terminal.

Pitch

The equivalent of

jekyll serve --detach
...
      Generating... 
                    done in 3.982 seconds.
 Auto-regeneration: disabled when running server detached.
    Server address: http://127.0.0.1:4000/
Server detached with pid '76023'. Run `pkill -f jekyll' or `kill -9 76023' to stop the server.


avoid sending too much content during batched streaming

Find a way to avoid sending a lot of tokens past the last token for a particular item in the batch (i.e. we need to trim past the EOS in encode_response, let's open an issue and create an example about it)

          as a more immediate improvement, we need to find a way to avoid sending a lot of tokens past the last token for a particular item in the batch (i.e. we need to trim past the EOS in `encode_response`, let's open an issue and create an example about it)

Originally posted by @lantiga in #55 (comment)
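A hedged sketch of trimming past EOS inside encode_response for a batched stream (the EOS token, the per-step batch layout, and the response format are assumptions for illustration):

EOS_TOKEN = "</s>"

def encode_response(self, output_stream):
    finished = None
    for batch_tokens in output_stream:      # one decoded token per batch item
        if finished is None:
            finished = [False] * len(batch_tokens)
        trimmed = []
        for i, tok in enumerate(batch_tokens):
            if finished[i]:
                trimmed.append("")          # send nothing past this item's EOS
                continue
            if tok == EOS_TOKEN:
                finished[i] = True
            trimmed.append(tok)
        yield {"output": trimmed}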

end-to-end tests

Add end-to-end tests for:

  • dynamic batching - addressed by #68
  • dynamic batching with streaming - addressed by #68
  • single prediction
  • single streaming

Disable timeout

How do you disable it?

Also, add a nice message when something times out, to let the user know there is a timeout argument that defaults to 30 seconds, e.g.:

A request timed out. The current timeout is set to 30 seconds.
Change it with LitServer(..., timeout=30).
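For reference, a hedged usage sketch with the existing timeout argument (raising the value is already possible via the constructor shown in the 504 issue above; whether it can be disabled entirely is what this issue asks for):

import litserve as ls

# Raise the per-request timeout from its 30-second default
# (SimpleLitAPI as defined in the readme example above).
server = ls.LitServer(SimpleLitAPI(), timeout=120)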
