
sse-starlette's Introduction

Server Sent Events for Starlette and FastAPI


Implements the Server-Sent Events specification.

Background: https://sysid.github.io/server-sent-events/

Installation:

pip install sse-starlette

Usage:

import asyncio
import uvicorn
from starlette.applications import Starlette
from starlette.routing import Route
from sse_starlette.sse import EventSourceResponse

async def numbers(minimum, maximum):
    for i in range(minimum, maximum + 1):
        await asyncio.sleep(0.9)
        yield dict(data=i)

async def sse(request):
    generator = numbers(1, 5)
    return EventSourceResponse(generator)

routes = [
    Route("/", endpoint=sse)
]

app = Starlette(debug=True, routes=routes)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000, log_level='info')


Caveat: SSE streaming does not work in combination with GZipMiddleware.

Be aware that for proper server shutdown, your application must stop all running tasks (generators). Otherwise you might see the following warning at shutdown: Waiting for background tasks to complete. (CTRL+C to force quit).
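
One way to do that (a minimal sketch, not an official API of this library): keep a module-level stop event, set it in a Starlette shutdown hook, and check it inside your generators. This reuses numbers and routes from the usage example above.

import asyncio

stop_event = asyncio.Event()

async def numbers(minimum, maximum):
    for i in range(minimum, maximum + 1):
        if stop_event.is_set():
            break  # end the stream so shutdown is not blocked
        await asyncio.sleep(0.9)
        yield dict(data=i)

async def on_shutdown():
    stop_event.set()  # tell every running generator to wind down

app = Starlette(debug=True, routes=routes, on_shutdown=[on_shutdown])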

Client disconnects need to be handled in your Request handler (see example.py):

import asyncio
import logging

from starlette.requests import Request
from sse_starlette.sse import EventSourceResponse

_log = logging.getLogger(__name__)

async def endless(req: Request):
    async def event_publisher():
        i = 0
        try:
            while True:
                i += 1
                yield dict(data=i)
                await asyncio.sleep(0.2)
        except asyncio.CancelledError as e:
            _log.info(f"Disconnected from client (via refresh/close) {req.client}")
            # Do any other cleanup, if any
            raise e

    return EventSourceResponse(event_publisher())

Special use cases

Customize Ping

By default, the server sends a ping every 15 seconds. You can customize this by:

  1. setting the ping parameter, or
  2. changing the ping event to a comment event, so that it is not visible to the client:
@router.get("")
async def handle():
    generator = numbers(1, 100)
    return EventSourceResponse(
        generator,
        headers={"Server": "nini"},
        ping=5,
        ping_message_factory=lambda: ServerSentEvent(**{"comment": "You can't see\r\nthis ping"}),
    )

SSE Send Timeout

To avoid 'hanging' connections when the HTTP connection from a client is kept open but the client has stopped reading from it, you can specify a send timeout (see #89).

EventSourceResponse(..., send_timeout=5)  # terminate hanging send call after 5s

Fan out Proxies

Fan-out proxies usually rely on the response being cacheable. To support that, you can set the value of the Cache-Control header. For example:

return EventSourceResponse(
        generator(), headers={"Cache-Control": "public, max-age=29"}
    )

Error Handling

See example: examples/error_handling.py
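
The example file is not reproduced here; as a rough sketch of one common pattern (the "error" event name below is an application-level convention, not something the library defines), catch failures inside the generator and emit a final error event:

import asyncio

from sse_starlette.sse import EventSourceResponse

async def safe_numbers(minimum, maximum):
    try:
        for i in range(minimum, maximum + 1):
            await asyncio.sleep(0.9)
            if i == 3:
                raise ValueError("boom")  # simulate a mid-stream failure
            yield dict(data=i)
    except Exception as exc:
        # surface the failure to the client instead of silently closing
        yield dict(event="error", data=str(exc))

async def sse(request):
    return EventSourceResponse(safe_numbers(1, 5))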

Sending Responses without Async Generators

Async generators can expose tricky error and cleanup behavior especially when they are interrupted.

Background: Cleanup in async generators.

Example no_async_generators.py shows an alternative implementation that does not rely on async generators but instead uses memory channels (examples/no_async_generators.py).
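
Condensed from that example, the shape of the approach looks roughly like this (a sketch; details may differ between versions):

from functools import partial

import anyio
from anyio.streams.memory import MemoryObjectSendStream
from sse_starlette.sse import EventSourceResponse

async def endpoint(request):
    send_chan, recv_chan = anyio.create_memory_object_stream(10)

    async def producer(inner_send_chan: MemoryObjectSendStream):
        # Push events into the channel; closing it ends the response.
        async with inner_send_chan:
            for i in range(5):
                await inner_send_chan.send(dict(data=i))
                await anyio.sleep(1.0)

    return EventSourceResponse(
        recv_chan, data_sender_callable=partial(producer, send_chan)
    )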

Development, Contributing

  1. install pdm: pip install pdm
  2. install dependencies: pdm install -d
  3. To run tests:

Makefile

  • make sure your virtualenv is active
  • check Makefile for available commands and development support, e.g. run the unit tests:
make test
make tox

For integration testing you can use the provided examples in tests and examples.

If you are using Postman, please see: #47 (comment)

sse-starlette's People

Contributors

alairock, blodow, cyprienc, ejlangev, fancyweb, gagantrivedi, havardthom, jakkdl, justindujardin, maksimzayats, metakot, nekonoshiri, paxcodes, phagara, synodriver, sysid, truh, uspike, vaibhavmule, yummybacon5


sse-starlette's Issues

sse-starlette generator getting stuck after closing the sse client

I am trying to use SSE as a FastAPI endpoint, and the event generator never exits; it gets stuck.

Machine details:

  • Ubuntu 20
  • Python 3.8

Related Packages:

  • fastapi==0.67.0
  • sse-starlette==0.6.2
  • uvicorn==0.13.3

Server code:

import asyncio
import uvicorn
from fastapi import FastAPI, Request
from sse_starlette.sse import EventSourceResponse
from concurrent.futures import CancelledError


app = FastAPI()


@app.get("/")
async def get_events_stream(request: Request):

    async def event_generator():
        count = 0
        print("starting the loop")
        while True:
            try:
                print(f"Sleeping - {count}")
                await asyncio.sleep(1)
                print("Finished sleeping")
            except CancelledError:
                if not await request.is_disconnected():
                    print('Cancelled future error')
            except Exception:
                print('Exception!')

            if await request.is_disconnected():
                print("disconnected")
                break

            yield {'event': 'message', 'data': count}
            count += 1
        print("ended the loop - finished serving")

    return EventSourceResponse(event_generator(), ping=1)


uvicorn.run(app=app, port=5555, loop="asyncio")

Client (you can also just open http://localhost:5555 in a browser):

from sseclient import SSEClient

address = "http://127.0.0.1:5555/"


read_client = SSEClient(address)

for new_event in read_client:
    print(f'event: {new_event.event}, {new_event.data}')

Output:

INFO:     Started server process [15989]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:5555 (Press CTRL+C to quit)
INFO:     127.0.0.1:51114 - "GET / HTTP/1.1" 200 OK
starting the loop
Sleeping - 0
Finished sleeping
Sleeping - 1
Finished sleeping
Sleeping - 2
Finished sleeping
Sleeping - 3
Finished sleeping
Sleeping - 4  # The client stopped.

Handling Errors Best Practices

What is the best practice when it comes to handling errors with this library? Should I be wrapping my event generator in a try/except block and returning an {"event": "error"} response? This would be a great addition to the docs and I'll gladly open a pull request.

Here's some example code illustrating the situation with FastAPI.

import asyncio
from typing import Any

from fastapi import APIRouter, HTTPException, Request
from sse_starlette.sse import EventSourceResponse

router = APIRouter()


async def numbers(minimum: int, maximum: int) -> Any:
    for i in range(minimum, maximum + 1):
        if i == 3:
            raise HTTPException(status_code=400)
        await asyncio.sleep(0.9)
        yield dict(data=i)


@router.post(
    "/test-sse",
)
async def test_sse(
    request: Request,
) -> Any:
    generator = numbers(1, 5)
    return EventSourceResponse(generator)

Is there any way to yield messages on demand?

All examples feature a simple use case with frequent yields with asyncio.sleep() delay between them. Is there any way I can yield a message on demand?

My use case is to send sporadic messages which can be hours or seconds apart from each other, so spinning up an endless loop which would fetch messages every 100 ms would be a waste of resources. Is there any way I can avoid doing that?

Thank you for your work, awesome module.

Custom async generators support

Hi!

The EventSourceResponse class won't work with custom async iterators (like the Stream class in the example below).

But if we add one more check here:

if inspect.isasyncgen(content):

Like this:

if inspect.isasyncgen(content) or isinstance(content, AsyncIterable):

Or even like this:

if isinstance(content, AsyncIterable):

The code below will work.

The code:

import asyncio

from fastapi import FastAPI, Depends
from sse_starlette import EventSourceResponse, ServerSentEvent
from starlette import status


class Stream:
    def __init__(self) -> None:
        self._queue = asyncio.Queue[ServerSentEvent]()

    def __aiter__(self) -> "Stream":
        return self

    async def __anext__(self) -> ServerSentEvent:
        return await self._queue.get()

    async def asend(self, value: ServerSentEvent) -> None:
        await self._queue.put(value)


app = FastAPI()

_stream = Stream()
app.dependency_overrides[Stream] = lambda: _stream


@app.get("/sse")
async def sse(stream: Stream = Depends()) -> EventSourceResponse:
    return EventSourceResponse(stream)


@app.post("/message", status_code=status.HTTP_201_CREATED)
async def send_message(message: str, stream: Stream = Depends()) -> None:
    await stream.asend(
        ServerSentEvent(data=message)
    )


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="127.0.0.1", port=8080)

Authentication issue when using sysid/sse-starlette

I'm developing an API using Starlette and I've implemented authentication following their example (see https://www.starlette.io/authentication/). I've also added sysid/sse-starlette to the project and adding @requires("authenticated") to my streaming API endpoint always returns a 403 Forbidden return code. No issues with the other GET or POST endpoints. Also note that the authentication logic succeeds at validating my credentials when hitting the streaming endpoint but somehow fails when returning the AuthCredentials for the user.

Unfortunately, I cannot post my code here but I was wondering if you had any idea of what could be happening. If not, I'll put together a simple and shareable example to hopefully replicate the issue and we can go from there.

Thank you!

Contribution guidelines missing

Hey there! I have an issue with your project and was willing to try to fix it myself, or at least make a minimal reproducible example for the issue.
First, I need to install the project locally to test it out, look at the codebase, etc., but there's no guide for that. The packaging system is unclear to me: you have a Pipfile and setup.py, and also a Makefile that doesn't have an install target. What is the preferred way to install it for development?

Terminating client stream hangs background task on shutdown

python: 3.8.1 (windows 10)
starlette: 0.13.2
client: curl localhost:8080/streaming-endpoint

code snippet:

@server.route('/streaming-endpoint', methods=['GET'])
async def stream_stats(req: Request):
    async def event_publisher():
        while True:
            yield dict(id=..., event=..., data=...)
            await asyncio.sleep(0.2)
    return EventSourceResponse(event_publisher())

log output:

INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO:     127.0.0.1:63985 - "GET /streaming-endpoint HTTP/1.1" 200
<<< disconnect client, wait a few seconds, then ^C >>>

INFO:     Shutting down
INFO:     Waiting for background tasks to complete. (CTRL+C to force quit)
<<< hangs here until ^C >>>

INFO:     Finished server process [364]
INFO:     ASGI 'lifespan' protocol appears unsupported.
ERROR:    Task was destroyed but it is pending!
task: <Task pending name='Task-2' coro=<LifespanOn.main() done, defined at ...\lib\site-packages\uvicorn\lifespan\on.py:44> wait_for=<Future cancelled>>

Is GeneratorExit one of expected exceptions?

Hi,
I have some simple endpoint using sse:

async def subscribe(request: Request):
    topics = request.path_params["topics"]

    async def event_publisher(topics: List[str]):
        receiver = Receiver(topics)
        try:
            while True:
                try:
                    async with timeout(WAIT_FOR_NEW_MESSAGE_TIMEOUT):
                        message = await receiver.get_message()
                except TimeoutError:
                    continue

                yield message
        except CancelledError as e:
            logger.info(f"Disconnected from client (via refresh/close) {request.client}")
            raise e

    return EventSourceResponse(event_publisher(topics.split(",")), ping=config["SSE_KEEPALIVE"])

Every once in a while I notice that GeneratorExit shows up at this endpoint. Is this expected behavior? I thought client hang-up was handled by CancelledError, so where does this GeneratorExit come from?

UnicodeDecodeError

I'm trying to use sse-starlette to stream raw bytes (e.g. live video). I've noticed that EventSourceResponse tries to decode each chunk for debug logging:

_log.debug(f"chunk: {chunk.decode()}")
_log.debug(f"ping: {ping.decode()}")

which raises a UnicodeDecodeError, as chunk in my case is binary. What would be the best way to fix it? Happy to send a PR based on your suggestion.
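
One possible fix (a sketch, not necessarily what was merged): make the debug statements tolerant of binary payloads instead of calling a bare decode():

# Log the repr of the raw bytes; this never raises on binary data:
_log.debug("chunk: %r", chunk)

# Or decode defensively, replacing undecodable bytes:
_log.debug(f"ping: {ping.decode(errors='replace')}")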

Is there a way to stop receiving 'event: ping'?

When receiving output from sse-starlette using EventSourceResponse(), my output looks something like this:

data: myoutput
data: myoutput

event: ping
data: 2020-12-22 15:33:27.463789

data: myoutput
data: myoutput

I get the event:ping messages every 15 seconds of streaming my output. Is there a way to disable this behaviour and stop getting those occasional 'event' messages?
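
The README's ping customization covers this: there is no documented way to switch pings off entirely (they keep proxies from dropping idle connections), but you can turn them into SSE comments, which EventSource clients ignore. A minimal sketch, reusing the numbers generator from the usage section:

from sse_starlette.sse import EventSourceResponse, ServerSentEvent

async def sse(request):
    return EventSourceResponse(
        numbers(1, 5),
        # comments start with ":" on the wire and never fire client events
        ping_message_factory=lambda: ServerSentEvent(comment="keepalive"),
    )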

returning a stream event without "data:"

Hi

It seems like the "data:" prefix is always added when a stream text fragment is returned to the client.

In most cases this can be worked around on the client, but it would be better to be able to specify whether you really need this prefix or just want the raw data returned as-is.

code is in sse.py:

    if self.data is not None:
        for chunk in self.LINE_SEP_EXPR.split(str(self.data)):
            buffer.write(f"data: {chunk}")
            buffer.write(self._sep)

I can do a PR/MR if it makes things easier; the simplest approach would be to add a field to the yielded dict that goes into the ServerSentEvent, for example:

yield dict(data={"text": "my text"}, data_prefix="")

GPU memory footprint

The streaming service was built with FastAPI; Postman, a custom program, or curl is used for testing. After the request has been completely responded to, it still occupies GPU memory that is never reclaimed. How do I solve this?

The following code:

@app.post("/v1/chat/completions", response_model=ChatCompletionResponse)
async def create_chat_completion(request: ChatCompletionRequest):
global model, tokenizer

if request.messages[-1].role != "user":
    raise HTTPException(status_code=400, detail="Invalid request")
query = request.messages[-1].content
print("query: ",query)
prev_messages = request.messages[:-1]
if len(prev_messages) > 0 and prev_messages[0].role == "system":
    query = prev_messages.pop(0).content + query

history = []
if len(prev_messages) % 2 == 0:
    for i in range(0, len(prev_messages), 2):
        if prev_messages[i].role == "user" and prev_messages[i+1].role == "assistant":
            history.append([prev_messages[i].content, prev_messages[i+1].content])

if request.stream:
    generate = predict(query, history, request.model)
    return EventSourceResponse(generate, media_type="text/event-stream")
   # response = EventSourceResponse(generate)
    #asyncio.create_task(manage_response(response))
    #return response
response, _ = model.chat(tokenizer, query, history=history)
choice_data = ChatCompletionResponseChoice(
    index=0,
    message=ChatMessage(role="assistant", content=response),
    finish_reason="stop"
)

return ChatCompletionResponse(model=request.model, choices=[choice_data], object="chat.completion")

async def predict(query: str, history: List[List[str]], model_id: str):
global model, tokenizer

choice_data = ChatCompletionResponseStreamChoice(
    index=0,
    delta=DeltaMessage(role="assistant"),
    finish_reason=None
)
chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object="chat.completion.chunk")
yield "{}".format(chunk.json(exclude_unset=True, ensure_ascii=False))

current_length = 0

for new_response, _ in model.stream_chat(tokenizer, query, history):
    if len(new_response) == current_length:
        continue

    new_text = new_response[current_length:]
    current_length = len(new_response)

    choice_data = ChatCompletionResponseStreamChoice(
        index=0,
        delta=DeltaMessage(content=new_text),
        finish_reason=None
    )
    chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object="chat.completion.chunk")
    yield "{}".format(chunk.json(exclude_unset=True, ensure_ascii=False))


choice_data = ChatCompletionResponseStreamChoice(
    index=0,
    delta=DeltaMessage(),
    finish_reason="stop"
)
chunk = ChatCompletionResponse(model=model_id, choices=[choice_data], object="chat.completion.chunk")
yield "{}".format(chunk.json(exclude_unset=True, ensure_ascii=False))
#torch_gc()
yield '[DONE]'
await asyncio.sleep(0.0001)

Ping task exception was never retrieved

Hi folks,
I'm using sse-starlette in my project and I've recently tried to upgrade starlette 0.14.2 -> 0.16.0. After this, I'm getting a sporadic warning message:

Task exception was never retrieved
future: <Task finished name='Task-31378' coro=<EventSourceResponse._ping() done, defined at sse_starlette/sse.py:268> exception=ClosedResourceError()>"

Unfortunately I cannot reproduce this issue locally. I did some investigation and found out that run_until_first_complete in sse.py throws CancelledError, so stop_streaming is not executed.

My guess is that this behaviour is due to anyio in starlette 0.16.0. I've tried to wrap the code in __call__ in a CancelScope like so:

async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
    with CancelScope(shield=True):
        await run_until_first_complete(
            (self.stream_response, {"send": send}),
            (self.listen_for_disconnect, {"receive": receive}),
            (self.listen_for_exit_signal, {}),
        )
        self.stop_streaming()
        # ... and so on

This seems to work but I'm not pretty sure this is the right way to fix the issue.

Any ideas how to deal with it?

Usage without Async Generators

I am not entirely sure this is an SSE-starlette issue, but I think it may be discussion-worthy.

I have a very long running FastAPI SSE server that is sending out updates every 5 to 10 seconds. I need the server to be fetching state updates upstream regardless of whether or not there are active connections to the FastAPI route.
The basic code looked something like,

state: Optional[dict] = None
async def process(timeout=0.0):
    global state

    while True:
        fetch_result = fetch_stuff()
        if fetch_result:
            state = fetch_result.dict()

        await asyncio.sleep(5)

@app.on_event("startup")
def startup_event():
    asyncio.create_task(process())

@app.get("/v1/stream")
async def stream():
    def event_stream():
        while True:
            yield json.dumps(state)
            time.sleep(5)
    return EventSourceResponse(event_stream())

This was working fine for a few hours, but eventually, the server would just keep sending the same state updates over the SSE connection (i.e. the state was no longer updating).

I had a hunch that the example in sse-starlette without async generators would help, so per the example code, I modified my code to:

@app.get("/v1/stream")
async def stream(req: Request):
    send_chan, recv_chan = anyio.create_memory_object_stream(10)

    async def event_publisher(inner_send_chan: MemoryObjectSendStream):
        async with inner_send_chan:
                try:
                    while True:
                         await inner_send_chan.send(dict(data=json.dumps(data)))
                         await anyio.sleep(5.0)
                except anyio.get_cancelled_exc_class() as e:
                    log.info(f"Disconnected from client (via refresh/close) {req.client}")
                    with anyio.move_on_after(1, shield=True):
                        await inner_send_chan.send(dict(closing=True))
                        raise e

    return EventSourceResponse(
        recv_chan, data_sender_callable=partial(event_publisher, send_chan)
    )

This was working for several days until I made some updates to the fetch_stuff function. I need to debug the function and figure out what's going on, but I also would like my server not to stop sending out updates if fetch_stuff has an error.

My first question is, does what I'm doing make any sense lol? I am still trying to wrap my head around all the async concepts involved here.. Please let me know if I'm misusing anything.
Next, is there any reason why we don't pull the while True loop in the above code outside of the try/except clause?

while True:
    try:
        ...
    except:
        ...

Would this make the API more robust to errors in updating the state?
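
One version of that pattern, sketched against the process/fetch_stuff names above (which exceptions you should swallow depends on your code): errors in a single iteration are logged and the loop keeps running.

import asyncio
import logging

log = logging.getLogger(__name__)

async def process():
    global state
    while True:
        try:
            fetch_result = fetch_stuff()  # assumed from the snippet above
            if fetch_result:
                state = fetch_result.dict()
        except Exception:
            # one failed fetch is logged but does not end the updater task;
            # CancelledError is a BaseException, so shutdown still propagates
            log.exception("fetch_stuff failed, retrying on the next tick")
        await asyncio.sleep(5)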

Heartbeat event?

I have a FastAPI server with sse-starlette for SSE. Everything works smoothly except occasionally, my client would receive an event with data looking like a timestamp:

data= 2023-04-07 23:36:05.744043
data= 2023-04-07 23:36:20.893051

I checked my server code and can be 100% sure that my server never sent events with these data. What could be the reason? How can I filter/ignore them?

Postman SSE Support does not work

First of all, thank you for the great repository!

Postman recently gained support for SSE and it works as expected for some SSE endpoints I found online.

However, I tried to implement a Server-Sent Events endpoint using FastAPI/Starlette with both the built-in Starlette StreamingResponse and your sse-starlette GitHub repo, and neither of them produced the new streaming UI in Postman.

Any Idea on what might be the cause?

(tested it with your "usage" code from the repo readme)

RuntimeError in stream_generator.py example

While investigating #48 further, I tried to run the stream_generator example, as I expected it to exhibit the same behavior.

First, I had to fix a queue declaration:

- self._queue = asyncio.Queue[ServerSentEvent]()
+ self._queue: asyncio.Queue[ServerSentEvent] = asyncio.Queue()

Then, issuing a GET /sse throws the following error:

RuntimeError: Task <Task pending name='sse_starlette.sse.EventSourceResponse.__call__.<locals>.wrap' coro=<EventSourceResponse.__call__.<locals>.wrap() running at /home/duranda/devel/fastapi-example/venv/lib/python3.8/site-packages/sse_starlette/sse.py:230> cb=[TaskGroup._spawn.<locals>.task_done() at /home/duranda/devel/fastapi-example/venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py:726]> got Future <Future pending> attached to a different loop

Feature request: Option to disable logging / stop using uvicorn logger

Hi! First of all, thanks for this library, it is working great.

I would like an option to disable the debug logging from this library because it is spamming our dev environment with ping logs :)
Since you are using the Uvicorn logger, my current options are disabling all logs from Uvicorn, which I don't want to do, or monkeypatching your methods to remove logging, which is unnecessary code.

It would be great to have a parameter to disable logging, or for the library to use its own logger (instead of uvicorn's) so I can disable log propagation.
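
Until such an option exists, one workaround (a sketch using only the standard library, assuming the records really do originate on the uvicorn logger as stated above) is to attach a filter that drops the ping records:

import logging

class DropPings(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # keep every record except the ping debug lines
        return "ping:" not in record.getMessage()

logging.getLogger("uvicorn").addFilter(DropPings())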

generator stays active after calling EventSource.close() and closing the browser

Hi, I think I am misunderstanding something or doing something wrong, but I am completely stuck on using this package due to event generators never stopping when the connection originates from an EventSource creation in the browser. I've tried my best to capture what I'm experiencing below. I really appreciate any help or guidance.

I have a very simple test setup using basic JavaScript creating an EventSource and a setTimeout that closes it 5 seconds later. I have a simple event generator that is sending a message every second.

When I test this with cURL, everything works as expected.

When I use the browser, the generator continues to run even after calling close() and closing the browser completely.

Here is the stream handler in Python

    def new_messages():
        # Add logic here to check for new messages
        yield "Hello World"

    async def event_generator():
        while True:
            print("stream")
            # If client closes connection, stop sending events
            disconnected = await request.is_disconnected()
            if disconnected:
                break

            # Checks for new messages and return them to client if any
            if new_messages():
                yield {
                    "event": "new_message",
                    "id": "message_id",
                    "retry": RETRY_TIMEOUT,
                    "data": "message_content",
                }

            await asyncio.sleep(STREAM_DELAY)

    return EventSourceResponse(event_generator())

Here is the JavaScript

        console.log("Creating eventSource");
        this.eventSource = new EventSource(endpoint);

        this.eventSource.addEventListener(
          "message",
          (ev): any => {
            console.log(ev);
          }
        );

        setTimeout(() => {
          console.log("Calling eventSource.close()");
          this.eventSource.close();
        }, 5000);

Using cURL I can call this endpoint and when I Ctrl-C, I see the disconnect as expected.

Using a very simple JavaScript client in Chrome, after calling close OR closing the browser completely, the generator task stays active.

After Ctrl-C on the uvicorn server, you can see all the tasks shutting down.

^CINFO:     Shutting down
INFO:     Waiting for connections to close. (CTRL+C to force quit)
cancelled
TRACE:    127.0.0.1:37810 - ASGI [5] Send {'type': 'http.response.body', 'body': '<0 bytes>', 'more_body': False}
TRACE:    127.0.0.1:37810 - HTTP connection lost
TRACE:    127.0.0.1:37810 - ASGI [5] Completed
cancelled
TRACE:    127.0.0.1:37804 - ASGI [3] Send {'type': 'http.response.body', 'body': '<0 bytes>', 'more_body': False}
TRACE:    127.0.0.1:37804 - HTTP connection lost
TRACE:    127.0.0.1:37804 - ASGI [3] Completed
cancelled
TRACE:    127.0.0.1:37860 - ASGI [7] Send {'type': 'http.response.body', 'body': '<0 bytes>', 'more_body': False}
TRACE:    127.0.0.1:37860 - HTTP connection lost
TRACE:    127.0.0.1:37860 - ASGI [7] Completed
INFO:     Waiting for application shutdown.
TRACE:    ASGI [1] Receive {'type': 'lifespan.shutdown'}
TRACE:    ASGI [1] Send {'type': 'lifespan.shutdown.complete'}
TRACE:    ASGI [1] Completed
INFO:     Application shutdown complete.
INFO:     Finished server process [9310]
INFO:     Stopping reloader process [9299]

can not exit async generator in shutdown-event when shutting down server with fast_api?

FastAPI has a "shutdown" event (see the documentation) and I would like to use it to give the event generator a signal to exit its loop. But the shutdown event in FastAPI never gets called when using sse-starlette, because sse-starlette doesn't shut down by itself while it is waiting for clients to disconnect. So I am getting the

Waiting for connections to close. (CTRL+C to force quit) error, which was also mentioned in the readme.

So, how would I do that? Is there anything available out-of-the-box in Starlette/FastAPI?

Response does not stream to browser from localhost (Chrome or Firefox)

I have used the exact example from the README. While it works fine from curl (each event arrives 0.9 seconds apart), in the browser (Chrome or Firefox on Windows), it waits ~5 seconds, then shows the entire body at once after the stream has closed.

Have you seen this behavior? Is it an ASGI issue, or something with browsers? I have similar issues with many Python ASGI and SSE implementations. I hope I am just missing something obvious.

How can I make pub-sub pattern with sse-starlette? Any suggestion?

Just like the queue-based approach in this link: encode/starlette#20 (comment)

Example something like this.

from flask import Flask
from flask_sse import sse

app = Flask(__name__)
app.config["REDIS_URL"] = "redis://localhost"
app.register_blueprint(sse, url_prefix='/stream')

@app.route('/send')
def send_message():
    sse.publish({"message": "Hello!"}, type='greeting')
    return "Message sent!"

so I can use sse.publish anywhere in the code to send things like notifications to the client?

PS. I am using FastAPI
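
There is no built-in broadcaster, but a rough sketch with FastAPI (all names here are made up for illustration): give every subscriber its own queue, so a published message is copied to each client instead of being consumed by just one.

import asyncio
from typing import Set

from fastapi import FastAPI
from sse_starlette.sse import EventSourceResponse

app = FastAPI()
subscribers: Set[asyncio.Queue] = set()

def publish(message: dict) -> None:
    # fan the message out to every connected client
    for queue in subscribers:
        queue.put_nowait(message)

@app.get("/stream")
async def stream():
    queue: asyncio.Queue = asyncio.Queue()
    subscribers.add(queue)

    async def event_generator():
        try:
            while True:
                yield await queue.get()
        finally:
            subscribers.discard(queue)  # drop this client on disconnect

    return EventSourceResponse(event_generator())

@app.post("/send")
async def send_message(message: str):
    publish({"event": "greeting", "data": message})
    return "Message sent!"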

Detecting spontaneous client disconnections

Thanks for building this @sysid ! Very useful and works like a charm.

I'm using this in a project in place of a websocket (as I only need to send data to the client). One area I'm having trouble with is identifying sudden disconnections - e.g. browser tab closes. I expect I'm just misunderstanding the mechanism to detect this.

What I expect to happen

When I spontaneously close a connection to the SSE handler I expect an asyncio.CancelledError exception to be raised

What actually happens

No exception is raised

Steps to reproduce:

  1. Run: $ uvicorn main:app --reload
  2. Open : http://localhost:8000/
  3. Observe the console - which will print "yielding some message {some number}"
  4. Close the browser tab from (2)
  5. Continue to see the console messages updating

Minimally reproducible example

import asyncio
import uvicorn
from starlette.applications import Starlette
from starlette.routing import Route
from sse_starlette.sse import EventSourceResponse


async def numbers(request):
    try:
        index = 0
        while True:
            index += 1
            await asyncio.sleep(1)
            if await request.is_disconnected():
                await asyncio.sleep(5)
                raise asyncio.CancelledError("Client disconnected")
            print("yielding some message", index)
            yield dict(data="Some message")
    except asyncio.CancelledError as e:
        await asyncio.sleep(5)
        print("CancelledError", e)
    except asyncio.TimeoutError as e:
        await asyncio.sleep(5)
        print("TimeoutError", e)


async def sse(request):
    gen = numbers(request)
    return EventSourceResponse(
        gen,
        ping=1,
    )


routes = [Route("/", endpoint=sse)]

app = Starlette(debug=True, routes=routes)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000, log_level="info")

Do you have any clues as to where I might be going wrong?

asyncio.InvalidStateError on async generator when connection is closed with FastAPI

I am using a simple publish/subscribe pattern with FastAPI in order to broadcast data to clients using SSE:

import asyncio
from fastapi import FastAPI, Request
from sse_starlette.sse import EventSourceResponse


class PubSub:
    def __init__(self):
        self.waiter = asyncio.Future()

    def publish(self, value):
        waiter, self.waiter = self.waiter, asyncio.Future()
        waiter.set_result((value, self.waiter))

    async def subscribe(self):
        waiter = self.waiter
        while True:
            value, waiter = await waiter
            yield value

    __aiter__ = subscribe

pubsub = PubSub()

async def ticker(pubsub):
    counter = 0
    while True:
        pubsub.publish(counter)
        counter += 1
        await asyncio.sleep(1)

app = FastAPI()

@app.on_event("startup")
async def on_startup():    
    asyncio.create_task(ticker(pubsub), name='my_task')

@app.get('/stream')
async def message_stream(request: Request):
    async def event_publisher():
        try:
            while True:
                async for event in pubsub:
                    yield dict(data=event)
        except asyncio.CancelledError as e:
            print(f"Disconnected from client (via refresh/close) {request.client}")
            # Do any other cleanup, if any
            raise e
    return EventSourceResponse(event_publisher())

However, the task "my_task" is somehow killed as soon as the first client disconnects:

Task exception was never retrieved
future: <Task finished name='my_task' coro=<ticker() done, defined at /home/duranda/devel/fastapi-pubsub/main.py:51> exception=InvalidStateError('invalid state')>
Traceback (most recent call last):
  File "/home/duranda/devel/fastapi-pubsub/main.py", line 54, in ticker
    pubsub.publish(counter)
  File "/home/duranda/devel/fastapi-pubsub/main.py", line 38, in publish
    waiter.set_result((value, self.waiter))
asyncio.exceptions.InvalidStateError: invalid state

I also tried with other patterns, such as using AsyncIteratorObserver from aioreactive with the same result: the task linked to the async iterator ends up with an InvalidStateError.

What is the difference between SSE and StreamingResponse


from meutils.pipe import *

from typing import Generator
from fastapi import FastAPI, Response, status, HTTPException
from fastapi.responses import StreamingResponse

app = FastAPI()


def generate_data():
    for i in range(5):
        time.sleep(i)
        print(i)
        yield f"data {i}\n"


@app.get("/stream")
async def stream_data():
    return StreamingResponse(generate_data(), media_type='text/event-stream')


if __name__ == '__main__':
    import uvicorn

    uvicorn.run(app)
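
Not an authoritative answer, but the practical difference: StreamingResponse sends exactly the bytes you yield, so you must format the SSE frames (and set media_type='text/event-stream') yourself, while EventSourceResponse speaks the protocol for you: field framing, periodic pings, and disconnect handling. A sketch of the contrast:

# With StreamingResponse you hand-build the SSE wire format:
async def raw_stream():
    for i in range(5):
        yield f"data: {i}\r\n\r\n"  # one frame, terminated by a blank line

# With EventSourceResponse you yield structured events and the library
# does the data:/event:/id:/retry: framing for you:
async def sse_stream():
    for i in range(5):
        yield {"event": "message", "data": i}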

RuntimeError during unit tests: Future attached to a different loop

I tried to write pytest test cases for a FastAPI route that returns an event source/stream. The first test always succeeds, but the second test, despite identical content, consistently runs into an error: RuntimeError: Task ... got Future ... attached to a different loop. I'm not sure if the error is related to sse_starlette, fastapi, httpx, or pytest-asyncio. I'll try it here first.

Here is a minimal example that can be used to reproduce the issue:

from httpx import AsyncClient
from fastapi import FastAPI
from sse_starlette.sse import EventSourceResponse
import asyncio

app = FastAPI()

@app.post("/foo")
async def http_foo() -> EventSourceResponse:
    async def event_generator():
        yield {"data": "1"}
        yield {"data": "2"}
        yield {"data": "3"}
    return EventSourceResponse(event_generator())

def parse_event_stream(text):
    events = []
    for line in text.strip().split("\r\n\r\n"):
        events.append(line[len("data:"):].strip())
    return events

async def test_first():
    client = AsyncClient(app=app, base_url="http://test")
    async with client:
        response = await client.post("/foo")
    events = parse_event_stream(response.text)
    assert events == ["1", "2", "3"]

async def test_second():
    client = AsyncClient(app=app, base_url="http://test")
    async with client:
        response = await client.post("/foo")
    events = parse_event_stream(response.text)
    assert events == ["1", "2", "3"]

Versions used:

sse-starlette 1.6.1
fastapi 0.101.0
httpx 0.24.1
pytest 7.4.0
pytest-asyncio 0.21.1

My pytest config looks like this:

[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]

And here is the full error log:

$ poetry run pytest -vv -s -k sse
===================================================================== test session starts ======================================================================
platform darwin -- Python 3.8.15, pytest-7.4.0, pluggy-1.2.0 -- /Users/chris/Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/bin/python
cachedir: .pytest_cache
rootdir: /Users/chris/test-project
configfile: pyproject.toml
testpaths: tests
plugins: timeout-2.1.0, asyncio-0.21.1, mock-3.11.1, anyio-3.7.1
asyncio: mode=auto
collected 54 items / 52 deselected / 2 selected                                                                                                                

tests/test_sse.py::test_first PASSED
tests/test_sse.py::test_second FAILED

=========================================================================== FAILURES ===========================================================================
_________________________________________________________________________ test_second __________________________________________________________________________

    async def test_second():
        client = prepare_test()
        async with client:
>           response = await client.post("/foo")

tests/test_sse.py:33: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/httpx/_client.py:1848: in post
    return await self.request(
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/httpx/_client.py:1530: in request
    return await self.send(request, auth=auth, follow_redirects=follow_redirects)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/httpx/_client.py:1617: in send
    response = await self._send_handling_auth(
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/httpx/_client.py:1645: in _send_handling_auth
    response = await self._send_handling_redirects(
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/httpx/_client.py:1682: in _send_handling_redirects
    response = await self._send_single_request(request)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/httpx/_client.py:1719: in _send_single_request
    response = await transport.handle_async_request(request)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/httpx/_transports/asgi.py:162: in handle_async_request
    await self.app(scope, receive, send)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/fastapi/applications.py:289: in __call__
    await super().__call__(scope, receive, send)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/starlette/applications.py:122: in __call__
    await self.middleware_stack(scope, receive, send)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/starlette/middleware/errors.py:184: in __call__
    raise exc
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/starlette/middleware/errors.py:162: in __call__
    await self.app(scope, receive, _send)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/starlette/middleware/exceptions.py:79: in __call__
    raise exc
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/starlette/middleware/exceptions.py:68: in __call__
    await self.app(scope, receive, sender)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py:20: in __call__
    raise e
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py:17: in __call__
    await self.app(scope, receive, send)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/starlette/routing.py:718: in __call__
    await route.handle(scope, receive, send)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/starlette/routing.py:276: in handle
    await self.app(scope, receive, send)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/starlette/routing.py:69: in app
    await response(scope, receive, send)
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/sse_starlette/sse.py:251: in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/anyio/_backends/_asyncio.py:597: in __aexit__
    raise exceptions[0]
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/sse_starlette/sse.py:240: in wrap
    await func()
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/sse_starlette/sse.py:215: in listen_for_exit_signal
    await AppStatus.should_exit_event.wait()
../Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/anyio/_backends/_asyncio.py:1778: in wait
    if await self._event.wait():
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <asyncio.locks.Event object at 0x1346e4730 [unset]>

    async def wait(self):
        """Block until the internal flag is true.
    
        If the internal flag is true on entry, return True
        immediately.  Otherwise, block until another coroutine calls
        set() to set the flag to true, then return True.
        """
        if self._value:
            return True
    
        fut = self._loop.create_future()
        self._waiters.append(fut)
        try:
>           await fut
E           RuntimeError: Task <Task pending name='sse_starlette.sse.EventSourceResponse.__call__.<locals>.wrap' coro=<EventSourceResponse.__call__.<locals>.wrap() running at /Users/chris/Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/sse_starlette/sse.py:240> cb=[TaskGroup._spawn.<locals>.task_done() at /Users/chris/Library/Caches/pypoetry/virtualenvs/test-project-h2N-_wM8-py3.8/lib/python3.8/site-packages/anyio/_backends/_asyncio.py:661]> got Future <Future pending> attached to a different loop

../.pyenv/versions/3.8.15/lib/python3.8/asyncio/locks.py:309: RuntimeError
=================================================================== short test summary info ====================================================================
FAILED tests/test_sse.py::test_second - RuntimeError: Task <Task pending name='sse_starlette.sse.EventSourceResponse.__call__.<locals>.wrap' coro=<EventSourceResponse.__call__.<locals>.wrap() run...
========================================================== 1 failed, 1 passed, 52 deselected in 1.39s ==========================================================

Allow cache control header override

Hi, firstly, thanks for the amazing work. I am trying to use Fastly to fan out one stream to multiple clients, but for that to work the response must be cacheable.

We do request collapsing automatically (unless you turn it off), but for requests to be collapsed, the origin response must be cacheable and still 'fresh' at the time of the new request. You don't actually want us to cache the event stream after it ends; if we did, future requests to join the stream would just get an instant response containing a batch of events that happened over some earlier period in time. But you do want us to buffer the response as well as streaming it out, so that a cache record exists for new clients to join onto. That means your time to live (TTL) for the stream response must be the same duration as you intend to stream for. Say your server is configured to serve streams in 30-second segments (the browser reconnects after each segment ends): the response TTL of the stream should be exactly 30 seconds (or 29, if you want to cover the possibility of clock-mis-syncs):
Ref: https://www.fastly.com/blog/server-sent-events-fastly.

But it looks like sse-starlette doesn't allow certain headers to be overridden:


        # mandatory for servers-sent events headers
        _headers["Cache-Control"] = "no-cache"
        _headers["Connection"] = "keep-alive"
        _headers["X-Accel-Buffering"] = "no"


I am more than happy to send a pull request if that sounds good to you.

Replace asyncio.sleep() with anyio.sleep()

This project uses anyio, which is proper since starlette began using it for concurrency some time ago. However, this project still contains calls to asyncio.sleep(). These calls should be replaced with anyio.sleep() so that users of the package can use it with Trio.

Multiple events grouped as one event

Hi, I have an issue on the client side when using an API built with this library/FastAPI, and I am unable to pinpoint the cause. Sometimes I receive events grouped as one single event, like this:

--
event: token
data:

event: token
data: It

event: token
data: does
--

instead of

--
event: token
data:
--
event: token
data: It
--
event: token
data: does
--

Are there any settings or changes one can make to prevent this? In my generator I am actually yielding ServerSentEvents, so I believe it is not related to the generator itself; otherwise this wouldn't work. Any advice is appreciated!
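
If the client consumes the raw byte stream (anything other than the browser's EventSource, which parses internally), several events can legitimately arrive in one network chunk: SSE only guarantees that events are separated by a blank line, not that each one arrives in its own read. A client-side sketch that re-chunks on the delimiter:

async def iter_sse_events(byte_chunks, sep: bytes = b"\r\n\r\n"):
    """Re-chunk an arbitrary async byte stream into single SSE events."""
    buffer = b""
    async for chunk in byte_chunks:
        buffer += chunk
        # a chunk may contain zero, one, or several complete events
        while sep in buffer:
            event, buffer = buffer.split(sep, 1)
            yield event.decode()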

EventSourceResponse performance

Hey @sysid, thanks for providing this library :)

Overview

We (the company I work for) are currently using sse-starlette to build some of our services. In one somewhat high-load service, we discovered a potential performance bottleneck.

It seems that this pull request https://github.com/sysid/sse-starlette/pull/55/files introduced an anyio.Lock to prevent a race condition between the _ping and the stream_response tasks. This lock seems to be a bit slow.

Proposal

We experimented with two solutions:

1: Remove the ping task altogether

After reading (skimming? ^^) https://html.spec.whatwg.org/multipage/server-sent-events.html and the comment of the _ping function in your code, it seems that a ping is not strictly required by the SSE protocol, so we could provide an EventSourceResponse subclass that just doesn't do it. If a ping is still required, users of the library could integrate it themselves in their stream_response.

Example implementation

class EventSourceResponseNoPing(EventSourceResponse):
    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:

        async with anyio.create_task_group() as task_group:
            # https://trio.readthedocs.io/en/latest/reference-core.html#custom-supervisors
            async def wrap(func: Callable[[], Coroutine[None, None, None]]) -> None:
                await func()
                # noinspection PyAsyncCall
                task_group.cancel_scope.cancel()

            task_group.start_soon(wrap, partial(self.stream_response, send))
            task_group.start_soon(wrap, self.listen_for_exit_signal)

            if self.data_sender_callable:
                task_group.start_soon(self.data_sender_callable)

            await wrap(partial(self.listen_for_disconnect, receive))

        if self.background is not None:  # pragma: no cover, tested in StreamResponse
            await self.background()

2: Only ping on timeout

An alternative approach we investigated was moving the ping inside the stream_response function, so that the same loop would send both the data and the ping, therefore not requiring a lock. This turned out a bit tricky, since we used anyio.fail_after, which cancels the running tasks and requires the async generators provided by users of the library to be able to handle those cancellations. It seems that non-class-based async generators struggle with this.

Example code:

class EventSourceResponsePingOnCancel(EventSourceResponse):
    async def stream_response(self, send) -> None:
        await send(
            {
                "type": "http.response.start",
                "status": self.status_code,
                "headers": self.raw_headers,
            }
        )
        it = aiter(self.body_iterator)
        while True:
            try:
                with anyio.fail_after(self._ping_interval, False):
                    data = await anext(it)
                chunk = ensure_bytes(data, self.sep)
                _log.debug(f"chunk: {chunk.decode()}")
                await send({"type": "http.response.body", "body": chunk, "more_body": True})
            except TimeoutError:
                ping = (
                    ServerSentEvent(comment=f"ping - {datetime.utcnow()}").encode()
                    if self.ping_message_factory is None
                    else ensure_bytes(self.ping_message_factory(), self.sep)
                )
                _log.debug(f"ping: {ping.decode()}")
                await send({"type": "http.response.body", "body": ping, "more_body": True})
            except StopAsyncIteration:
                _log.debug("end of iterator")
                break

        await send({"type": "http.response.body", "body": b"", "more_body": False})

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:

        async with anyio.create_task_group() as task_group:
            # https://trio.readthedocs.io/en/latest/reference-core.html#custom-supervisors
            async def wrap(func: Callable[[], Coroutine[None, None, None]]) -> None:
                await func()
                # noinspection PyAsyncCall
                task_group.cancel_scope.cancel()

            task_group.start_soon(wrap, partial(self.stream_response, send))
            task_group.start_soon(wrap, self.listen_for_exit_signal)

            if self.data_sender_callable:
                task_group.start_soon(self.data_sender_callable)

            await wrap(partial(self.listen_for_disconnect, receive))

        if self.background is not None:  # pragma: no cover, tested in StreamResponse
            await self.background()

It seems the first proposal would be an easy option to implement and provide some flexibility to the users of the library, what do you think?

If we agree on the approach and some naming, we can provide a pull request :)

Here are some results of the tests we performed.

Tests

I ran the following tests:

I started this script via uvicorn sse_test:app (I removed some details here; I know I wouldn't have to repeat the position 500 times)

import asyncio
import json
from fastapi import FastAPI, Request
from sse_starlette.sse import EventSourceResponse, EventSourceResponsePingOnCancel, EventSourceResponseNoPing
from starlette.responses import StreamingResponse


position = (json.dumps({
  "position_timestamp": "2023-09-19T11:25:35.286Z",
  "x": 0,
  "y": 0,
  "z": 0,
  "a": 0,
  "b": 0,
  "c": 0,
  # some more fields
}) + '\n')
positions = [position] * 500

sse_clients = 0

app = FastAPI()


@app.get('/stream')
async def message_stream(request: Request):

    async def event_generator():
        global sse_clients
        sse_clients += 1
        print(f"{sse_clients} sse clients connected", flush=True)
        while True:
            # If client closes connection, stop sending events
            if await request.is_disconnected():
                break

            for p in positions:
                yield p

    return EventSourceResponse(event_generator())

And then I connect some clients and count the returned lines.

For example one client via curl http://localhost:8000/stream | pv --line-mode --average-rate > /dev/null

Or 20 clients with a custom go script.

Result 1: current implementation with anyio.Lock

test via one curl connection (avg over 5 min):

95k/s

test via go 20 clients (running for 5 min):

Average number of received events per second: 4922
Max number of received events per second: 4923
min number of received events per second: 4922

Result 2: removing the ping task and the lock

test via one curl connection (avg over 5 min):

263k/s

test via go 20 clients (running for 5 min):

Average number of received events per second: 13636
Max number of received events per second: 13660
min number of received events per second: 13630

Result 3: handling the ping when a timeout occurs

test via one curl connection (avg over 5 min):

129k/s

test via go 20 clients (running for 5 min):

Average number of received events per second: 6095
Max number of received events per second: 6115
min number of received events per second: 6090

Speed-up

Since the actual numbers are not too relevant, here is the speed-up:

Speedup from test 1 to test 2:

13660 / 4923 = 2.774730855169612
 263 / 95 = 2.768421052631579

Speedup from test 1 to test 3:

6115 / 4923 = 1.2421287832622385
129 / 95 = 1.3578947368421053

Various test issues/ideas

Since you asked so nicely for ideas, I'll let you have a braindump of opinions 😇

How to actually check for the race condition, which is maybe the most important part, I unfortunately have no clue how to do 😅

But anyway, these are all/most of the issues I encountered when writing my PR. Some of it is subjective / not necessary; most is due to new releases of stuff breaking other stuff. I can give you program versions if there's stuff that you can't reproduce on your end - I'm running Arch Linux with pretty up-to-date stuff, although of course mostly it's just a fresh virtualenv with pip install -r requirements.txt.
As background, I've contributed to several Python FOSS repos in the last few months, so I've gotten a decent grasp of how most well-maintained FOSS repos do things.

uvicorn

As far as I can see, there's no documentation saying that you have to open 127.0.0.1:8000 in a web browser to initiate the uvicorn tests/examples. It would be nice to have that in a comment in the file and in the readme. But it feels like it should be possible to automate at least a bunch of them from within Python (or sh/the Makefile) by making a simple web request?

Makefile

There's quite a bit of overlap between make and tox, especially as tox only has a single environment, which reformats and runs tests with coverage. I'd personally maybe just put all the tests/checks in tox [in different environments], including type checking and linting. Or even better, use pre-commit for checks. I haven't seen ~any projects use makefiles, but I kinda dig it tbh!

Several of the packages needed by targets are not in requirements.txt (mypy, flake8, any flake8 plugins) - if going the Makefile route, I'd maybe add target[s] for installing/upgrading packages, and/or rename requirements.txt to test_requirements.txt.

Python packaging is kind of a mess atm but having all of Pipfile, requirements.txt, setup.cfg, setup.py, and pyproject.toml - and a Makefile, and many also put configs in tox.ini - is a bit confusing x)

make coverage

/home/h/Git/sse-starlette/.venv/lib/python3.10/site-packages/coverage/control.py:836: CoverageWarning: No data was collected. (no-data-collected)
  self._warn("No data was collected.", slug="no-data-collected")
python -m coverage report -m
No data to report.

I recently had similar issues - I'd recommend checking out pytest-dev/pytest-cov#98

make test

home/h/Git/sse-starlette/.venv/lib/python3.10/site-packages/coverage/inorout.py:507: CoverageWarning: Module /sse_starlette was never imported. (module-not-imported)
  self.warn(f"Module {pkg} was never imported.", slug="module-not-imported")
/home/h/Git/sse-starlette/.venv/lib/python3.10/site-packages/coverage/control.py:836: CoverageWarning: No data was collected. (no-data-collected)
  self._warn("No data was collected.", slug="no-data-collected")
WARNING: Failed to generate report: No data to report.

/home/h/Git/sse-starlette/.venv/lib/python3.10/site-packages/pytest_cov/plugin.py:311: CovReportWarning: Failed to generate report: No data to report.

  warnings.warn(CovReportWarning(message))

see above

make tox

works perfectly fine for me, except for missing 12-13% of coverage and it doesn't have environments for python 3.11. And ofc it'd be great if there was an environment for testing uvicorn as well.

make style / make black

black /sse_starlette tests
Usage: black [OPTIONS] SRC ...
Try 'black -h' for help.

Error: Invalid value for 'SRC ...': Path '/sse_starlette' does not exist.
make: *** [Makefile:74: format] Error 2

make isort

works perfectly fine

make lint

ValueError: Error code ';' supplied to 'ignore' option does not match '^[A-Z]{1,3}[0-9]{0,3}$'
make: *** [Makefile:85: flake8] Error 1

make mypy

# keep config in setup.cfg for integration with PyCharm
mypy --config-file setup.cfg /sse_starlette
mypy: can't read file '/sse_starlette': No such file or directory
make: *** [Makefile:90: mypy] Error 2

same error as black

black

reformatted /home/h/Git/sse-starlette/examples/stream_generator.py
reformatted /home/h/Git/sse-starlette/sse_starlette/sse.py

mypy

sse_starlette/sse.py:34: error: Cannot assign to a method  [assignment]
tests/conditional_yielding_endpoint.py:17: error: Need type annotation for "items" (hint: "items: Dict[<type>, <type>] = ...")  [var-annotated]
Found 2 errors in 2 files (checked 11 source files)

Cheers~

Question: SSE multicast?

Hi,

Is it possible to send SSE to multiple clients?

I see that if there are 2 clients connected, each of them receives only every 2nd message. I would like to broadcast SSE messages to every client connected to a given endpoint.

Thanks!
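The alternating messages are most likely because a single generator instance is being shared between the two responses; sse-starlette streams exactly one generator per response, so fan-out has to happen in application code. A minimal sketch of one way to do that, with one asyncio.Queue per connected client (all names here are illustrative, not library API):

import asyncio
from fastapi import FastAPI
from sse_starlette.sse import EventSourceResponse

app = FastAPI()
subscribers = set()  # one asyncio.Queue per connected client

async def broadcast(message: dict) -> None:
    # Put the message on every subscriber's queue; copy the set so a
    # client disconnecting mid-iteration doesn't break the loop.
    for queue in list(subscribers):
        await queue.put(message)

@app.get("/events")
async def events():
    queue: asyncio.Queue = asyncio.Queue()
    subscribers.add(queue)

    async def event_stream():
        try:
            while True:
                # Each client drains only its own queue, so every
                # client sees every broadcast message.
                yield await queue.get()
        finally:
            subscribers.discard(queue)  # clean up on disconnect

    return EventSourceResponse(event_stream())

Calling await broadcast({"data": "hello"}) from any handler or background task then delivers the event to every open connection.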

sse-starlette doesn't respect uvicorn timeout_graceful_shutdown

Issue

sse-starlette doesn't seem to respect uvicorn.server.Server.config.timeout_graceful_shutdown.

From some digging, my understanding of the issue is as follows:

  • sse_starlette monkey patches uvicorn's signal handler
    • This configures a class called AppStatus, which holds an event should_exit_event
    • The monkey patched signal handler sets should_exit_event
  • When EventSourceResponse gets __call__ed, it creates an anyio task group with three tasks:
    1. Stream the response
    2. Ping on the connection every 15s (to avoid a behavior where proxies sometimes auto disconnect after some period of inactivity)
    3. Wait for should_exit_event to be set
  • If any of these three tasks exit (i.e. streaming the response finishes, pinging fails, the exit signal handler is triggered), all three tasks get immediately canceled.

The problem is specifically that if should_exit_event is set, then the tasks are immediately canceled without respecting uvicorn.server.Server.config.timeout_graceful_shutdown.

Possible solutions

  • Add some asyncio.sleep in AppStatus.handle_exit, and possibly a force_exit mechanism similar to uvicorn's to handle receiving multiple signals (see the sketch below)
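A rough sketch of that idea, assuming the AppStatus/should_exit_event names described above (the real internals may differ, and GRACE_PERIOD is a stand-in for uvicorn's timeout_graceful_shutdown, which would ideally be read from the server config):

import asyncio
from typing import Optional

GRACE_PERIOD = 10.0  # stand-in for uvicorn's timeout_graceful_shutdown

class AppStatus:
    # Sketch of the monkey-patched exit handling described above.
    should_exit = False
    should_exit_event: Optional[asyncio.Event] = None

    @staticmethod
    def handle_exit(*args, **kwargs) -> None:
        # Assumes the handler runs on the event loop thread, as it does
        # when registered via loop.add_signal_handler().
        if AppStatus.should_exit:
            # Second signal: force exit immediately, mirroring uvicorn's
            # own force_exit behavior.
            if AppStatus.should_exit_event is not None:
                AppStatus.should_exit_event.set()
            return
        AppStatus.should_exit = True
        if AppStatus.should_exit_event is not None:
            # First signal: give in-flight streams a grace period before
            # the exit event cancels their task groups.
            loop = asyncio.get_running_loop()
            loop.call_later(GRACE_PERIOD, AppStatus.should_exit_event.set)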

sse-starlette 1.6.1 appears to leak asyncio.Lock across instantiations

With sse-starlette 1.6.1, I'm getting new test failures that all look like this:

.tox/py/lib/python3.11/site-packages/sse_starlette/sse.py:237: in __call__
    async with anyio.create_task_group() as task_group:
.tox/py/lib/python3.11/site-packages/anyio/_backends/_asyncio.py:662: in __aexit__
    raise exceptions[0]
.tox/py/lib/python3.11/site-packages/sse_starlette/sse.py:240: in wrap
    await func()
.tox/py/lib/python3.11/site-packages/sse_starlette/sse.py:215: in listen_for_exit_signal
    await AppStatus.should_exit_event.wait()
.tox/py/lib/python3.11/site-packages/anyio/_backends/_asyncio.py:1842: in wait
    if await self._event.wait():
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/asyncio/locks.py:210: in wait
    fut = self._get_loop().create_future()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <asyncio.locks.Event object at 0x7ff6f15d1bd0 [unset]>

    def _get_loop(self):
        loop = events._get_running_loop()
    
        if self._loop is None:
            with _global_lock:
                if self._loop is None:
                    self._loop = loop
        if loop is not self._loop:
>           raise RuntimeError(f'{self!r} is bound to a different event loop')
E           RuntimeError: <asyncio.locks.Event object at 0x7ff6f15d1bd0 [unset]> is bound to a different event loop

I think this may be related to #57. It looks like should_exit_event may not be properly cleared between tests, which causes the failure because each test uses a separate event loop.

Reverting sse-starlette to 1.6.0 makes this problem go away again.
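As a stopgap in a test suite, an autouse fixture can clear the module-level event between tests (a workaround sketch that pokes at sse_starlette internals, not an official API):

import pytest
from sse_starlette.sse import AppStatus

@pytest.fixture(autouse=True)
def reset_sse_starlette_appstatus_event():
    # Clear the leaked event so each test's event loop lazily creates a
    # fresh one instead of reusing an Event bound to a previous loop.
    AppStatus.should_exit_event = None
    yield
    AppStatus.should_exit_event = None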

JSON objects are sent using single quotes causing JSON parsing failure on browser side

I am trying to send a dictionary object to the browser, but JSON.parse fails on the browser side, saying single quotes are not allowed.

The issue is in the following code, which converts the dictionary to a string via str() (resulting in single quotes):

for chunk in self.LINE_SEP_EXPR.split(str(self.data)):
    buffer.write(f"data: {chunk}")
    buffer.write(self._sep)

Could you check the data type and use json.dumps for dicts, or provide this as a configuration option?
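In the meantime, serializing the payload before yielding works around it (a sketch; the payload and generator name are illustrative):

import json

async def event_generator():
    payload = {"status": "ok", "count": 3}  # illustrative payload
    # Serialize to valid JSON ourselves so the library's str() call never
    # sees a raw dict (which would produce single-quoted output).
    yield {"data": json.dumps(payload)}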

Client disconnect revisited, role of competing detection mechanisms

This is more a request for clarification than a bug report, because my code (inspired by the example) works fine and the issue has already been thoroughly discussed in #7. However, after commit 0f69f56 the example seems to suggest two different ways of detecting a client disconnect. In my implementation only the CancelledError at line 67 is reached; the req.is_disconnected() check at line 59 appears unreachable in the case of an actual disconnect.

try:
    while True:
        disconnected = await req.is_disconnected()
        if disconnected:
            _log.info(f"Disconnecting client {req.client}")
            break
        # yield dict(id=..., event=..., data=...)
        i += 1
        yield dict(data=i)
        await asyncio.sleep(0.9)
    _log.info(f"Disconnected from client {req.client}")
except asyncio.CancelledError as e:
    _log.info(f"Disconnected from client (via refresh/close) {req.client}")
    # Do any other cleanup, if any
    raise e

What is the role of each of these disconnect-detection mechanisms, and are both needed? Since the topic is also featured in README.md, it might be worth an extra clarification there.

I have a FastAPI stream request, and when I call it from the client I can interrupt the request. How do I detect that interruption on the FastAPI server?

import uvicorn
import asyncio
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from sse_starlette.sse import EventSourceResponse

times = 0
app = FastAPI()

origins = [
    "*"
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


@app.get("/sse/data")
async def root(request: Request):
    print(request.client.host)
    event_generator = status_event_generator(request)
    return EventSourceResponse(event_generator)


status_stream_delay = 1  # seconds
status_stream_retry_timeout = 30000  # milliseconds


async def status_event_generator(request):
    global times
    while True:
        if times <= 5:
            yield {
                "event": "message",
                "retry": status_stream_retry_timeout,
                "data": "data:" + "times" + str(times) + "\n\n"
            }
        print(times)
        times += 1
        if times > 5:
            return
        await asyncio.sleep(status_stream_delay)


if __name__ == '__main__':
    uvicorn.run("fastapi_sse:app", host="0.0.0.0", port=5000, log_level="info", reload=True, forwarded_allow_ips='*',
                proxy_headers=True)


Put id: after data:

https://javascript.info/server-sent-events makes a good point to:

Put id: after data:
Please note: the id is appended below message data by the server, to ensure that lastEventId is updated after the message is received.

However, that does not seem to be the case in the current implementation:

if self.id is not None:
    buffer.write(self.LINE_SEP_EXPR.sub("", f"id: {self.id}"))
    buffer.write(self._sep)
if self.event is not None:
    buffer.write(self.LINE_SEP_EXPR.sub("", f"event: {self.event}"))
    buffer.write(self._sep)
if self.data is not None:
    for chunk in self.LINE_SEP_EXPR.split(str(self.data)):
        buffer.write(f"data: {chunk}")
        buffer.write(self._sep)
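A minimal sketch of the suggested reordering, i.e. the same snippet with the id: block moved after the data: block:

if self.event is not None:
    buffer.write(self.LINE_SEP_EXPR.sub("", f"event: {self.event}"))
    buffer.write(self._sep)
if self.data is not None:
    for chunk in self.LINE_SEP_EXPR.split(str(self.data)):
        buffer.write(f"data: {chunk}")
        buffer.write(self._sep)
# Write id: after data: so the client's lastEventId is only updated once
# the complete message data has been received.
if self.id is not None:
    buffer.write(self.LINE_SEP_EXPR.sub("", f"id: {self.id}"))
    buffer.write(self._sep)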

EventSourceResponse should accept AsyncIterable[ServerSentEvent]?

Hi,

Since version 1.8.1, the EventSourceResponse constructor no longer accepts AsyncIterable[ServerSentEvent] for the content argument. The code inside looks compatible (ensure_bytes converts ServerSentEvent to bytes), but the typing does not allow it. ServerSentEvent is now exposed only in the ping_message_factory type.
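A minimal illustration of the pattern that still works at runtime but is now rejected by type checkers (the event fields are illustrative):

from sse_starlette.sse import EventSourceResponse, ServerSentEvent

async def events():
    # Works at runtime, because ensure_bytes converts ServerSentEvent to
    # bytes, but from 1.8.1 the content parameter's annotation no longer
    # admits AsyncIterable[ServerSentEvent], so mypy flags this.
    yield ServerSentEvent(data="hello", event="greeting")

response = EventSourceResponse(events())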

py.typed file missing in 1.x versions

Hello, I have attempted to bump sse-starlette in my repository to 1.1.1 (upgrading from 0.10.3), and I got the following error:

error: Skipping analyzing "sse_starlette.sse": module is installed, but missing library stubs or py.typed marker
note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
error: Skipping analyzing "sse_starlette.sse": module is installed, but missing library stubs or py.typed marker

I tried downgrading to 1.1.0 and then 1.0.0, and they all have the same issue. Looking into the bundle as downloaded from PyPI, the py.typed file is no longer being included.

Comparing the downloaded bundles: [screenshots of the package contents; py.typed is present in the 0.10.3 bundle but missing from 1.1.1]

I had a look at the code to see if anything obvious had changed from the previous version that could cause this behaviour, and the setup.py and Manifest.in files seem correct to me. The only thing is that maybe Manifest.in should be uppercase (MANIFEST.in), but I don't see how that could really cause a problem.
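For reference, the usual PEP 561 recipe for shipping the marker looks roughly like this (a sketch, not necessarily what this repo does):

# setup.py sketch: PEP 561 requires py.typed to be packaged with the code.
from setuptools import setup

setup(
    name="sse-starlette",
    packages=["sse_starlette"],
    package_data={"sse_starlette": ["py.typed"]},  # ship the marker file
    include_package_data=True,  # also honor MANIFEST.in for sdists
    zip_safe=False,  # type checkers can't read py.typed from zipped installs
)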
