perdy / starlette-prometheus
Stars: 267 · Watchers: 5 · Forks: 30 · Size: 206 KB

Prometheus integration for Starlette.

License: GNU General Public License v3.0

Languages: Python 87.30%, JavaScript 12.70%
Topics: starlette, prometheus, metrics, middleware, asgi

starlette-prometheus's Issues

Should exceptions also be considered for responses?

Hi, and thanks for your middleware. It is very useful and saves so much time!

I would like to start a discussion about the current implementation, which can be a little confusing with regard to error handling. In my use case I run this middleware along with FastAPI and it works great, but I would expect the response-related metrics to reflect errors as well. They do not when an exception leads to an HTTP 500 status, because of the current flow:

except Exception as e:
    EXCEPTIONS.labels(method=method, path_template=path_template, exception_type=type(e).__name__).inc()
    raise e from None
else:
    REQUESTS_PROCESSING_TIME.labels(method=method, path_template=path_template).observe(after_time - before_time)
    RESPONSES.labels(method=method, path_template=path_template, status_code=response.status_code).inc()
finally:
    REQUESTS_IN_PROGRESS.labels(method=method, path_template=path_template).dec()

Would it be relevant to also report exceptions in the response metrics, inferring a 500 status code?
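
For illustration only, a minimal sketch of what that could look like (a hypothetical change, not the project's actual code): the exception is still counted and re-raised, but the failed request is also recorded as a 500 response so it shows up alongside the other response counts:

except Exception as e:
    EXCEPTIONS.labels(method=method, path_template=path_template, exception_type=type(e).__name__).inc()
    # Hypothetical addition: also count the failed request as a 500 response.
    RESPONSES.labels(method=method, path_template=path_template, status_code=500).inc()
    raise e from None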

Submounted routes use incorrect path in labels

When routes are submounted, only the mount prefix is used in the path_template label rather than the full path template.

Running the following app:

from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.responses import Response
from starlette.routing import Mount, Route
from starlette_prometheus import PrometheusMiddleware, metrics


async def foo(request):
    return Response()


async def bar_baz(request):
    return Response()


routes = [
    Route("/foo", foo),
    Mount("/bar", Route("/baz", bar_baz)),
    Route("/metrics", metrics),
]
middleware = [Middleware(PrometheusMiddleware)]
app = Starlette(routes=routes, middleware=middleware)

Then making the following requests:

$ curl localhost:8000/foo
$ curl localhost:8000/bar/baz
$ curl localhost:8000/metrics

Gives the following output (I only included one metric as an example, but it is the same for all of them). Note that the request to localhost:8000/bar/baz has a path_template label of /bar.

starlette_requests_total{method="GET",path_template="/foo"} 1.0
starlette_requests_total{method="GET",path_template="/bar"} 1.0
starlette_requests_total{method="GET",path_template="/metrics"} 1.0

Detail how to visualize the Prometheus metrics

I'm a new user to Starlette, and looking to monitor some Gunicorn processes for my Starlette server. This library looks promising, and I've successfully integrated and viewed the plain text stats at /metrics.

However, I'd like a better visualization of these performance metrics. I've looked at integrating Grafana, but am having difficulty (https://prometheus.io/docs/visualization/grafana/ looks promising).

I'm looking for the most basic level of monitoring; the console templates at https://prometheus.io/docs/visualization/consoles/ look promising.

It'd be really nice to have the following:

  • A couple of sentences describing the configuration that Grafana needs in order to use starlette-prometheus (which I suspect is really just Prometheus configuration); see the scrape-config sketch after the snippet below.
  • Basic integration with visualizations. I'd like to see some basic graphs of the stats at /metrics on a simple HTML page. I think I'd like to see an interface like this:
from starlette.applications import Starlette
from starlette_prometheus import metrics, metric_viz, PrometheusMiddleware

app = Starlette()
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics/", metrics)
app.add_route("/metric-viz/", metric_viz)

Grafana Dashboard

It would be nice to have a sample/reference Grafana dashboard in this project.

Error when raising exception in FastAPI: UnboundLocalError: local variable 'status_code' referenced before assignment

Hi,

I'm seeing an issue with FastAPI, where I am raising an exception in a route handler. I've created a small reproducer:

from fastapi import FastAPI
from starlette.middleware import Middleware
from starlette_prometheus import PrometheusMiddleware


middleware = [
    Middleware(PrometheusMiddleware)
]

app = FastAPI(middleware=middleware)

@app.get("/")
def read_root():
    raise ValueError("Test error")
    # return {"Hello": "World"}

Here's the output from running the reproducer and calling it with curl localhost:8000/:

$ uvicorn example:app                             
INFO:     Started server process [5099]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
    return await self.app(scope, receive, send)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/fastapi/applications.py", line 208, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/applications.py", line 112, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 57, in __call__
    task_group.cancel_scope.cancel()
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 572, in __aexit__
    raise ExceptionGroup(exceptions)
anyio._backends._asyncio.ExceptionGroup: 2 exceptions were raised in the task group:
----------------------------
Traceback (most recent call last):
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 30, in coro
    await self.app(scope, request.receive, send_stream.send)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/routing.py", line 656, in __call__
    await route.handle(scope, receive, send)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/routing.py", line 259, in handle
    await self.app(scope, receive, send)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/routing.py", line 61, in app
    response = await func(request)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 226, in app
    raw_response = await run_endpoint_function(
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/concurrency.py", line 39, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/to_thread.py", line 28, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(func, *args, cancellable=cancellable,
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 818, in run_sync_in_worker_thread
    return await future
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 754, in run
    result = context.run(func, *args)
  File "./example.py", line 14, in read_root
    raise ValueError("Test error")
ValueError: Test error
----------------------------
Traceback (most recent call last):
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette_prometheus/middleware.py", line 53, in dispatch
    response = await call_next(request)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 35, in call_next
    message = await recv_stream.receive()
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/streams/memory.py", line 89, in receive
    await receive_event.wait()
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 1655, in wait
    await checkpoint()
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 440, in checkpoint
    await sleep(0)
  File "/Users/krisb/.pyenv/versions/3.8.9/lib/python3.8/asyncio/tasks.py", line 644, in sleep
    await __sleep0()
  File "/Users/krisb/.pyenv/versions/3.8.9/lib/python3.8/asyncio/tasks.py", line 638, in __sleep0
    yield
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette/middleware/base.py", line 55, in __call__
    response = await self.dispatch_func(request, call_next)
  File "/Users/krisb/Code/temp/starlette-prometheus-repro/.venv/lib/python3.8/site-packages/starlette_prometheus/middleware.py", line 65, in dispatch
    RESPONSES.labels(method=method, path_template=path_template, status_code=status_code).inc()
UnboundLocalError: local variable 'status_code' referenced before assignment
`pip freeze` output
anyio==3.4.0
asgiref==3.4.1
click==8.0.3
fastapi==0.70.1
h11==0.12.0
idna==3.3
prometheus-client==0.11.0
pydantic==1.9.0
sniffio==1.2.0
starlette==0.16.0
starlette-prometheus==0.8.0
typing-extensions==4.0.1
uvicorn==0.16.0

It seems like Starlette has started raising asyncio.exceptions.CancelledError, which is not derived from the Exception caught here, but rather from BaseException.

I believe this was introduced in version 0.15.0 of Starlette, in PR encode/starlette#1157.

I've tried changing the exception handling to catch both, i.e. except (Exception, asyncio.exceptions.CancelledError), and this seems to restore the expected behavior.
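
As a rough sketch of that workaround (illustrative, not a patch from the project), the exception handling in the middleware's dispatch would become something like:

import asyncio

try:
    response = await call_next(request)
except (Exception, asyncio.exceptions.CancelledError) as e:
    # CancelledError derives from BaseException on Python 3.8+, so it has to
    # be listed explicitly; count it and re-raise, as before.
    EXCEPTIONS.labels(method=method, path_template=path_template, exception_type=type(e).__name__).inc()
    raise e from None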

Grafana dashboard

Hi! Is there an official Grafana dashboard? Or any Grafana dashboard at all?

ImportError: cannot import name 'ASGIInstance' from 'starlette.types' after starlette was updated to 0.12.0

In Starlette 0.12.0, ASGIInstance was removed from types.py, so importing the middleware now raises an exception:

File "/home/max/work/indigo-backend/main.py", line 8, in <module>
    from starlette_prometheus import metrics, PrometheusMiddleware
File "/home/max/work/indigo-backend/env/lib/python3.7/site-packages/starlette_prometheus/__init__.py", line 1, in <module>
    from starlette_prometheus.middleware import PrometheusMiddleware
  File "/home/max/work/indigo-backend/env/lib/python3.7/site-packages/starlette_prometheus/middleware.py", line 6, in <module>
    from starlette.types import ASGIInstance
ImportError: cannot import name 'ASGIInstance' from 'starlette.types' (/home/max/work/indigo-backend/env/lib/python3.7/site-packages/starlette/types.py)

How to add a custom metric?

Is there a provision in this plugin for adding a custom metric? We are using FastAPI to host a custom application, and I need the ability to add a custom metric.

Any info on how to do this would be good.

Thanks
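
The /metrics endpoint exposes whatever is registered with prometheus_client's default registry, so one way to do this is to define the metric directly with prometheus_client and update it in your route handlers. A minimal sketch (the metric name, label, and route below are made up for illustration):

from fastapi import FastAPI
from prometheus_client import Counter
from starlette_prometheus import PrometheusMiddleware, metrics

# Hypothetical custom metric; anything registered with the default
# prometheus_client registry is served by the same /metrics endpoint.
ORDERS_CREATED = Counter(
    "myapp_orders_created_total",
    "Total number of orders created",
    ["tenant_id"],
)

app = FastAPI()
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics", metrics)

@app.post("/orders")
def create_order():
    ORDERS_CREATED.labels(tenant_id="acme").inc()
    return {"status": "created"}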

Dependencies conflict with prometheus_client

Hello!
I've tried to use starlette_prometheus with prometheus-client==0.16.0, but there is a dependency conflict:


ERROR: Cannot install -r requirements.txt (line 20) and prometheus-client==0.16.0 because these package versions have conflicting dependencies.

The conflict is caused by:
    The user requested prometheus-client==0.16.0
    starlette-prometheus 0.9.0 depends on prometheus_client<0.13 and >=0.12

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

Is there any reason starlette_prometheus has to be pinned to prometheus-client 0.12.x?

How do I disable logging for a specific path?

I am using PrometheusMiddleware from starlette_prometheus. Every second or so it generates another access-log entry, and this grows the log file.

How do I disable this specific log output?

INFO:     127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
INFO:     127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
INFO:     127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
INFO:     127.0.0.1:57304 - "GET /metrics HTTP/1.1" 200 OK
INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
............several  million times .............................
INFO:     127.0.0.1:57310 - "GET /metrics HTTP/1.1" 200 OK
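
Those lines come from the ASGI server's access logger rather than from the middleware itself. One common approach (a sketch, assuming Uvicorn and its "uvicorn.access" logger) is to attach a logging filter that drops records for the metrics path:

import logging

class MetricsEndpointFilter(logging.Filter):
    """Drop access-log records for requests to the /metrics endpoint."""

    def filter(self, record: logging.LogRecord) -> bool:
        return "/metrics" not in record.getMessage()

logging.getLogger("uvicorn.access").addFilter(MetricsEndpointFilter())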

Consider bumping requirement versions

Hi, there.

In short, I created a new project over the weekend and started using the latest prometheus_client==0.8.0. After also adding starlette-prometheus, there's a version conflict:

ERROR: starlette-prometheus 0.7.0 has requirement prometheus_client<0.8,>=0.7, but you'll have prometheus-client 0.8.0 which is incompatible.

Can you consider bumping requirement versions so we can use the latest versions?

Releasing new version with updated prometheus-client dep?

Hello! I have a dependency conflict because the current version on PyPI, 0.7.0, still lists the prometheus-client dependency as <0.8 in Poetry, but another package needs prometheus-client >=0.8.
I see that the updated dependency has already been merged since August; would it be possible to release something like v0.7.1 with this dependency?

[FEATURE] Path template instead of actual path in metrics

Hi, there!

Thanks for a great middleware! I've been using it for a while and now I want to show response time by URL in Grafana. It works well with regular paths, like /users, but not with templated paths like /users/{id}, because in /metrics they appear as actual paths (/users/1, /users/2, etc...)

I've made a quick pull request #6 for this. Let me know what you think of the idea, and feel free to decline it if it doesn't fit.

Bug: Does not work with multiprocessing

As soon as prometheus_multiproc_dir comes into play, the Prometheus client stops working and no metrics are returned. The request to the endpoint succeeds (200), but the body is completely empty.

  1. Set up a new virtual environment:
pyenv virtualenv venv_test_prometheus_multiproc
pyenv activate venv_test_prometheus_multiproc
pip install starlette-prometheus fastapi requests

Here are the exact requirements:

certifi==2020.6.20
chardet==3.0.4
fastapi==0.59.0
idna==2.10
prometheus-client==0.7.1
pydantic==1.6
requests==2.24.0
starlette==0.13.4
starlette-prometheus==0.7.0
urllib3==1.25.9
  2. Execute the following Python script:
from fastapi import FastAPI
from starlette.testclient import TestClient
from starlette_prometheus import metrics, PrometheusMiddleware
import tempfile
import os

with tempfile.TemporaryDirectory() as tmpdir:
    print('created temporary directory', tmpdir)
    os.environ["prometheus_multiproc_dir"] = tmpdir

    app = FastAPI()

    @app.get("/")
    def read_root():
        return {"Hello": "World"}

    @app.get("/something")
    def read_something():
        return {"Hello": "something"}

    app.add_middleware(PrometheusMiddleware)
    app.add_route("/metrics/", metrics)

    client = TestClient(app)

    print(client.get("/").content)
    print(client.get("/").content)
    print(client.get("/something").content)

    print(client.get("/metrics/").content)

You will see the following output:

$ python main.py 
created temporary directory /tmp/tmp8kcanvsd
b'{"Hello":"World"}'
b'{"Hello":"World"}'
b'{"Hello":"something"}'
b''

The metrics endpoint returns nothing.
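
One likely factor (an assumption, not a confirmed diagnosis): prometheus_client picks its value-storage implementation when it is first imported, so prometheus_multiproc_dir generally has to be set before anything imports the client library. A sketch of the reproducer reordered along those lines:

import os
import tempfile

# Set the multiprocess directory BEFORE importing starlette_prometheus,
# which pulls in prometheus_client; the client reads this variable at import time.
tmpdir = tempfile.mkdtemp()
os.environ["prometheus_multiproc_dir"] = tmpdir

from fastapi import FastAPI
from starlette.testclient import TestClient
from starlette_prometheus import PrometheusMiddleware, metrics

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics/", metrics)

client = TestClient(app)
client.get("/")
print(client.get("/metrics/").content)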

Custom labels for the built-in starlette-prometheus metrics

I want a way to inject custom labels like app_id/tenant_id for a SaaS application, so that it is easier to visualize the number of requests and other metrics for a specific tenant.

Note: I can do this by creating a custom metric, but I am wondering whether there is a way to add a custom label (tenant_id) and tell starlette-prometheus to pick it up from the route handlers, or whether it could be provided as a dependency or something similar.
