pydantic / logfire

Uncomplicated Observability for Python and beyond! 🪵🔥

Home Page: https://docs.pydantic.dev/logfire/

License: MIT License

Python 99.87% Makefile 0.13%
fastapi logging metrics observability openai opentelemetry pydantic python trace

logfire's Introduction

Pydantic


Data validation using Python type hints.

Fast and extensible, Pydantic plays nicely with your linters/IDE/brain. Define how data should be in pure, canonical Python 3.8+; validate it with Pydantic.

Pydantic Company 🚀

We've started a company based on the principles that I believe have led to Pydantic's success. Learn more from the Company Announcement.

Pydantic V1.10 vs. V2

Pydantic V2 is a ground-up rewrite that offers many new features, performance improvements, and some breaking changes compared to Pydantic V1.

If you're using Pydantic V1 you may want to look at the pydantic V1.10 documentation or the 1.10.X-fixes git branch. Pydantic V2 also ships with the latest version of Pydantic V1 built in so that you can incrementally upgrade your code base and projects: from pydantic import v1 as pydantic_v1.
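
For example, a minimal sketch of that incremental pattern (model names are illustrative):

from pydantic import BaseModel          # V2 API
from pydantic import v1 as pydantic_v1  # bundled V1 API


class LegacyUser(pydantic_v1.BaseModel):  # still validated by Pydantic V1
    id: int


class User(BaseModel):  # new code targets V2
    id: int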

Help

See documentation for more details.

Installation

Install using pip install -U pydantic or conda install pydantic -c conda-forge. For more installation options to make Pydantic even faster, see the Install section in the documentation.

A Simple Example

from datetime import datetime
from typing import List, Optional
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str = 'John Doe'
    signup_ts: Optional[datetime] = None
    friends: List[int] = []

external_data = {'id': '123', 'signup_ts': '2017-06-01 12:22', 'friends': [1, '2', b'3']}
user = User(**external_data)
print(user)
#> User id=123 name='John Doe' signup_ts=datetime.datetime(2017, 6, 1, 12, 22) friends=[1, 2, 3]
print(user.id)
#> 123
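
If validation fails, Pydantic raises an error describing each bad field. A minimal follow-on sketch (not part of the original snippet):

from pydantic import ValidationError

try:
    User(id='not an int', signup_ts='broken')
except ValidationError as e:
    # one error for `id` (not an integer) and one for `signup_ts` (not a datetime)
    print(e.errors())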

Contributing

For guidance on setting up a development environment and how to make a contribution to Pydantic, see Contributing to Pydantic.

Reporting a Security Vulnerability

See our security policy.

logfire's People

Contributors

adriangb, alexmojaki, bossenti, davidhewitt, dmontagu, e-hosseini, elisalimli, elkiwa, eltociear, frankie567, hattajr, hramezani, inspirsmith, kludex, kpcofgs, lig, rishabgit, samuelcolvin, syniex, tlpinney, willbakst


logfire's Issues

logfire auth

Question

I was trying logfire on a FastAPI app hosted on render.com, and I got "You're not authenticated, run logfire auth" in the Render logs.
So, is there a way I can pass the auth credentials as an environment variable?
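
A minimal sketch of the environment-variable approach being asked about, assuming a write token created in the Logfire UI (the variable name here is just a convention, not necessarily one the SDK reads automatically):

import os

import logfire

# Read a write token from an env var set in the Render dashboard, instead of
# relying on the credentials file that `logfire auth` creates locally.
logfire.configure(token=os.environ["LOGFIRE_TOKEN"])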

Timings when using async SQLAlchemy/psycopg3 aren't correct

Description

Posting this from a request on Slack.

It appears that when using an async engine with SQLAlchemy + psycopg3, timing information is lost or incorrect in Logfire. See the two screenshots below: one uses an async engine (the one with 1ms/0ms timings), the other uses the sync engine (correct timings).
Screenshot 2024-05-01 at 14 43 26
Screenshot 2024-05-01 at 14 43 44

I would expect to see timing information for queries made using the async varieties of these packages.
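
For reference, a minimal sketch of the kind of async setup involved, assuming the OpenTelemetry SQLAlchemy instrumentor (connection URL and names are placeholders, not the reporter's exact code):

import logfire
from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor
from sqlalchemy.ext.asyncio import create_async_engine

logfire.configure()

# psycopg3 async DSN (placeholder credentials)
engine = create_async_engine("postgresql+psycopg://user:pass@localhost/db")

# For an AsyncEngine, the OTel instrumentor is given the underlying sync engine.
SQLAlchemyInstrumentor().instrument(engine=engine.sync_engine)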

Python, Logfire & OS Versions, related packages

logfire="0.28.0"
platform="macOS-14.1.1-arm64-arm-64bit"
python="3.12.1 (main, Dec  7 2023, 20:45:44) [Clang 15.0.0 
(clang-1500.0.40.1)]"
[related_packages]
requests="2.31.0"
pydantic="2.7.1"
fastapi="0.110.2"
protobuf="4.25.3"
rich="13.7.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-instrumentation-asgi="0.45b0"
opentelemetry-instrumentation-asyncio="0.45b0"
opentelemetry-instrumentation-dbapi="0.45b0"
opentelemetry-instrumentation-fastapi="0.45b0"
opentelemetry-instrumentation-grpc="0.45b0"
opentelemetry-instrumentation-jinja2="0.45b0"
opentelemetry-instrumentation-psycopg="0.45b0"
opentelemetry-instrumentation-redis="0.45b0"
opentelemetry-instrumentation-sqlalchemy="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-test-utils="0.45b0"
opentelemetry-util-http="0.45b0"

Links to GitHub code source

Description

You'll be able to go to your GitHub repository directly from the Logfire UI, and see the code of a logfire call or exception.

Please let us know if you are interested in this, so we can prioritize it.

Email integration for alerts

Description

We support slack, and webhooks (with slack format only), but we still don't support email integration.

Please let us know if you are interested in this, so we can prioritize it.

Can we please help with stripe?

Question

help please

I cannot receive the payments made to my Stripe account. I have made changes many times and the problem has only gotten bigger; now, even though I have entered my information correctly, I get an address verification error and cannot activate my account, so I cannot receive the money in the account. They banned me; I think it has been a year.

My other problem is that I want to access the control panel from an e-mail sent on a past date, but there is no e-mail and I cannot find it, so I cannot activate my account.

While creating a new membership, before switching from test mode to live mode, it says that the restriction is not acceptable to confirm. Could it be because my API address is blocked? I cannot create a healthy membership. Stripe does not provide any kind of service in Turkey. Should I sign up with another country's information over a VPN? Even if I just send 1 dollar to try it, it gets restricted.

When I connect it to my website, I cannot work because there will be restrictions when payments are made. I integrate it in 2D and there are restrictions when there are incoming payments. Is there anyone who can help me with this? I do not want to connect the APIs to the website and then have to chase these Stripe approvals.

Error logging FastAPI arguments with async SQLAlchemy

Description

Full traceback:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/integrations/fastapi.py", line 185, in solve_dependencies
    self.logfire_instance.log(level, 'FastAPI arguments', attributes=attributes)
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/main.py", line 547, in log
    if json_schema_properties := attributes_json_schema_properties(attributes):
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/json_schema.py", line 160, in attributes_json_schema_properties
    {key: create_json_schema(value, set()) for key, value in attributes.items() if key not in STACK_INFO_KEYS}
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/json_schema.py", line 160, in <dictcomp>
    {key: create_json_schema(value, set()) for key, value in attributes.items() if key not in STACK_INFO_KEYS}
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/json_schema.py", line 118, in create_json_schema
    return _mapping_schema(obj, seen)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/json_schema.py", line 212, in _mapping_schema
    **_properties({(k if isinstance(k, str) else safe_repr(k)): v for k, v in obj.items()}, seen),
      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/json_schema.py", line 328, in _properties
    if (value_schema := create_json_schema(value, seen)) not in PLAIN_SCHEMAS:
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/json_schema.py", line 120, in create_json_schema
    return _dataclass_schema(obj, seen)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/json_schema.py", line 167, in _dataclass_schema
    return _custom_object_schema(obj, 'dataclass', (field.name for field in dataclasses.fields(obj)), seen)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/json_schema.py", line 342, in _custom_object_schema
    **_properties({key: getattr(obj, key) for key in keys}, seen),
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/logfire/_internal/json_schema.py", line 342, in <dictcomp>
    **_properties({key: getattr(obj, key) for key in keys}, seen),
                        ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py", line 566, in __get__
    return self.impl.get(state, dict_)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py", line 1086, in get
    value = self._fire_loader_callables(state, key, passive)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py", line 1121, in _fire_loader_callables
    return self.callable_(state, passive)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/strategies.py", line 967, in _load_for_state
    return self._emit_lazyload(
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/strategies.py", line 1130, in _emit_lazyload
    result = session.execute(
             ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2306, in execute
    return self._execute_internal(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2191, in _execute_internal
    result: Result[Any] = compile_state_cls.orm_execute_statement(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement
    result = conn.execute(
             ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1422, in execute
    return meth(
           ^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 514, in _execute_on_connection
    return connection._execute_clauseelement(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1644, in _execute_clauseelement
    ret = self._execute_context(
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1850, in _execute_context
    return self._exec_single_context(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1990, in _exec_single_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2360, in _handle_dbapi_exception
    raise exc_info[1].with_traceback(exc_info[2])
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1971, in _exec_single_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 919, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 572, in execute
    self._adapt_connection.await_(
  File "/usr/local/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 122, in await_only
    raise exc.MissingGreenlet(
sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/20/xd2s)

This seems to occur whenever we call a FastAPI endpoint that uses a FastAPI dependency that returns an instance of one of our SQLAlchemy models. We use asyncio with SQLAlchemy, and have to load any relationships with await instance.awaitable_attrs.attr, since if we do IO otherwise we get the same sort of sqlalchemy.exc.MissingGreenlet error.
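
For context, a minimal sketch of the awaitable_attrs pattern referred to above (model and relationship names are hypothetical):

from sqlalchemy import ForeignKey
from sqlalchemy.ext.asyncio import AsyncAttrs
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Base(AsyncAttrs, DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    posts: Mapped[list["Post"]] = relationship(back_populates="user")


class Post(Base):
    __tablename__ = "posts"
    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
    user: Mapped["User"] = relationship(back_populates="posts")


async def load_posts(user: User) -> list["Post"]:
    # Under asyncio, lazy loads must go through awaitable_attrs; plain attribute
    # access (e.g. instrumentation calling getattr) raises MissingGreenlet as above.
    return await user.awaitable_attrs.posts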

Python, Logfire & OS Versions, related packages

logfire="0.28.0"
platform="macOS-14.4.1-arm64-arm-64bit"
python="3.11.3 (main, Jun 29 2023, 17:08:14) [Clang 14.0.3 (clang-1403.0.22.14.1)]"
[related_packages]
requests="2.31.0"
pydantic="2.7.1"
fastapi="0.110.3"
protobuf="4.25.3"
rich="13.7.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-instrumentation-asgi="0.45b0"
opentelemetry-instrumentation-fastapi="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-util-http="0.45b0"

`logfire-noop`

Description

My idea is that we release logfire-noop: a package with no dependencies that allows third-party libraries to integrate with logfire while still giving their users complete choice over whether to actually use Logfire.

The idea is that logfire-noop would contain two modules: logfire_noop and logfire_if_installed (name TBC)

logfire_noop exports types matching logfire but that do nothing, or do the minimum required for code to run, e.g. logfire.span() needs to return a context manager.

logfire_if_installed behaves like this:

  • when logfire is installed, it just re-exports the contents of logfire
  • when logfire is not installed, it just re-exports the contents of logfire_noop (see the sketch below)
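
A rough sketch of what the two modules might contain (just the shape of the proposal, not a published package):

# logfire_noop.py -- stubs that satisfy call sites without doing any work
from contextlib import contextmanager


@contextmanager
def span(msg_template: str, /, **attributes):
    yield None


def info(msg_template: str, /, **attributes) -> None:
    pass


# logfire_if_installed.py -- re-export the real SDK when it's available
try:
    from logfire import *  # noqa: F401,F403
except ImportError:
    from logfire_noop import *  # noqa: F401,F403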

Third-party libraries would write code like this:

import logfire_if_installed as logfire

...

def my_library_method(...):
    ...
    with logfire.span('doing a thing {sniffle=}', sniffle=rofl):
        ...

Feature request: Make it so we can show spans that encapsulate handled exceptions as INFO level

Description

In this case (see image), the outermost task retried a failure, and the retry succeeded.

I think what we can do to better handle things today is, rather than showing the worst descendant level of a span in all circumstances, we do the following:

  • If a span has a level explicitly set, we use that as the level to display in the UI
  • If the span has level null, we look at its children (recursively over other spans with level null) for the worst level, and show that

We probably also want to make sure there is an API for late-setting the level on a span so that, e.g. in the example above, we could leave the level unset until the request finally succeeds, then set the level to INFO right at the end when we know the inner call did succeed, as a way to get the bar to show up the desired color.

@alexmojaki if you agree with this, I am happy to handle the frontend side if you can help with the SDK. If you disagree with this approach, happy to discuss (I'm not 100% confident it doesn't have issues, but it made sense to me).

(Reported by @willbakst)

OpenAI SDK traces will fail when `.with_raw_response` is used

Description

I have this code:

chat_completion_response = await openai_client.chat.completions.with_raw_response.create(
    messages=query_messages,  # type: ignore
    # Azure OpenAI takes the deployment name as the model name
    model=self.chatgpt_deployment if self.chatgpt_deployment else self.chatgpt_model,
    temperature=0.0,  # Minimize creativity for search query generation
    max_tokens=100,  # Setting too low risks malformed JSON, setting too high may affect performance
    n=1,
    tools=tools,
    tool_choice="auto",
)
self.meter_ratelimit_remaining_tokens.set(
    int(chat_completion_response.headers.get("x-ratelimit-remaining-tokens", 0))
)
self.meter_ratelimit_remaining_requests.set(
    int(chat_completion_response.headers.get("x-ratelimit-remaining-requests", 0))
)
chat_completion = chat_completion_response.parse()

That causes a crash when instrumentation is enabled using logfire:

ERROR:root:Exception while generating response stream: 'LegacyAPIResponse' object has no attribute 'choices'
Traceback (most recent call last):
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/app/backend/app.py", line 181, in format_as_ndjson
    async for event in r:
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/app/backend/approaches/chatapproach.py", line 152, in run_with_streaming
    extra_info, chat_coroutine = await self.run_until_final_call(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/app/backend/approaches/chatreadretrieveread.py", line 140, in run_until_final_call
    chat_completion_response = await self.openai_client.chat.completions.with_raw_response.create(
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/openai/_legacy_response.py", line 349, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/opentelemetry/instrumentation/openai/shared/chat_wrappers.py", line 128, in achat_wrapper
    response = await wrapped(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1334, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1743, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1446, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/logfire/_internal/integrations/openai.py", line 154, in instrumented_openai_request
    on_response(response, span)
  File "/Users/anthonyshaw/projects/azure-search-openai-demo/.venv/lib/python3.11/site-packages/logfire/_internal/integrations/openai.py", line 214, in on_chat_response
    'message': response.choices[0].message,
               ^^^^^^^^^^^^^^^^
AttributeError: 'LegacyAPIResponse' object has no attribute 'choices'

This looks to be because the instrumentation is wrapping the .create() function but assuming that the response is always the pydantic model. When you call the OpenAI SDK with .with_raw_response you get a LegacyAPIResponse object instead, and you need to run .parse() on it.

I'm doing .with_raw_response because I convert the rate-limiting headers into OpenTelemetry Metrics for the OTLP meters API.
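
A hypothetical guard sketching the unwrap the response hook would need (the LegacyAPIResponse type and .parse() come from the openai SDK; the helper name is made up and this is not logfire's actual code):

from openai._legacy_response import LegacyAPIResponse


def unwrap_raw_response(response):
    # .with_raw_response wraps the parsed model; parse() returns the usual
    # ChatCompletion object that actually has .choices
    if isinstance(response, LegacyAPIResponse):
        return response.parse()
    return response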

Python, Logfire & OS Versions, related packages

logfire="0.28.0"
platform="macOS-13.6.6-x86_64-i386-64bit"
python="3.13.0a0 (heads/main:8ac2085b80, Sep 26 2023, 19:39:32) [Clang 14.0.3 (clang-1403.0.22.14.1)]"
[related_packages]
requests="2.31.0"
protobuf="4.25.3"
rich="13.7.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"

Adding LogFire Broke Existing Uvicorn FastAPI Console Logging

Description

Had an existing uvicorn setup configured with a debug logging config like so:

uvicorn.run(
    "main:app",
    host="0.0.0.0",
    port=3000,
    reload=True,
    reload_excludes=["./repos"],
    # log_config="uvicorn.yaml",
)

Default logging produces logs for every request and print statements in my app.

After following the instructions and configuring logfire with my FastAPI app, I don't see ANYTHING logged to my console.

This is ... not cool in the slightest. What started as a morning exploring some cool new logging alternatives has turned into me trying to debug what exactly logfire messed up in my default logging configs. Any ideas? (Oh and btw, of course logging works in the Logfire web interface ....)

Python, Logfire & OS Versions, related packages

No response

No module named opentelemetry

Description

Got No module named 'opentelemetry' after installing and running logfire auth:

โฏ poetry add logfire                                                                                                                       
Using version ^0.28.3 for logfire                                                                                                          
                                                                                                                                           
Updating dependencies                                                                                                                      
Resolving dependencies... (1.0s)                                                                                                           
                                                                                                                                           
Package operations: 1 install, 0 updates, 0 removals                                                                                       
                                                                                                                                           
  โ€ข Installing logfire (0.28.3)                                                                                                            
                                                                                                                                           
Writing lock file                                                                                                                          
                                                                                                                                           
โฏ poetry shell                                                                                                                             
Spawning shell within /home/philippe/src/paxpar/.venv                                                                                      
[..]
โฏ logfire auth                                                                                                                             
Traceback (most recent call last):                                                                                                         
  File "/home/philippe/src/paxpar/.venv/bin/logfire", line 5, in <module>                                                                  
    from logfire.cli import main                                                                                                           
  File "/home/philippe/src/paxpar/.venv/lib/python3.12/site-packages/logfire/__init__.py", line 7, in <module>                             
    from ._internal.auto_trace import AutoTraceModule                                                                                      
  File "/home/philippe/src/paxpar/.venv/lib/python3.12/site-packages/logfire/_internal/auto_trace/__init__.py", line 8, in <module>        
    from ..constants import ONE_SECOND_IN_NANOSECONDS                                                                                      
  File "/home/philippe/src/paxpar/.venv/lib/python3.12/site-packages/logfire/_internal/constants.py", line 6, in <module>                  
    from opentelemetry.context import create_key                                                                                           
ModuleNotFoundError: No module named 'opentelemetry'                                                                                       

Python, Logfire & OS Versions, related packages (not required)

Traceback (most recent call last):                                                                                                   
  File "/home/philippe/src/paxpar/.venv/bin/logfire", line 5, in <module>                                                            
    from logfire.cli import main                                                                                                     
  File "/home/philippe/src/paxpar/.venv/lib/python3.12/site-packages/logfire/__init__.py", line 7, in <module>                       
    from ._internal.auto_trace import AutoTraceModule                                                                                
  File "/home/philippe/src/paxpar/.venv/lib/python3.12/site-packages/logfire/_internal/auto_trace/__init__.py", line 8, in <module>  
    from ..constants import ONE_SECOND_IN_NANOSECONDS                                                                                
  File "/home/philippe/src/paxpar/.venv/lib/python3.12/site-packages/logfire/_internal/constants.py", line 6, in <module>            
    from opentelemetry.context import create_key                                                                                     
ModuleNotFoundError: No module named 'opentelemetry'                                                                                 

sklearn instrumentation and, in general, outdated OpenTelemetry extensions

Following the tutorial, logfire inspect suggested installing the sklearn instrumentation from OpenTelemetry, but the tool does not do any version compatibility check, and this can easily lead to issues.

I tried using https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-sklearn to expand tracing to sklearn, but the package is severely outdated and tries to import and use non-existent methods. I imagine this can be the case for many of the instrumentation suggestions coming from the community contrib packages.

ps: loooving Logfire!!

Index error for Azure OpenAI streaming

Description

I'm using AzureOpenAI from the openai SDK and getting this error.

Is it due to there being an empty chunk?

Traceback (most recent call last):
  File "/Users/XXXX/.pyenv/versions/3.11.5/envs/XXXX/lib/python3.11/site-packages/logfire/_internal/integrations/openai.py", line 137, in __stream__
    chunk_content = content_from_stream(chunk)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/XXXX/.pyenv/versions/3.11.5/envs/XXXX/lib/python3.11/site-packages/logfire/_internal/integrations/openai.py", line 183, in <lambda>
    content_from_stream=lambda chunk: chunk.choices[0].delta.content,
                                      ~~~~~~~~~~~~~^^^
IndexError: list index out of range
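
A defensive variant of the accessor shown in the traceback, as a sketch of the kind of guard that would avoid the IndexError, assuming some Azure chunks arrive with an empty choices list:

def content_from_stream(chunk):
    # tolerate chunks whose choices list is empty, as Azure OpenAI can send
    if not chunk.choices:
        return None
    return chunk.choices[0].delta.content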

Python, Logfire & OS Versions, related packages

logfire="0.28.0"
platform="macOS-13.5.2-arm64-arm-64bit"
python="3.11.5 (main, Dec 23 2023, 11:01:02) [Clang 14.0.3 (clang-1403.0.22.14.1)]"
[related_packages]
requests="2.31.0"
pydantic="2.6.1"
fastapi="0.109.2"
openai="1.12.0"
protobuf="4.25.2"
rich="13.7.0"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-grpc="1.22.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.43b0"
opentelemetry-instrumentation-asgi="0.43b0"
opentelemetry-instrumentation-fastapi="0.43b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-util-http="0.43b0"

Extra message displayed in `openai` instrumentation

Description

first off, really excited about logfire! 👍


This issue is purely cosmetic and, disclaimer, I could be confused about the expected behavior, but I seem to consistently get an extra message displayed in the human-readable trace details.

Maybe this is meant to be a placeholder for the Assistant response?

Screenshot 2024-04-30 at 7 22 30 PM

However, the messages in the arguments section agree with my expectations.

image

login error

Description

not able to authenticate using GitHub login

Python, Logfire & OS Versions, related packages (not required)

Error
Whoops, something went wrong!

If the error persists, please contact us.

Stripe Integration?

Description

Stripe's SDK has 4M downloads a month, which is significantly more than the other libraries we've been considering integrations with.

Also, requests to Stripe are arguably even more precious than LLM queries, so observability seems particularly useful.

Log exceptions and maybe other events in the console

(Copy of https://linear.app/pydantic/issue/PYD-794/log-exceptions-and-maybe-other-events-in-the-console)

Encountered again here: https://pydanticlogfire.slack.com/archives/C06EDRBSAH3/p1715088823604819

For example this:

import logfire

logfire.configure(console=logfire.ConsoleOptions(verbose=True))

try:
    1 / 0
except:
    logfire.exception('error!!!')

shows the exception in the UI, but in the console it merely prints:

10:21:29.817 error!!!
             โ”‚ scratch_1899.py:8 error

Performance Issue: Just passing SecretStr instead of str causes the execution time to go from 5ms to >100ms.

Description

Check the code below, then change secretstr to the plain string "4" and try running the code again. That's kinda weird behavior.

from datetime import datetime

import logfire
from pydantic import SecretStr

logfire.configure(service_name="New Service")


@logfire.instrument("Second Function", extract_args=True)
def foo(arg1: SecretStr, arg2: SecretStr):
    return f"{arg1}-{arg2}"


secretstr = SecretStr("4")
with logfire.span("One unit of work"):
    now = datetime.now()
    foo(secretstr, secretstr)
    logfire.info("xD")
    print((datetime.now() - now).total_seconds())

Python, Logfire & OS Versions, related packages

logfire="0.28.0"
platform="macOS-14.2.1-arm64-arm-64bit"
python="3.11.4 (v3.11.4:d2340ef257, Jun  6 2023, 19:15:51) [Clang 13.0.0 (clang-1300.0.29.30)]"
[related_packages]
requests="2.31.0"
pydantic="2.6.4"
fastapi="0.110.0"
protobuf="4.25.3"
rich="13.7.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-instrumentation-asgi="0.41b0"
opentelemetry-instrumentation-fastapi="0.41b0"
opentelemetry-instrumentation-redis="0.41b0"
opentelemetry-instrumentation-sqlalchemy="0.41b0"
opentelemetry-propagator-b3="1.23.0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-util-http="0.41b0"

[question] Traceback UI

Question

What's the correct way to set the exception in Python, so UI can render it properly?

image image

This looks empty right now. According to the docs, I'm setting exc_info:

try:
   ...
except Exception as e:
   logfire.error("My error log", exc_info=e)

Ability to configure additional span processors (e.g. to export to another sink)

Description

https://pydanticlogfire.slack.com/archives/C06EDRBSAH3/p1714686467955179

Feature request! Being able to add another span processor while still including the default logfire one. What I'm doing right now is hacking it by not providing the second processor up front, then doing:

from logfire._internal.config import GLOBAL_CONFIG
from logfire._internal.exporters.processor_wrapper import SpanProcessorWrapper

span_processor = SpanProcessorWrapper(SentrySpanProcessor(), GLOBAL_CONFIG.scrubber)
GLOBAL_CONFIG.processors = [span_processor]

Offer a command or argument that returns only the packages listed by `logfire inspect`

Description

I ran logfire inspect, and it spits out a lot of text plus an overly formatted version of the pip install command. Copying it gives me numerous newlines (where it wraps) which I have to clean up to run the command. Admittedly, this could just be some terminal setting that is inconveniencing me.

pip install opentelemetry-instrumentation-httpx opentelemetry-instrumentation-psycopg2 opentelemetry-instrumentation-urllib opentelemetry-instrumentation-sqlite3 opentelemetry-instrumentation-fastapi          
                                                                      opentelemetry-instrumentation-requests opentelemetry-instrumentation-sqlalchemy

It would have been slicker if I could just return the list so that I could run pip install $(logfire <some command>)

Self-hosted Logfire Enterprise Edition

Description

We are planning to offer an on-premise deployment option for Logfire.
This will allow you to deploy Logfire on your own infrastructure.
This is not meant to be a free version of Logfire; it will be targeted at enterprises with compliance requirements and will likely cost considerably more than using the version of Logfire that we host.

Please let us know if you are interested in this, so we can prioritize it.

Custom SDKs for other languages

Description

Logfire is built on top of OpenTelemetry, which means that it supports all the languages that OpenTelemetry supports.

Still, we are planning to create custom SDKs for JavaScript, TypeScript, and Rust, and make sure that the attributes are displayed in a nice way in the Logfire UI, as they are for Python.

Please let us know if you are interested in this, so we can prioritize it.

Redacting content that isn't private

Description

Right now I've got some content that I want to print in Logfire, but it's being redacted as private, even though there's nothing private in it!

I imagine what is triggering it is the word dotenv or env.

It strikes me as overly sensitive and I don't see a way to configure it.

Python, Logfire & OS Versions, related packages (not required)

No response

AWS Lambda Integration

Description

Datadog gives me the following dashboard for AWS lambda invocations:
CleanShot 2024-05-04 at 09 55 48@2x

It's very helpful for debugging issues with my lambda running out of time, having very high cold starts, or running out of memory.

I imagine a logfire span-esque visualization for the Lambda invocations. This, combined with the existing FastAPI integration, would be godly. I'd be able to see all of the logs I want in one place :)

The issue is the implementation for this may not be that easy. Datadog installs some kind of additional binary and a handler that wraps the default Lambda handler to make this work: https://docs.datadoghq.com/serverless/aws_lambda/installation/python/?tab=terraform

Best of luck and let me know if I can help with this in any way!

AssertionError when using a Flask app with the gevent and magic libraries

Description

The error below happens only when using the combination of gevent, logfire, and importing magic.

Full code:

import gevent.monkey

gevent.monkey.patch_all()

import logfire

logfire.configure(token="-----------------------------------")
# todo doing this import shows the error
import magic  
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello_world():
    return "<p>Hello, World!</p>"


if __name__ == "__main__":
    app.run()
Logfire project URL: https://logfire.pydantic.dev/--------------
Traceback (most recent call last):
  File "src/gevent/_abstract_linkable.py", line 287, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
  File "src/gevent/_abstract_linkable.py", line 333, in gevent._gevent_c_abstract_linkable.AbstractLinkable._notify_links
AssertionError: (None, <callback at 0x7f4ec2dfdf40 args=([],)>)
2024-05-01T09:06:11Z <callback at 0x7f4ec2dfdf40 args=([],)> failed with AssertionError

 * Serving Flask app 'fshije'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:5000
Press CTRL+C to quit

gevent==24.2.1
flask==3.0.3
python-magic==0.4.27

Python, Logfire & OS Versions, related packages

logfire="0.28.0"
platform="Linux-5.15.0-105-generic-x86_64-with-glibc2.35"
python="3.12.3 (main, Apr 27 2024, 19:00:26) [GCC 9.4.0]"
[related_packages]
requests="2.31.0"
pydantic="2.7.1"
protobuf="4.25.3"
rich="13.7.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-instrumentation-aiohttp-client="0.45b0"
opentelemetry-instrumentation-dbapi="0.45b0"
opentelemetry-instrumentation-flask="0.45b0"
opentelemetry-instrumentation-jinja2="0.45b0"
opentelemetry-instrumentation-psycopg2="0.45b0"
opentelemetry-instrumentation-redis="0.45b0"
opentelemetry-instrumentation-requests="0.45b0"
opentelemetry-instrumentation-sqlalchemy="0.45b0"
opentelemetry-instrumentation-sqlite3="0.45b0"
opentelemetry-instrumentation-system-metrics="0.45b0"
opentelemetry-instrumentation-urllib="0.45b0"
opentelemetry-instrumentation-urllib3="0.45b0"
opentelemetry-instrumentation-wsgi="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-util-http="0.45b0"

Generate metric points from pydantic class changes?

Question

Hello team! thanks for the great tool.

I haven't found it in the docs, so maybe it is a use case you might be interested in covering? I have a pydantic class, and I want to create a gauge/counter metric point from it every time its value gets updated.

In the docs I see you have an integration detecting the validation results, so maybe there is a way already?

Thanks a lot.
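
One possible shape for this today, sketched under the assumption that the SDK's metric_counter helper is available and that re-validation fires on field updates (e.g. via validate_assignment, which may depend on your pydantic version); all names are illustrative:

import logfire
from pydantic import BaseModel, ConfigDict, model_validator

logfire.configure()

# illustrative metric: counts how often the model is (re)validated
depth_updates = logfire.metric_counter("queue_depth_updates")


class QueueState(BaseModel):
    model_config = ConfigDict(validate_assignment=True)

    depth: int = 0

    @model_validator(mode="after")
    def record_metric(self) -> "QueueState":
        # runs whenever the model is validated
        depth_updates.add(1)
        return self


state = QueueState(depth=3)
state.depth = 5  # re-validation should emit another metric point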

Misleading documentation?

Description

When trying to explore Logfire's abilities by starting with a test, I had an unpleasant experience. This is very specific, as running Logfire for the first time via pytest is not so common.
Setup:

  • python 3.9.12, logfire & code carbon
  • logfire auth from terminal returns correct output : You are already logged in. (Your credentials are stored in ....../.logfire/default.toml)

However, even when configuring Logfire to use the correct project name, running from a test file returns an error:

__________________________________________________________________________________ ERROR collecting tests/test_logfire.py ___________________________________________________________________________________
tests/test_logfire.py:10: in <module>
   logfire.configure(project_name="codecarbon-tdd")
../../anaconda3/envs/logifre/lib/python3.10/site-packages/logfire/_internal/config.py:214: in configure
   GLOBAL_CONFIG.configure(
../../anaconda3/envs/logifre/lib/python3.10/site-packages/logfire/_internal/config.py:523: in configure
   self.initialize()
../../anaconda3/envs/logifre/lib/python3.10/site-packages/logfire/_internal/config.py:528: in initialize
   return self._initialize()
../../anaconda3/envs/logifre/lib/python3.10/site-packages/logfire/_internal/config.py:609: in _initialize
   credentials = LogfireCredentials.initialize_project(
../../anaconda3/envs/logifre/lib/python3.10/site-packages/logfire/_internal/config.py:1119: in initialize_project
   use_existing_projects = Confirm.ask(
../../anaconda3/envs/logifre/lib/python3.10/site-packages/rich/prompt.py:141: in ask
   return _prompt(default=default, stream=stream)
../../anaconda3/envs/logifre/lib/python3.10/site-packages/rich/prompt.py:274: in __call__
   value = self.get_input(self.console, prompt, self.password, stream=stream)
../../anaconda3/envs/logifre/lib/python3.10/site-packages/rich/prompt.py:203: in get_input
   return console.input(prompt, password=password, stream=stream)
../../anaconda3/envs/logifre/lib/python3.10/site-packages/rich/console.py:2123: in input
   result = input()
../../anaconda3/envs/logifre/lib/python3.10/site-packages/_pytest/capture.py:207: in read
   raise OSError(
E   OSError: pytest: reading from stdin while output is captured!  Consider using `-s`.
---------------------------------------------------------------------------------------------- Captured stdout ----------------------------------------------------------------------------------------------
No Logfire project credentials found.
All data sent to Logfire must be associated with a project.

Do you want to use one of your existing projects?  [y/n] (y): 

And here is the info :

logfire="0.29.0"
platform="macOS-14.3-arm64-arm-64bit"
python="3.9.12 (main, Jun  1 2022, 06:34:44) 
[Clang 12.0.0 ]"
[related_packages]
requests="2.28.1"
protobuf="4.25.3"
rich="13.7.1"
tomli="2.0.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"

Documentation might be misleading, or my setup too unusual.
Everything is fine for me now.
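
For anyone else hitting this under pytest, a minimal sketch of a workaround, assuming send_to_logfire=False keeps configure() from needing project credentials and therefore from prompting on stdin:

import logfire

# Disable export during tests so logfire.configure() never asks for a project.
logfire.configure(send_to_logfire=False)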

Python, Logfire & OS Versions, related packages (not required)

No response

"LLM Chat Completions" section of dashboard cuts off horizontal text overflow.

Description

In the image below I've covered the actual text, but the prompt gets cut off and this makes the dashboard unusable for what would otherwise be a wonderful convenience.

image

Add a clickable "...more" button, make this element scrollable, or whatever sounds best to you!

Thanks so much

Python, Logfire & OS Versions, related packages (not required)

No response

Pydantic Schema Catalog

We want to build a catalog of Pydantic models/schemas within Logfire, as outlined in our Roadmap article.

The idea is that we'd use the SDK to upload the schema of Pydantic models to Logfire.
Then we'd allow you to watch how those schemas change, as well as view metrics on how the validation performed by a specific model is behaving.

Please let us know if you are interested in this, so we can prioritize it.

Code Details are wrong on OpenAI instrumentation

Description

Screenshot 2024-05-03 at 15 22 42
from openai import Client

import logfire

openai_client = Client()

logfire.instrument_openai(openai_client)

chat_completion_response = openai_client.chat.completions.with_raw_response.create(
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'What is the best way to cook a steak?'},
    ],
    model='gpt-3.5-turbo',
    temperature=0.0,
    max_tokens=100,
    n=1,
)
chat_completion = chat_completion_response.parse()

Python, Logfire & OS Versions, related packages (not required)

No response

Testing Linear link

Description

Does this work?

Python, Logfire & OS Versions, related packages (not required)

logfire="0.28.2"
platform="Linux-6.5.0-27-generic-x86_64-with-glibc2.35"
python="3.12.3 (main, Apr 15 2024, 18:25:56) [Clang 17.0.6 ]"
[related_packages]
requests="2.31.0"
pydantic="2.7.1"
fastapi="0.110.2"
openai="1.23.3"
protobuf="4.25.3"
rich="13.7.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-instrumentation-aiohttp-client="0.45b0"
opentelemetry-instrumentation-asgi="0.45b0"
opentelemetry-instrumentation-asyncpg="0.45b0"
opentelemetry-instrumentation-dbapi="0.45b0"
opentelemetry-instrumentation-django="0.45b0"
opentelemetry-instrumentation-fastapi="0.45b0"
opentelemetry-instrumentation-flask="0.45b0"
opentelemetry-instrumentation-httpx="0.45b0"
opentelemetry-instrumentation-psycopg="0.45b0"
opentelemetry-instrumentation-psycopg2="0.45b0"
opentelemetry-instrumentation-requests="0.45b0"
opentelemetry-instrumentation-sqlalchemy="0.45b0"
opentelemetry-instrumentation-starlette="0.45b0"
opentelemetry-instrumentation-system-metrics="0.45b0"
opentelemetry-instrumentation-wsgi="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-util-http="0.45b0"

Cross-Project Dashboards

Description

You'll be able to create dashboards with information from multiple projects.

Please let us know if you are interested in this, so we can prioritize it.

Deferred otel control

Description

I named the feature "Deferred otel control" because I want to control the otel process and flush at some point in my program, deferring it when I want the best performance in a defined time frame where my software is very time- and performance-sensitive.
In simpler terms: some software, in my case a crawler tasked to do something quickly, can be time-intensive. That means there's a point in my software where I want as much speed as possible.
My case was more IO-intensive as well; I wasn't doing much CPU. And 10ms makes a lot of difference.

At first I thought this might be useless and not a common issue, but here's the discussion with @dmontagu, who was also convinced this could be quite useful. You can find more details by reading that thread.

To show my idea in terms of code:

def fn():
    with deferred_otel():
        # whatever happens here, otel sdks should defer the operation to afterward
        execute_fast()
        # I need all telemetry data from execute_fast, just deferred and flushed later
        execute_fast()
    # good breathing point
    # I don't care about speed here
    ...

Collecting the data for spans and logs can have significant overhead, but should I avoid logfire entirely when performance is highly time-sensitive, even though that period only covers a small part of my code? In principle, in the logfire SDK we could buffer spans for processing/sending before even handing them off to OTel, rather than doing a lot of the manipulation eagerly.
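
To make the idea concrete, here is a rough sketch (not an existing logfire or OpenTelemetry API, just the shape of the thing) of a processor that buffers finished spans until a flush point:

from contextlib import contextmanager
from opentelemetry.sdk.trace import ReadableSpan, SpanProcessor


class DeferringProcessor(SpanProcessor):
    """Buffer finished spans and hand them to the wrapped processor later."""

    def __init__(self, wrapped: SpanProcessor) -> None:
        self.wrapped = wrapped
        self.deferring = False
        self.buffer: list[ReadableSpan] = []

    def on_end(self, span: ReadableSpan) -> None:
        if self.deferring:
            self.buffer.append(span)   # cheap: just keep a reference
        else:
            self.wrapped.on_end(span)  # normal eager path

    @contextmanager
    def deferred(self):
        self.deferring = True
        try:
            yield
        finally:
            self.deferring = False
            for span in self.buffer:   # flush at the "breathing point"
                self.wrapped.on_end(span)
            self.buffer.clear()

Wrapping logfire's default processor in something like this would give a with processor.deferred(): block matching the pseudocode above.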

EdgeDB Integration?

Description

I was talking to @elprans on LinkedIn.

The integration would be similar to the SQL DB integrations, but show the EdgeDB query?

(Maybe off topic) Would it be good to put a span around transactions? This question applies to any DB integration.

Sudden urllib error when I use model_dump_json()

Description

Hi there,

I'm using logfire with Instructor to extract some values from a given image. I'm getting a strange error message when my script executes, as seen below.

requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='logfire-api.pydantic.dev', port=443): Read timed out. (read timeout=10)

Here is the script I am using. It was working fine prior to this so I'm not too sure what's happening.

import instructor
from io import StringIO
from typing import Annotated, Any
from collections.abc import Iterable
from pydantic import (
    BeforeValidator,
    PlainSerializer,
    InstanceOf,
    WithJsonSchema,
    BaseModel,
)
import pandas as pd
from openai import OpenAI
import logfire

openai_client = OpenAI()
logfire.configure(pydantic_plugin=logfire.PydanticPlugin(record="all"))
logfire.instrument_openai(openai_client)
client = instructor.from_openai(
    openai_client, mode=instructor.function_calls.Mode.MD_JSON
)


def md_to_df(data: Any) -> Any:
    # Convert markdown to DataFrame
    if isinstance(data, str):
        return (
            pd.read_csv(
                StringIO(data),  # Process data
                sep="|",
                index_col=1,
            )
            .dropna(axis=1, how="all")
            .iloc[1:]
            .applymap(lambda x: x.strip())
        )
    return data


MarkdownDataFrame = Annotated[
    InstanceOf[pd.DataFrame],
    BeforeValidator(md_to_df),
    PlainSerializer(lambda df: df.to_markdown()),
    WithJsonSchema(
        {
            "type": "string",
            "description": "The markdown representation of the table, each one should be tidy, do not try to join tables that should be seperate",
        }
    ),
]


class Table(BaseModel):
    caption: str
    dataframe: MarkdownDataFrame


@logfire.instrument("extract-table", extract_args=True)
def extract_table_from_image(url: str) -> Iterable[Table]:
    return client.chat.completions.create(
        model="gpt-4-vision-preview",
        response_model=Iterable[Table],
        max_tokens=1800,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Extract out a table from the image. Only extract out the total number of skiiers.",
                    },
                    {"type": "image_url", "image_url": {"url": url}},
                ],
            }
        ],
    )


url = "https://cdn.statcdn.com/Infographic/images/normal/16330.jpeg"
tables = extract_table_from_image(url)
for table in tables:
    print(table.caption, end="\n")
    print(table.dataframe.to_markdown())

Python, Logfire & OS Versions, related packages

>> % logfire info
logfire="0.28.0"
platform="macOS-13.6.1-x86_64-i386-64bit"
python="3.12.3 (main, Apr  9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.1.0.2.5)]"
[related_packages]
requests="2.31.0"
pydantic="2.7.1"
openai="1.24.1"
protobuf="4.25.3"
rich="13.7.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
(venv) ivanleo@Ivans-MacBook-Pro logfire % 

Web: Trying to create a new project doesn't display issue with insufficient permissions

Description

Screenshot 2024-05-02 at 12 43 53 p.m.

Clicking on Create project when the user has the "member" role in an organization doesn't do anything. Looking at the Network tab in the browser reveals a 403 status and a response of

{"detail":"User does not have sufficient permissions on this organization"}

It would be nice to surface this error to the user in a toast or similar.

Python, Logfire & OS Versions, related packages (not required)

No response

`logfire inspect` is missing `psycopg` in the report

Description

  1. Install psycopg in a fastapi project
  2. Install logfire and use logfire inspect to get a report like the following:
The following packages from your environment have an OpenTelemetry instrumentation that is not installed:
โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”“
โ”ƒ Package    โ”ƒ OpenTelemetry instrumentation package    โ”ƒ
โ”กโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ฉ
โ”‚ fastapi    โ”‚ opentelemetry-instrumentation-fastapi    โ”‚
โ”‚ httpx      โ”‚ opentelemetry-instrumentation-httpx      โ”‚
โ”‚ jinja2     โ”‚ opentelemetry-instrumentation-jinja2     โ”‚
โ”‚ requests   โ”‚ opentelemetry-instrumentation-requests   โ”‚
โ”‚ sqlalchemy โ”‚ opentelemetry-instrumentation-sqlalchemy โ”‚
โ”‚ sqlite3    โ”‚ opentelemetry-instrumentation-sqlite3    โ”‚
โ”‚ urllib     โ”‚ opentelemetry-instrumentation-urllib     โ”‚
โ”‚ urllib3    โ”‚ opentelemetry-instrumentation-urllib3    โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
  3. Expectation:
The following packages from your environment have an OpenTelemetry instrumentation that is not installed:
┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Package    ┃ OpenTelemetry instrumentation package    ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ fastapi    │ opentelemetry-instrumentation-fastapi    │
│ httpx      │ opentelemetry-instrumentation-httpx      │
│ jinja2     │ opentelemetry-instrumentation-jinja2     │
│ psycopg    │ opentelemetry-instrumentation-psycopg    │
│ requests   │ opentelemetry-instrumentation-requests   │
│ sqlalchemy │ opentelemetry-instrumentation-sqlalchemy │
│ sqlite3    │ opentelemetry-instrumentation-sqlite3    │
│ urllib     │ opentelemetry-instrumentation-urllib     │
│ urllib3    │ opentelemetry-instrumentation-urllib3    │
└────────────┴──────────────────────────────────────────┘

Python, Logfire & OS Versions, related packages (not required)

logfire="0.29.0"
platform="Windows-11-10.0.22631-SP0"
python="3.12.3 (tags/v3.12.3:f6650f9, Apr  9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]"
[related_packages]
requests="2.31.0"
pydantic="2.7.1"
fastapi="0.110.3"
openai="0.28.1"
protobuf="4.25.3"
rich="13.7.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"

Expose TLS/Insecure params via Logfire config

Description

Add support for sending data to a URL that uses a self-signed cert and also support for specifying TLS cert/key/ca.

I believe the HTTP exporter from OpenTelemetry has a param insecure for doing this. This is not exposed as part of LogfireConfig. There's also params for specifying cert/key/ca.

https://opentelemetry.io/docs/specs/otel/protocol/exporter/

These params are mostly needed for sending data to self-hosted endpoints or a self-hosted Logfire in the future.
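
In the meantime, a possible workaround sketch using the standard OTLP exporter environment variables from the linked spec, on the assumption that the underlying OpenTelemetry HTTP exporter (and therefore Logfire's) honours them; paths are placeholders:

import os

# Standard OTLP exporter settings from the OpenTelemetry spec linked above.
os.environ["OTEL_EXPORTER_OTLP_CERTIFICATE"] = "/etc/ssl/certs/my-ca.pem"          # trusted CA bundle
os.environ["OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE"] = "/etc/ssl/certs/client.pem"  # mTLS client cert
os.environ["OTEL_EXPORTER_OTLP_CLIENT_KEY"] = "/etc/ssl/private/client.key"        # mTLS client key

import logfire

logfire.configure()  # exporter is created after the variables are set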

Failed span runs forever in UI

Description

Even though this span failed because of an exception, in the UI it shows as "ongoing" for over an hour. The process isn't even running anymore. This persists after refreshing as well.

screenshot 2024-05-01 at 16 45 12

Live tail does not stop during a network error

Description

The Live tail timer does not stop even though it failed to export spans during a network error (maybe?).
Here is the traceback:

[WARNING 2024-05-07 09:38:15,509 _showwarnmsg:109] /usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/file.py:58: WritingFallbackWarning: Failed to export spans, writing to fallback file: /root/cnocr/.logfire/logfire_spans.bin
  warnings.warn(
 
[ERROR 2024-05-07 09:38:15,524 _export_batch:369] Exception while exporting Span batch. 
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/usr/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.8/http/client.py", line 277, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "/usr/lib/python3.8/ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/lib/python3.8/ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/adapters.py", line 489, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/usr/local/lib/python3.8/dist-packages/urllib3-1.26.14-py3.8.egg/urllib3/connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='logfire-api.pydantic.dev', port=443): Read timed out. (read timeout=10)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/opentelemetry/sdk/trace/export/__init__.py", line 367, in _export_batch
    self.span_exporter.export(self.spans_list[:idx])  # type: ignore
  File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/remove_pending.py", line 45, in export
    return super().export(result)
  File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/wrapper.py", line 14, in export
    return self.wrapped_exporter.export(spans)
  File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/fallback.py", line 20, in export
    res = self.exporter.export(spans)
  File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/otlp.py", line 56, in export
    return super().export(spans)
  File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/wrapper.py", line 14, in export
    return self.wrapped_exporter.export(spans)
  File "/usr/local/lib/python3.8/dist-packages/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py", line 145, in export
    resp = self._export(serialized_data)
  File "/usr/local/lib/python3.8/dist-packages/opentelemetry/exporter/otlp/proto/http/trace_exporter/__init__.py", line 114, in _export
    return self._session.post(
  File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/sessions.py", line 635, in post
    return self.request("POST", url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/logfire/_internal/exporters/otlp.py", line 41, in send
    return super().send(request, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/requests-2.28.2-py3.8.egg/requests/adapters.py", line 578, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='logfire-api.pydantic.dev', port=443): Read timed out. (read timeout=10)

Python, Logfire & OS Versions, related packages (not required)

logfire="0.30.0"
platform="Linux-5.10.104-tegra-aarch64-with-glibc2.29"
python="3.8.10 (default, Nov 14 2022, 12:59:47) 
[GCC 9.4.0]"
[related_packages]
requests="2.22.0"
requests="2.28.2"
pydantic="2.7.1"
fastapi="0.110.2"
protobuf="4.25.3"
rich="13.7.1"
tomli="2.0.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-instrumentation-asgi="0.45b0"
opentelemetry-instrumentation-fastapi="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-util-http="0.45b0"

Had to install opentelemetry instrumentation packages manually for FastAPI support.

Description

I did a pip install logfire, but after trying to set up the integration with FastAPI I got errors that opentelemetry-instrumentation-asgi and opentelemetry-instrumentation-fastapi weren't installed.

I probably didn't follow/find the instructions to a "t". Regards,

Python, Logfire & OS Versions, related packages (not required)

No response
