jolt-org / advanced-alchemy

A carefully crafted, thoroughly tested, optimized companion library for SQLAlchemy

Home Page: http://docs.advanced-alchemy.litestar.dev/

License: MIT License

Languages: Python 97.99%, Makefile 0.89%, Mako 1.09%, Shell 0.02%
Topics: duckdb, fastapi, litestar, mysql, oracle-db, postgresql, sanic, spanner, sqlalchemy, sqlite, starlette, alembic, repository-pattern, mssql, cockroachdb, litestar-org

advanced-alchemy's Introduction



Advanced Alchemy

Check out the project documentation 📚 for more information.

About

A carefully crafted, thoroughly tested, optimized companion library for SQLAlchemy, offering:

  • Sync and async repositories, featuring common CRUD and highly optimized bulk operations
  • Integration with major web frameworks including Litestar, Starlette, FastAPI, Sanic
  • Custom-built alembic configuration and CLI with optional framework integration
  • Utility base classes with audit columns, primary keys and utility functions
  • Optimized JSON types including a custom JSON type for Oracle
  • Integrated support for UUID6 and UUID7 using uuid-utils (install with the uuid extra)
  • Pre-configured base classes with audit columns, UUID or Big Integer primary keys, and a sentinel column
  • Synchronous and asynchronous repositories featuring:
    • Common CRUD operations for SQLAlchemy models
    • Bulk inserts, updates, upserts, and deletes with dialect-specific enhancements
    • Use of lambda_stmt where possible for improved query-building performance
    • Integrated counts, pagination, sorting, and filtering with LIKE, IN, and before/after date comparisons
  • Tested support for multiple database backends, including SQLite, PostgreSQL, MySQL, Oracle, Microsoft SQL Server, CockroachDB, DuckDB, and Google Spanner
  • ...and much more

Usage

Installation

pip install advanced-alchemy

Important

Check out the installation guide in our official documentation!

Repositories

Advanced Alchemy includes a set of asynchronous and synchronous repository classes for easy CRUD operations on your SQLAlchemy models.

from advanced_alchemy.base import UUIDBase
from advanced_alchemy.filters import LimitOffset
from advanced_alchemy.repository import SQLAlchemySyncRepository
from sqlalchemy import create_engine
from sqlalchemy.orm import Mapped, sessionmaker


class User(UUIDBase):
    # you can optionally override the generated table name by manually setting it.
    __tablename__ = "user_account"  # type: ignore[assignment]
    email: Mapped[str]
    name: Mapped[str]


class UserRepository(SQLAlchemySyncRepository[User]):
    """User repository."""

    model_type = User


# use any compatible sqlalchemy engine.
engine = create_engine("duckdb:///:memory:")
session_factory = sessionmaker(engine, expire_on_commit=False)

# Initializes the database.
with engine.begin() as conn:
    User.metadata.create_all(conn)

with session_factory() as db_session:
    repo = UserRepository(session=db_session)
    # 1) Create multiple users with `add_many`
    bulk_users = [
        {"email": "[email protected]", "name": "Cody"},
        {"email": "[email protected]", "name": "Janek"},
        {"email": "[email protected]", "name": "Peter"},
        {"email": "[email protected]", "name": "Jacob"},
    ]
    objs = repo.add_many([User(**raw_user) for raw_user in bulk_users])
    db_session.commit()
    print(f"Created {len(objs)} new objects.")

    # 2) Select paginated data and total row count.  Pass additional filters as kwargs
    created_objs, total_objs = repo.list_and_count(LimitOffset(limit=10, offset=0), name="Cody")
    print(f"Selected {len(created_objs)} records out of a total of {total_objs}.")

    # 3) Let's remove the batch of records selected.
    deleted_objs = repo.delete_many([new_obj.id for new_obj in created_objs])
    print(f"Removed {len(deleted_objs)} records out of a total of {total_objs}.")

    # 4) Let's count the remaining rows
    remaining_count = repo.count()
    print(f"Found {remaining_count} remaining records after delete.")

For a full standalone example, see the sample here

Services

Advanced Alchemy includes an additional service class to make working with a repository easier. This class is designed to accept data as a dictionary or SQLAlchemy model, and it will handle the type conversions for you.

Here's the same example from above but using a service to create the data:
from advanced_alchemy.base import UUIDBase
from advanced_alchemy.filters import LimitOffset
from advanced_alchemy import SQLAlchemySyncRepository, SQLAlchemySyncRepositoryService
from sqlalchemy import create_engine
from sqlalchemy.orm import Mapped, sessionmaker


class User(UUIDBase):
    # you can optionally override the generated table name by manually setting it.
    __tablename__ = "user_account"  # type: ignore[assignment]
    email: Mapped[str]
    name: Mapped[str]


class UserRepository(SQLAlchemySyncRepository[User]):
    """User repository."""

    model_type = User


class UserService(SQLAlchemySyncRepositoryService[User]):
    """User repository."""

    repository_type = UserRepository


# use any compatible sqlalchemy engine.
engine = create_engine("duckdb:///:memory:")
session_factory = sessionmaker(engine, expire_on_commit=False)

# Initializes the database.
with engine.begin() as conn:
    User.metadata.create_all(conn)

with session_factory() as db_session:
    service = UserService(session=db_session)
    # 1) Create multiple users with `add_many`
    objs = service.create_many([
        {"email": "[email protected]", "name": "Cody"},
        {"email": "[email protected]", "name": "Janek"},
        {"email": "[email protected]", "name": "Peter"},
        {"email": "[email protected]", "name": "Jacob"},
    ])
    print(objs)
    print(f"Created {len(objs)} new objects.")

    # 2) Select paginated data and total row count.  Pass additional filters as kwargs
    created_objs, total_objs = service.list_and_count(LimitOffset(limit=10, offset=0), name="Cody")
    print(f"Selected {len(created_objs)} records out of a total of {total_objs}.")

    # 3) Let's remove the batch of records selected.
    deleted_objs = service.delete_many([new_obj.id for new_obj in created_objs])
    print(f"Removed {len(deleted_objs)} records out of a total of {total_objs}.")

    # 4) Let's count the remaining rows
    remaining_count = service.count()
    print(f"Found {remaining_count} remaining records after delete.")

Web Frameworks

Advanced Alchemy works with nearly all Python web frameworks. Several helpers for popular libraries are included, and additional PRs to support others are welcome.

Litestar

Advanced Alchemy is the official SQLAlchemy integration for Litestar.

In addition to installing with pip install advanced-alchemy, it can also be installed as a Litestar extra with pip install litestar[sqlalchemy].

Litestar Example
from litestar import Litestar
from litestar.plugins.sqlalchemy import SQLAlchemyPlugin, SQLAlchemyAsyncConfig
# alternately...
# from advanced_alchemy.extensions.litestar.plugins import SQLAlchemyPlugin
# from advanced_alchemy.extensions.litestar.plugins.init.config import SQLAlchemyAsyncConfig

alchemy = SQLAlchemyPlugin(
  config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"),
)
app = Litestar(plugins=[alchemy])

For a full Litestar example, check here

FastAPI

FastAPI Example
from fastapi import FastAPI

from advanced_alchemy.config import SQLAlchemyAsyncConfig
from advanced_alchemy.extensions.starlette import StarletteAdvancedAlchemy

app = FastAPI()
alchemy = StarletteAdvancedAlchemy(
    config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"), app=app,
)

For a full FastAPI example, see here

Starlette

Starlette Example
from starlette.applications import Starlette

from advanced_alchemy.config import SQLAlchemyAsyncConfig
from advanced_alchemy.extensions.starlette import StarletteAdvancedAlchemy

app = Starlette()
alchemy = StarletteAdvancedAlchemy(
    config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"), app=app,
)

Sanic

Sanic Example
from sanic import Sanic
from sanic_ext import Extend

from advanced_alchemy.config import SQLAlchemyAsyncConfig
from advanced_alchemy.extensions.sanic import SanicAdvancedAlchemy

app = Sanic("AlchemySanicApp")
alchemy = SanicAdvancedAlchemy(
    sqlalchemy_config=SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///test.sqlite"),
)
Extend.register(alchemy)

Contributing

All Litestar Organization projects will always be community-centered and open to contributions of any size.

Before contributing, please review the contribution guide.

If you have any questions, reach out to us on Discord, our org-wide GitHub discussions page, or the project-specific GitHub discussions page.


An official Litestar Organization Project

advanced-alchemy's People

Contributors

abdulhaq-e, alc-alc, cbscsm, cemrehancavdar, cofin, darinkishore, dependabot[bot], gazorby, geeshta, guacs, jacobcoffee, mbeijen, peterschutt, provinzkraut, rseeley, sergeykomvl, sfermigier, tspnn, wer153, ysnbyzli


advanced-alchemy's Issues

Enhancement: litestar `create_all` config attribute

Summary

We can probably abstract the:

    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)

boilerplate away.

Basic Example

db_config = SQLAlchemyAsyncConfig(connection_string="sqlite+aiosqlite:///todo.sqlite", create_all=metadata)
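As a sketch, the plugin could run the equivalent of the following on app startup when the proposed create_all attribute is set (the attribute itself does not exist yet; metadata here is whatever MetaData you pass in):

from sqlalchemy.ext.asyncio import AsyncEngine


async def create_all_on_startup(engine: AsyncEngine, metadata) -> None:
    # What the plugin would do internally when `create_all` is set:
    # run the synchronous metadata.create_all inside an async connection.
    async with engine.begin() as conn:
        await conn.run_sync(metadata.create_all)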

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: `touch_updated_timestamp` event handler gets registered unknowingly via unrelated import

Description

I'm using the SQLAlchemyPlugin for Litestar without using the provided bases (UUIDBase, BigIntBase). The plugin gets imported from advanced_alchemy.extensions.litestar.plugins.

Through an internal chain of imports, advanced_alchemy.base eventually gets loaded and registers the touch_updated_timestamp event handler, which in turn modified my own updated_at columns, which are timezone-naive, and caused flush operations to fail.
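For anyone hitting this, one possible workaround is to deregister the handler after the import chain has run. This is only a sketch: it assumes the handler is attached as a before_flush listener on Session, as its name suggests, and the exact registration may differ between versions:

from sqlalchemy import event
from sqlalchemy.orm import Session

from advanced_alchemy.base import touch_updated_timestamp  # name taken from the report

# Detach the library's listener so it no longer touches our updated_at columns.
if event.contains(Session, "before_flush", touch_updated_timestamp):
    event.remove(Session, "before_flush", touch_updated_timestamp)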

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

0.5.5

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: Consider supporting UUIDv7 and/or ULID

Summary

Determine if adding a ULID data type is feasible at this point.
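For reference, a minimal sketch of what such a type could look like, built on the third-party python-ulid package (an assumption; none of these names exist in advanced-alchemy yet):

from sqlalchemy.types import CHAR, TypeDecorator
from ulid import ULID  # assumes the `python-ulid` package


class ULIDType(TypeDecorator):
    """Store a ULID as its 26-character Crockford base32 string."""

    impl = CHAR(26)
    cache_ok = True

    def process_bind_param(self, value, dialect):
        # Serialize ULID values (or compatible strings) to canonical text form.
        return None if value is None else str(value)

    def process_result_value(self, value, dialect):
        # Parse the stored string back into a ULID instance.
        return None if value is None else ULID.from_str(value)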

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



ConflictError is a misleading name

Description

I get confused every time I see a ConflictError. This is usually the result of an uninitialised field, not of any kind of "conflict".

Per the source code:

class ConflictError(RepositoryError):
    """Data integrity error."""

I believe ConflictError should be renamed IntegrityError or DataIntegrityError.

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

advanced-alchemy 0.6.1

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: SQLAlchemy repository - Support composite primary keys

Summary

Support composite primary keys for all identifiers in the SQLAlchemy repository.

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: list_and_count triggering a lot of SQL queries

Description

I don't know if this could be related to issue #128.

Basically, I have a route that uses a Service which does a list_and_count. It took way too much time for this little select, so I decided to show the queries with the echo parameter. (I'm using the Litestar Fullstack repo, by the way.)

I noticed that I had a lot of queries that looked like the ones in the logs below.

I know I'm not using the latest version of advanced_alchemy, but I've looked through the commits since 0.8.3 and did not see anything related to this.

I am using a Postgres 15.1 database.

URL to code causing the issue

No response

MCVE

# Here are the main parts of my program



@dataclass
class DatabaseSettings:
    ECHO: bool = field(
        default_factory=lambda: os.getenv("DATABASE_ECHO", "False") in {"True", "1", "yes", "Y", "T", "debug"},
    )
    """Enable SQLAlchemy engine logs."""
    ECHO_POOL: bool = field(
        default_factory=lambda: os.getenv("DATABASE_ECHO_POOL", "False") in {"True", "1", "yes", "Y", "T", "debug"},
    )
    """Enable SQLAlchemy connection pool logs."""
    POOL_DISABLED: bool = field(
        default_factory=lambda: os.getenv("DATABASE_POOL_DISABLED", "False") in TRUE_VALUES,
    )
    """Disable SQLAlchemy pool configuration."""
    POOL_MAX_OVERFLOW: int = field(default_factory=lambda: int(os.getenv("DATABASE_MAX_POOL_OVERFLOW", "10")))
    """Max overflow for SQLAlchemy connection pool"""
    POOL_SIZE: int = field(default_factory=lambda: int(os.getenv("DATABASE_POOL_SIZE", "5")))
    """Pool size for SQLAlchemy connection pool"""
    POOL_TIMEOUT: int = field(default_factory=lambda: int(os.getenv("DATABASE_POOL_TIMEOUT", "30")))
    """Time in seconds for timing connections out of the connection pool."""
    POOL_RECYCLE: int = field(default_factory=lambda: int(os.getenv("DATABASE_POOL_RECYCLE", "300")))
    """Amount of time to wait before recycling connections."""
    POOL_PRE_PING: bool = field(
        default_factory=lambda: os.getenv("DATABASE_PRE_POOL_PING", "False") in TRUE_VALUES,
    )
    """Optionally ping database before fetching a session from the connection pool."""
    URL: str = field(default_factory=lambda: os.getenv("DATABASE_URL", "sqlite+aiosqlite:///db.sqlite3"))
    """SQLAlchemy Database URL."""
    MIGRATION_CONFIG: str = f"{BASE_DIR}/db/migrations/alembic.ini"
    """The path to the `alembic.ini` configuration file."""
    MIGRATION_PATH: str = f"{BASE_DIR}/db/migrations"
    """The path to the `alembic` database migrations."""
    MIGRATION_DDL_VERSION_TABLE: str = "ddl_version"
    """The name to use for the `alembic` versions table name."""
    FIXTURE_PATH: str = f"{BASE_DIR}/db/fixtures"
    """The path to JSON fixture files to load into tables."""
    _engine_instance: AsyncEngine | None = None
    """SQLAlchemy engine instance generated from settings."""

    @property
    def engine(self) -> AsyncEngine:
        return self.get_engine()

    def get_engine(self) -> AsyncEngine:
        if self._engine_instance is not None:
            return self._engine_instance
        if self.URL.startswith("postgres"):
            engine = create_async_engine(
                url=self.URL,
                future=True,
                json_serializer=encode_json,
                json_deserializer=decode_json,
                echo=self.ECHO,
                echo_pool=self.ECHO_POOL,
                max_overflow=self.POOL_MAX_OVERFLOW,
                pool_size=self.POOL_SIZE,
                pool_timeout=self.POOL_TIMEOUT,
                pool_recycle=self.POOL_RECYCLE,
                pool_pre_ping=self.POOL_PRE_PING,
                pool_use_lifo=True,  # use lifo to reduce the number of idle connections
                connect_args={"server_settings": {"jit": "off"}},
                poolclass=NullPool if self.POOL_DISABLED else None,
            )
            """Database session factory.

            See [`async_sessionmaker()`][sqlalchemy.ext.asyncio.async_sessionmaker].
            """

            @event.listens_for(engine.sync_engine, "connect")
            def _sqla_on_connect(dbapi_connection: Any, _: Any) -> Any:  # pragma: no cover
                """Using msgspec for serialization of the json column values means that the
                output is binary, not `str` like `json.dumps` would output.
                SQLAlchemy expects that the json serializer returns `str` and calls `.encode()` on the value to
                turn it to bytes before writing to the JSONB column. I'd need to either wrap `serialization.to_json` to
                return a `str` so that SQLAlchemy could then convert it to binary, or do the following, which
                changes the behaviour of the dialect to expect a binary value from the serializer.
                See Also https://github.com/sqlalchemy/sqlalchemy/blob/14bfbadfdf9260a1c40f63b31641b27fe9de12a0/lib/sqlalchemy/dialects/postgresql/asyncpg.py#L934  pylint: disable=line-too-long
                """

                def encoder(bin_value: bytes) -> bytes:
                    return b"\x01" + encode_json(bin_value)

                def decoder(bin_value: bytes) -> Any:
                    # the byte is the \x01 prefix for jsonb used by PostgreSQL.
                    # asyncpg returns it when format='binary'
                    return decode_json(bin_value[1:])

                dbapi_connection.await_(
                    dbapi_connection.driver_connection.set_type_codec(
                        "jsonb",
                        encoder=encoder,
                        decoder=decoder,
                        schema="pg_catalog",
                        format="binary",
                    ),
                )
                dbapi_connection.await_(
                    dbapi_connection.driver_connection.set_type_codec(
                        "json",
                        encoder=encoder,
                        decoder=decoder,
                        schema="pg_catalog",
                        format="binary",
                    ),
                )

        elif self.URL.startswith("sqlite"):
            engine = create_async_engine(
                url=self.URL,
                future=True,
                json_serializer=encode_json,
                json_deserializer=decode_json,
                echo=self.ECHO,
                echo_pool=self.ECHO_POOL,
                pool_recycle=self.POOL_RECYCLE,
                pool_pre_ping=self.POOL_PRE_PING,
            )
            """Database session factory.

            See [`async_sessionmaker()`][sqlalchemy.ext.asyncio.async_sessionmaker].
            """

            @event.listens_for(engine.sync_engine, "connect")
            def _sqla_on_connect(dbapi_connection: Any, _: Any) -> Any:  # pragma: no cover
                """Override the default begin statement.  The disables the built in begin execution."""
                dbapi_connection.isolation_level = None

            @event.listens_for(engine.sync_engine, "begin")
            def _sqla_on_begin(dbapi_connection: Any) -> Any:  # pragma: no cover
                """Emits a custom begin"""
                dbapi_connection.exec_driver_sql("BEGIN")

        else:
            engine = create_async_engine(
                url=self.URL,
                future=True,
                json_serializer=encode_json,
                connect_args={"server_settings": {"jit": "off"}},
                json_deserializer=decode_json,
                echo=self.ECHO,
                echo_pool=self.ECHO_POOL,
                max_overflow=self.POOL_MAX_OVERFLOW,
                pool_size=self.POOL_SIZE,
                pool_timeout=self.POOL_TIMEOUT,
                pool_recycle=self.POOL_RECYCLE,
                pool_pre_ping=self.POOL_PRE_PING,
            )
        self._engine_instance = engine
        return self._engine_instance


# Models 


class User(UUIDAuditBase):
    __tablename__ = "user_accounts"  # type: ignore[assignment]
    __table_args__ = {"comment": "User accounts for application access"}
    __pii_columns__ = {"name", "email", "avatar_url"}

    email: Mapped[str] = mapped_column(unique=True, index=True, nullable=False)
    name: Mapped[str | None] = mapped_column(nullable=True, default=None)
    # ...

    devices: Mapped[list[Device]] = relationship(
        back_populates="user",
        lazy="selectin",
        uselist=True,
        cascade="all, delete",
    )
    groups: Mapped[list[Group]] = relationship(
        back_populates="user",
        uselist=True,
        lazy="selectin",
        cascade="all, delete",
    )


    def __repr__(self) -> str:
        return self.email



class Group(UUIDAuditBase):
    """Group of devices that belong to a specific user."""

    __tablename__ = "groups"
    __table_args__ = {"comment": "A group of devices belonging to a user"}

    name: Mapped[str] = mapped_column(String(length=255), nullable=False, index=True)

    # -----------
    # ORM Relationships
    # ------------
    user_id: Mapped[UUID] = mapped_column(ForeignKey("user_accounts.id", ondelete="cascade"), nullable=False)
    user: Mapped[User] = relationship(
        back_populates="groups",
        innerjoin=True,
        uselist=False,
        lazy="joined",
    )

    devices: Mapped[list[Device]] = relationship(
        secondary=device_group_association,
        back_populates="groups",
        lazy="selectin",
    )

    def __repr__(self) -> str:
        return self.name



class Device(UUIDAuditBase):
    """Device to send push notifications to."""

    __tablename__ = "devices"
    __table_args__ = (UniqueConstraint("user_id", "expo_token"),)  # type: ignore[assignment]


    # -----------
    # ORM Relationships
    # ------------
    user_id: Mapped[UUID] = mapped_column(ForeignKey("user_accounts.id", ondelete="cascade"), nullable=False)
    user: Mapped[User] = relationship(
        back_populates="devices",
        innerjoin=True,
        uselist=False,
        lazy="joined",
    )

    groups: Mapped[list[Group]] = relationship(
        secondary=device_group_association,
        back_populates="devices",
    )

    def __repr__(self) -> str:
        return self.expo_token




async def provide_groups_service(
    db_session: AsyncSession | None = None,
) -> AsyncGenerator[GroupService, None]:
    """

    Args:
        db_session (AsyncSession | None, optional): current database session. Defaults to None.

    """
    async with GroupService.new(
        session=db_session,
        statement=select(Group).options(
            selectinload(Group.devices),
        ),
    ) as service:
        yield service


# And the controller

class GroupController(Controller):
    tags = ["Groups"]

    dependencies = {
        "users_service": Provide(provide_users_service),
        "devices_service": Provide(provide_devices_service),
        "groups_service": Provide(provide_groups_service),
    }

    path = "/api/groups"

    @get(
        path="/",
        media_type=MediaType.JSON,
        cache=False,
        status_code=200,
        guards=[],
        summary="Route for an user to list the groups and count.",
    )
    async def list_groups(
        self,
        request: Request,
        groups_service: GroupService,
        current_user: UserModel,
    ) -> OffsetPagination[GroupShow]:

        results, total = await groups_service.list_and_count(user_id=current_user.id)

        return groups_service.to_schema(GroupShow, results, total)

Steps to reproduce


Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

2024-05-12 23:01:43,456 INFO sqlalchemy.engine.Engine [caching disabled (excess depth for ORM loader options) 0.00310s ] ('11db6fe2-861b-49c7-97db-6b62c753ca5f',)
INFO - 2024-05-12 23:01:43,454 - sqlalchemy.engine.Engine - base - SELECT user_account_roles.user_id AS user_account_roles_user_id, user_account_roles.role_id AS user_account_roles_role_id, user_account_roles.assigned_at AS user_account_roles_assigned_at, user_account_roles.id AS user_account_roles_id, user_account_roles.sa_orm_sentinel AS user_account_roles_sa_orm_sentinel, user_account_roles.created_at AS user_account_roles_created_at, user_account_roles.updated_at AS user_account_roles_updated_at, user_accounts_1.email AS user_accounts_1_email, user_accounts_1.name AS user_accounts_1_name, user_accounts_1.team_name AS user_accounts_1_team_name, user_accounts_1.hashed_password AS user_accounts_1_hashed_password, user_accounts_1.avatar_url AS user_accounts_1_avatar_url, user_accounts_1.is_active AS user_accounts_1_is_active, user_accounts_1.is_superuser AS user_accounts_1_is_superuser, user_accounts_1.is_verified AS user_accounts_1_is_verified, user_accounts_1.is_demo AS user_accounts_1_is_demo, user_accounts_1.verified_at AS user_accounts_1_verified_at, user_accounts_1.joined_at AS user_accounts_1_joined_at, user_accounts_1.login_count AS user_accounts_1_login_count, user_accounts_1.premium_plan AS user_accounts_1_premium_plan, user_accounts_1.premium_plan_expiration_date AS user_accounts_1_premium_plan_expiration_date, user_accounts_1.lemonsqueezy_customer_id AS user_accounts_1_lemonsqueezy_customer_id, user_accounts_1.id AS user_accounts_1_id, user_accounts_1.sa_orm_sentinel AS user_accounts_1_sa_orm_sentinel, user_accounts_1.created_at AS user_accounts_1_created_at, user_accounts_1.updated_at AS user_accounts_1_updated_at, roles_1.slug AS roles_1_slug, roles_1.name AS roles_1_name, roles_1.description AS roles_1_description, roles_1.id AS roles_1_id, roles_1.sa_orm_sentinel AS roles_1_sa_orm_sentinel, roles_1.created_at AS roles_1_created_at, roles_1.updated_at AS roles_1_updated_at 
FROM user_account_roles JOIN user_accounts AS user_accounts_1 ON user_accounts_1.id = user_account_roles.user_id JOIN roles AS roles_1 ON roles_1.id = user_account_roles.role_id 
WHERE user_account_roles.user_id IN ($1::UUID)
INFO - 2024-05-12 23:01:43,456 - sqlalchemy.engine.Engine - base - [caching disabled (excess depth for ORM loader options) 0.00310s ] ('11db6fe2-861b-49c7-97db-6b62c753ca5f',)
2024-05-12 23:01:43,503 INFO sqlalchemy.engine.Engine SELECT user_account_roles.user_id AS user_account_roles_user_id, user_account_roles.role_id AS user_account_roles_role_id, user_account_roles.assigned_at AS user_account_roles_assigned_at, user_account_roles.id AS user_account_roles_id, user_account_roles.sa_orm_sentinel AS user_account_roles_sa_orm_sentinel, user_account_roles.created_at AS user_account_roles_created_at, user_account_roles.updated_at AS user_account_roles_updated_at, user_accounts_1.email AS user_accounts_1_email, user_accounts_1.name AS user_accounts_1_name, user_accounts_1.team_name AS user_accounts_1_team_name, user_accounts_1.hashed_password AS user_accounts_1_hashed_password, user_accounts_1.avatar_url AS user_accounts_1_avatar_url, user_accounts_1.is_active AS user_accounts_1_is_active, user_accounts_1.is_superuser AS user_accounts_1_is_superuser, user_accounts_1.is_verified AS user_accounts_1_is_verified, user_accounts_1.is_demo AS user_accounts_1_is_demo, user_accounts_1.verified_at AS user_accounts_1_verified_at, user_accounts_1.joined_at AS user_accounts_1_joined_at, user_accounts_1.login_count AS user_accounts_1_login_count, user_accounts_1.premium_plan AS user_accounts_1_premium_plan, user_accounts_1.premium_plan_expiration_date AS user_accounts_1_premium_plan_expiration_date, user_accounts_1.lemonsqueezy_customer_id AS user_accounts_1_lemonsqueezy_customer_id, user_accounts_1.id AS user_accounts_1_id, user_accounts_1.sa_orm_sentinel AS user_accounts_1_sa_orm_sentinel, user_accounts_1.created_at AS user_accounts_1_created_at, user_accounts_1.updated_at AS user_accounts_1_updated_at, roles_1.slug AS roles_1_slug, roles_1.name AS roles_1_name, roles_1.description AS roles_1_description, roles_1.id AS roles_1_id, roles_1.sa_orm_sentinel AS roles_1_sa_orm_sentinel, roles_1.created_at AS roles_1_created_at, roles_1.updated_at AS roles_1_updated_at 
FROM user_account_roles JOIN user_accounts AS user_accounts_1 ON user_accounts_1.id = user_account_roles.user_id JOIN roles AS roles_1 ON roles_1.id = user_account_roles.role_id 
WHERE user_account_roles.user_id IN ($1::UUID)
2024-05-12 23:01:43,504 INFO sqlalchemy.engine.Engine [caching disabled (excess depth for ORM loader options) 0.00148s ] ('11db6fe2-861b-49c7-97db-6b62c753ca5f',)
INFO - 2024-05-12 23:01:43,503 - sqlalchemy.engine.Engine - base - SELECT user_account_roles.user_id AS user_account_roles_user_id, user_account_roles.role_id AS user_account_roles_role_id, user_account_roles.assigned_at AS user_account_roles_assigned_at, user_account_roles.id AS user_account_roles_id, user_account_roles.sa_orm_sentinel AS user_account_roles_sa_orm_sentinel, user_account_roles.created_at AS user_account_roles_created_at, user_account_roles.updated_at AS user_account_roles_updated_at, user_accounts_1.email AS user_accounts_1_email, user_accounts_1.name AS user_accounts_1_name, user_accounts_1.team_name AS user_accounts_1_team_name, user_accounts_1.hashed_password AS user_accounts_1_hashed_password, user_accounts_1.avatar_url AS user_accounts_1_avatar_url, user_accounts_1.is_active AS user_accounts_1_is_active, user_accounts_1.is_superuser AS user_accounts_1_is_superuser, user_accounts_1.is_verified AS user_accounts_1_is_verified, user_accounts_1.is_demo AS user_accounts_1_is_demo, user_accounts_1.verified_at AS user_accounts_1_verified_at, user_accounts_1.joined_at AS user_accounts_1_joined_at, user_accounts_1.login_count AS user_accounts_1_login_count, user_accounts_1.premium_plan AS user_accounts_1_premium_plan, user_accounts_1.premium_plan_expiration_date AS user_accounts_1_premium_plan_expiration_date, user_accounts_1.lemonsqueezy_customer_id AS user_accounts_1_lemonsqueezy_customer_id, user_accounts_1.id AS user_accounts_1_id, user_accounts_1.sa_orm_sentinel AS user_accounts_1_sa_orm_sentinel, user_accounts_1.created_at AS user_accounts_1_created_at, user_accounts_1.updated_at AS user_accounts_1_updated_at, roles_1.slug AS roles_1_slug, roles_1.name AS roles_1_name, roles_1.description AS roles_1_description, roles_1.id AS roles_1_id, roles_1.sa_orm_sentinel AS roles_1_sa_orm_sentinel, roles_1.created_at AS roles_1_created_at, roles_1.updated_at AS roles_1_updated_at 
FROM user_account_roles JOIN user_accounts AS user_accounts_1 ON user_accounts_1.id = user_account_roles.user_id JOIN roles AS roles_1 ON roles_1.id = user_account_roles.role_id 
WHERE user_account_roles.user_id IN ($1::UUID)
INFO - 2024-05-12 23:01:43,504 - sqlalchemy.engine.Engine - base - [caching disabled (excess depth for ORM loader options) 0.00148s ] ('11db6fe2-861b-49c7-97db-6b62c753ca5f',)

Package Version

0.8.3

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)

Bug: select queries on repositories are not always idempotent

Description

When executing the same select query twice with a where clause on a relationship, the second query gives an empty result.

For example, select(State).where(State.country == usa) (see below for complete reproducible code).

URL to code causing the issue

No response

MCVE

from advanced_alchemy.base import UUIDBase
from advanced_alchemy.repository import SQLAlchemySyncRepository
from sqlalchemy import create_engine, select, ForeignKey, func
from sqlalchemy.orm import Mapped, Session, sessionmaker, mapped_column, relationship


class Country(UUIDBase):
    name: Mapped[str]


class State(UUIDBase):
    name: Mapped[str]
    country_id: Mapped[str] = mapped_column(ForeignKey(Country.id))

    country = relationship(Country)


class USStateRepository(SQLAlchemySyncRepository[State]):
    model_type = State


engine = create_engine("sqlite:///:memory:", future=True, echo=True)
# engine = create_engine("postgresql://localhost/sandbox", future=True)
session_factory: sessionmaker[Session] = sessionmaker(engine, expire_on_commit=False)


def run_script() -> None:
    with engine.begin() as conn:
        State.metadata.create_all(conn)

    with session_factory() as db_session:
        usa = Country(name="United States of America")
        france = Country(name="France")
        db_session.add(usa)
        db_session.add(france)

        california = State(name="California", country=usa)
        oregon = State(name="Oregon", country=usa)
        ile_de_france = State(name="Île-de-France", country=france)

        repo = USStateRepository(session=db_session)
        repo.add(california)
        repo.add(oregon)
        repo.add(ile_de_france)
        db_session.commit()

        print("\n" + "-" * 80 + "\n")

        # Using only the ORM, this works fine:

        stmt = select(State).where(State.country_id == usa.id).with_only_columns(func.count())
        count = db_session.execute(stmt).scalar_one()
        assert count == 2, f"Expected 2, got {count}"
        count = db_session.execute(stmt).scalar_one()
        assert count == 2, f"Expected 2, got {count}"

        stmt = select(State).where(State.country == usa).with_only_columns(func.count())
        count = db_session.execute(stmt).scalar_one()
        assert count == 2, f"Expected 2, got {count}"
        count = db_session.execute(stmt).scalar_one()
        assert count == 2, f"Expected 2, got {count}"

        print("\n" + "-" * 80 + "\n")

        # Using the repository, this works:
        stmt1 = select(State).where(State.country_id == usa.id)

        print("First query")
        count = repo.count(statement=stmt1)
        assert count == 2, f"Expected 2, got {count}"

        print("Second query")
        count = repo.count(statement=stmt1)
        assert count == 2, f"Expected 2, got {count}"

        print("\n" + "-" * 80 + "\n")

        # But this fails (only after the second query):
        stmt2 = select(State).where(State.country == usa)

        print("First query")
        count = repo.count(statement=stmt2)
        assert count == 2, f"Expected 2, got {count}"

        print("Second query")
        count = repo.count(statement=stmt2)
        assert count == 2, f"Expected 2, got {count}"

        # It also fails with
        states = repo.list(statement=stmt2)
        count = len(states)
        assert count == 2, f"Expected 2, got {count}"



if __name__ == "__main__":
    run_script()

Steps to reproduce

No response

Screenshots

No response

Logs

First query
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine SELECT count(state.id) AS count_1
FROM state
WHERE ? = state.country_id
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine [generated in 0.00005s] (<memory at 0x1072a13c0>,)
Second query
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine SELECT count(state.id) AS count_1
FROM state
WHERE ? = state.country_id
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine [cached since 0.0002423s ago] (None,)
2024-02-01 14:12:48,387 INFO sqlalchemy.engine.Engine ROLLBACK
Traceback (most recent call last):
  File "/Users/fermigier/projects/abilian-analytics/sandbox/debug_aa.py", line 97, in <module>
    run_script()
  File "/Users/fermigier/projects/abilian-analytics/sandbox/debug_aa.py", line 87, in run_script
    assert count == 2, f"Expected 2, got {count}"
AssertionError: Expected 2, got 0

Jolt Project Version

0.7.0

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug: Repository initialization failing when passing execution_options argument

Description

After updating to the latest version, the newly introduced execution_options argument of the repository throws an InvalidRequestError when provided during repository initialization. This issue occurs on both sync and async repositories.

The problem stems from the lambda statement, and I found out it can be fixed by simply setting track_bound_values=False and enable_tracking=False (here and here) on the lambda statement during repository initialization.

This issue does not occur in the repository functions (e.g. get), since there the _get_base_statement function correctly sets the tracking variables to False.

I would be more than happy to contribute and open a PR to address this.
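For illustration, the fix described above maps onto SQLAlchemy's documented lambda_stmt options roughly like this (a simplified sketch, not the actual advanced-alchemy internals):

from sqlalchemy import lambda_stmt, select


def build_base_statement(model_type):
    # Disabling tracking prevents SQLAlchemy from trying to extract bound
    # values (such as the execution_options dict) from the closure, which is
    # what raises the InvalidRequestError during initialization.
    return lambda_stmt(
        lambda: select(model_type),
        track_bound_values=False,
        enable_tracking=False,
    )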

URL to code causing the issue

No response

MCVE

from advanced_alchemy.base import UUIDBase
from advanced_alchemy.repository import SQLAlchemySyncRepository
from sqlalchemy import create_engine
from sqlalchemy.orm import Mapped, Session, sessionmaker

# Note that the same applies also to SQLAlchemyAsyncRepository

class User(UUIDBase):
    name: Mapped[str]


class UserRepository(SQLAlchemySyncRepository[User]):
    model_type = User


engine = create_engine("sqlite:///:memory:")
session_factory: sessionmaker[Session] = sessionmaker(engine, expire_on_commit=False)

with engine.begin() as conn:
    User.metadata.create_all(conn)

with session_factory() as session:
    repository = UserRepository(session=session, execution_options={"populate_existing": True})

Steps to reproduce


Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

sqlalchemy.exc.InvalidRequestError: Can't invoke Python callable keys() inside of lambda expression argument at <code object <lambda> at 0x1036c26b0, file /lib/python3.11/site-packages/advanced_alchemy/repository/_sync.py", line 110>; lambda SQL constructs should not invoke functions from closure variables to produce literal values since the lambda SQL system normally extracts bound values without actually invoking the lambda or any functions within it.  Call the function outside of the lambda and assign to a local variable that is used in the lambda as a closure variable, or set track_bound_values=False if the return value of this function is used in some other way other than a SQL bound value.

Package Version

v0.11.0

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)

unreachable code in entity CommonTableAttributes

class CommonTableAttributes(BasicAttributes):
    """Common attributes for SQLAlchemy tables."""

    if TYPE_CHECKING:
        __tablename__: str
    else:

        @declared_attr.directive
        def __tablename__(cls) -> str:
            """Infer table name from class name."""

            return table_name_regexp.sub(r"_\1", cls.__name__).lower()

should be:

class CommonTableAttributes(BasicAttributes):
    """Common attributes for SQLAlchemy tables."""

    if TYPE_CHECKING:
        __tablename__: str

    @declared_attr.directive
    def __tablename__(cls) -> str:
        """Infer table name from class name."""

        return table_name_regexp.sub(r"_\1", cls.__name__).lower()

Since I don't know the codebase well, I'm curious what effect this might have, if any...

Enhancement: Create a custom encrypted field SQLAlchemy type.

Summary

Create a new encrypted field that supports multiple backends, including cryptography and tink.
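A minimal sketch of one possible shape, using the cryptography backend (Fernet); key management is deliberately simplified, and none of these names exist in advanced-alchemy yet:

from cryptography.fernet import Fernet
from sqlalchemy.types import String, TypeDecorator


class EncryptedString(TypeDecorator):
    """Store a string encrypted at rest; `key` is a Fernet key, e.g. Fernet.generate_key()."""

    impl = String
    cache_ok = True

    def __init__(self, key: bytes, **kwargs):
        super().__init__(**kwargs)
        self._fernet = Fernet(key)

    def process_bind_param(self, value, dialect):
        # Encrypt on the way in; ciphertext is stored as text.
        return None if value is None else self._fernet.encrypt(value.encode()).decode()

    def process_result_value(self, value, dialect):
        # Decrypt on the way out.
        return None if value is None else self._fernet.decrypt(value.encode()).decode()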

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: Incorrect datetime in Postgresql for AuditColumns class

Description

Hello,

I do not know if it's an issue or not.

In the AuditColumns class, when the created_at and updated_at fields are inserted, the datetime is stored with the timezone configured in the database:

updated_at: Mapped[datetime] = mapped_column(
        DateTimeUTC(timezone=True),
        default=lambda: datetime.now(timezone.utc),
    )

I got the following entry in postgres:
2023-10-23 17:16:36.71099+02 with the timezone +2

What I expect to have is the following in the database 2023-10-23 15:16:36.71099

In Postgres, to get that, there is the function timezone('utc', now()).
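For illustration, one way to get naive UTC values is a timezone-naive column with a server-side default; this is a generic SQLAlchemy sketch, not the advanced-alchemy implementation:

from datetime import datetime

from sqlalchemy import DateTime, text
from sqlalchemy.orm import Mapped, mapped_column


class NaiveUTCAuditColumns:
    """Sketch: store naive UTC timestamps via Postgres's timezone('utc', now())."""

    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=False),
        server_default=text("timezone('utc', now())"),
    )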

URL to code causing the issue

No response

MCVE

from advanced_alchemy.base import UUIDAuditBase as TimestampedDatabaseModel

class Customer(TimestampedDatabaseModel):

    """Customer Model."""

    __tablename__ = "customer"  # type: ignore[assignment]
    __table_args__ = {"comment": "Customer for data retrieval"}

Steps to reproduce

1. Create a class that inherits AuditColumns
2. Create a migrations schema
3. Create a new entity through this model
4. The database entry got the timezone of the server instead of the timezone defined in the AuditColumns (UTC)

Screenshots

psql

Logs

No response

Jolt Project Version

advanced-alchemy 0.3.0

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug: Litestar crashes after update to version 0.10.0

Description

My application uses Pydantic models for input and output.
After updating Advanced Alchemy from 0.9.4 to the new version 0.10.0, my app now crashes.

I have already found the problem:
The to_schema function in _service/utils.py no longer checks if schema_type is a subclass of BaseModel. Instead, it now checks whether the schema is of type ModelMetaclass.
Therefore the condition is skipped and cast("ModelOrRowMappingT", data) is returned.

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Package Version

0.10.0

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)

Docs: Explain the intended usage of repositories and services

Summary

This package looks very useful, but it is (as I'm sure you know) missing some essential documentation. In particular, it would be useful to have more documentation about repositories and services and their intended use cases. I've browsed through the litestar-fullstack application, which uses both, and it's not especially clear to me what the different intended use cases of each are. Not sure if I'm missing some essential background knowledge that would make the difference clearer, but if so, it would be nice to have it explained. Thanks for listening!
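For context, the layering suggested by the README examples above is roughly: repositories operate on SQLAlchemy model instances only, while services wrap a repository, also accept plain dicts, and handle the conversion. A hedged illustration reusing the User classes from the README:

# Repository layer: persistence operations on model instances only.
repo = UserRepository(session=db_session)
repo.add(User(email="cody@example.com", name="Cody"))  # must pass a model

# Service layer: wraps the repository and also accepts plain dicts,
# converting them to models internally.
service = UserService(session=db_session)
service.create({"email": "cody@example.com", "name": "Cody"})  # dict is fine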

Bug: When we generate our own column names instead of using the columns in the SQL table, the to_dict function does not work

Description

When I want to avoid using the column names found in the SQL table, as in the ExampleModel below, I create custom attribute names that correspond to them. However, when I perform an upsert operation through the table's service, the to_dict function retrieves the column names from the SQL table instead of the custom names I gave the model, so the keys do not match the model's attributes.

We've encountered this issue before with the model_from_dict function, and we fixed it by adding the following code. #78

def model_from_dict(model: ModelT, **kwargs: Any) -> ModelT:
    """Return ORM Object from Dictionary."""
    data = {
        column_name: kwargs[column_name]
        for column_name in model.__mapper__.columns.keys()  # noqa: SIM118
        if column_name in kwargs
    }
    return model(**data)  # type: ignore  # noqa: PGH003

If we need to continue from the current issue, we can proceed with the following model.

Model:

class ExampleModel(BigIntAuditBase):
    __tablename__ = "Example_Model"

    field_one: Mapped[str] = mapped_column(
        "FIELDONE", String(10), ForeignKey("ERP_Items.erp_code"), nullable=False
    )
    field_two: Mapped[str] = mapped_column(
        "FIELDTWO", String(10), ForeignKey("ERP_STAR_ITEMS.erp_code"), nullable=False
    )

Current:

def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:
    """Convert model to dictionary.

    Returns:
        dict[str, Any]: A dict representation of the model
    """
    exclude = {"sa_orm_sentinel", "_sentinel"}.union(self._sa_instance_state.unloaded).union(exclude or [])  # type: ignore[attr-defined]
    return {field.name: getattr(self, field.name) for field in self.__table__.columns if field.name not in exclude}

Must be:

def to_dict(self, exclude: set[str] | None = None) -> dict[str, Any]:
    """Convert model to dictionary.

    Returns:
        dict[str, Any]: A dict representation of the model
    """
    exclude = {"sa_orm_sentinel", "_sentinel"}.union(self._sa_instance_state.unloaded).union(exclude or [])  # type: ignore[attr-defined]
    return {field: getattr(self, field) for field in self.__mapper__.columns.keys() if field not in exclude}

URL to code causing the issue

https://github.com/jolt-org/advanced-alchemy/blob/main/advanced_alchemy/base.py#L215

MCVE


Steps to reproduce

1. Create a model and define column names different from those in the SQL table.
2. Create a service for the model.
3. Use an upsert method within the service.
4. Finally, trigger the place where you used upsert in this service.

Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

No response

Jolt Project Version

v0.9.0

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug: Impossible to use unique() method for results after executing

Description

I am encountering a sqlalchemy.exc.InvalidRequestError when using the lazy='joined' loading strategy across my models. The error states: "The unique() method must be invoked on this Result, as it contains results that include joined eager loads against collections." However, applying unique() is not feasible in my current implementation; there is no way to make results unique() when using lazy='joined'.

URL to code causing the issue

No response

MCVE

# Here you would put a simplified version of your code that still produces the error.
# For example:

from sqlalchemy.orm import joinedload
from myapp.models import Parent, Child

# Example query that leads to the error
session.execute(select(Parent).options(joinedload(Parent.children))).scalars()
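For reference, SQLAlchemy's documented requirement for joined eager loads against collections is to call unique() on the result before consuming it, e.g.:

# Calling .unique() on the scalar result de-duplicates parent rows
# produced by the collection join:
parents = (
    session.execute(select(Parent).options(joinedload(Parent.children)))
    .scalars()
    .unique()
    .all()
)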

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

0.0.9

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug: If a column has positional name model_from_dict can't find column

Description

This is our Model definition for some reason :)

class InventoryModel(BigIntBase):
    __tablename__ = "MV_ERP_INVENTORY"

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    building_name: Mapped[str] = mapped_column(
        "BUILDINGNAME", String(45), nullable=False
    )
# .../advanced_alchemy/repository/_util.py

def model_from_dict(model: ModelT, **kwargs: Any) -> ModelT:
    """Return ORM Object from Dictionary."""
    data = {}
    for column in model.__table__.columns:
        column_val = kwargs.get(column.name, None)
        if column_val is not None:
            data[column.name] = column_val
    return model(**data) 

columns in list[model.__table__.columns] returns

[ 
 Column('id', Integer(), table=<MV_ERP_INVENTORY>, primary_key=True, nullable=False),
 Column('BUILDINGNAME', String(length=45), table=<MV_ERP_INVENTORY>, nullable=False)
]

so column.name returns BUILDINGNAME for the building_name column.

So whenever we pass a dict like:

await self.update(
    data={"building_name": "some_building_name"},
    item_id=inventory_id,
    auto_commit=True,
)

As a workaround we tried "BUILDINGNAME" in the data dict; it throws other errors (invalid key for model), as expected.

So changing model.__table__.columns to model.__mapper__.columns.keys() would return building_name instead, and could build the model correctly for models whose columns have positional names.

We can also create pull request for this.

w/ @ysnbyzli

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

1. Define a model with one or more columns with positional name
2. Use repository.update(data={"column_with_positional_name": "updated_data"}) to update data
3. Debug through model_from_dict 
4. See that the column name comes from the positional name instead of the model's attribute key
5. It won't update the column

Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

None

Jolt Project Version

advanced-alchemy = "^0.3.3"

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: Create a file field type for SQLAlchemy

Summary

Create a custom field type for storing file references. It should support multiple cloud backends and the local filesystem. A rough sketch follows.
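A minimal sketch of one possible shape, persisting only a backend-qualified reference string and leaving the actual I/O to the chosen storage backend (all names here are hypothetical):

from sqlalchemy.types import String, TypeDecorator


class FileReference(TypeDecorator):
    """Store a file as a URI-style reference, e.g. "s3://bucket/key" or
    "file:///tmp/report.pdf"; upload/download is handled by the backend."""

    impl = String(2000)
    cache_ok = True

    def process_bind_param(self, value, dialect):
        # Persist whatever reference object we get as its string form.
        return None if value is None else str(value)

    def process_result_value(self, value, dialect):
        # Return the raw reference; resolving it to bytes is the backend's job.
        return value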

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: UUIDPrimaryKey.id type is `Unknown | UUID` when uuid_utils is not installed.

Description

error: Type of "id" is partially unknown
    Type of "id" is "Unknown | UUID" (reportUnknownMemberType)

URL to code causing the issue

No response

MCVE

from advanced_alchemy.base import UUIDAuditBase


class User(UUIDAuditBase):
    pass


def foo(user: User) -> None:
    user.id  # pyright error: reportUnknownMemberType

Steps to reproduce

  1. Install pyright (I'm using 1.1.350) and advanced-alchemy (>0.7.0).
  2. Run pyright type checking on the above code.

Screenshots

No response

Logs

No response

Jolt Project Version

0.7.3

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: DTOs: support fields defined in mixins and `declared_attr` fields while excluding implicit fields

Summary

SQLAlchemyDTOConfig has a field named include_implicit_fields; it is either True, False, or 'hybrid-only'.

The first issue here is that there is another type of mapping that can be defined using declared_attr (see https://docs.sqlalchemy.org/en/20/orm/mapping_api.html#sqlalchemy.orm.declared_attr). Currently the only way to show this mapping is to set include_implicit_fields=True.

The use case is to enable showing declared_attr fields while still hiding implicitly mapped columns.

The second issue is fields defined in a mixin. They will also not appear unless include_implicit_fields=True.

Basic Example

Here is a small example based on the SQLAlchemy docs:

from sqlalchemy import Column, ForeignKey, Integer, MetaData, String, Table
from sqlalchemy.orm import declared_attr
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from litestar.contrib.sqlalchemy.dto import SQLAlchemyDTO, SQLAlchemyDTOConfig

metadata = MetaData()

my_model_table = Table(
    "my_model_table",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("log_record_id", Integer),  # this is a foreign key
    Column("age", Integer),
    Column("grade", Integer),
    Column("name", String),
    Column("occupation", String),
)

class Base(DeclarativeBase):
    pass


class CommonMixin:
    """define a series of common elements that may be applied to mapped
    classes using this class as a mixin class."""

    @declared_attr.directive
    def __tablename__(cls) -> str:
        return cls.__name__.lower()

    __table_args__ = {"mysql_engine": "InnoDB"}
    __mapper_args__ = {"eager_defaults": True}

    id: Mapped[int] = mapped_column(primary_key=True)


class HasLogRecord:
    """mark classes that have a many-to-one relationship to the
    ``LogRecord`` class."""

    log_record_id: Mapped[int] = mapped_column(ForeignKey("logrecord.id"))

    @declared_attr
    def log_record(self) -> Mapped["LogRecord"]:
        return relationship("LogRecord")


class LogRecord(CommonMixin, Base):
    log_info: Mapped[str]


class MyModelView(CommonMixin, HasLogRecord, Base):
    __table__ = my_model_table

    name: Mapped[str]

# this dto will show all fields including the relationship defined in the mixin
class DTO1(SQLAlchemyDTO[MyModelView]):
    config = SQLAlchemyDTOConfig(include_implicit_fields=True)


# this dto will only show the `name` field
class DTO2(SQLAlchemyDTO[MyModelView]):
    config = SQLAlchemyDTOConfig(include_implicit_fields=False)


# this will behave like DTO2 since there isn't any hybrid field
class DTO3(SQLAlchemyDTO[MyModelView]):
    config = SQLAlchemyDTOConfig(include_implicit_fields="hybrid-only")

Drawbacks and Impact

No response

Unresolved questions

No response



Enhancement: ChoicesFilter/BooleanFilter

Summary

It would be nice to have these filters.

Basic Example

def provide_boolean_filter(
    some_field: bool = Parameter(title="Boolean filter", query="Boolean", default=True),
) -> bool:
    return some_field

Drawbacks and Impact

No response

Unresolved questions

No response



Enhancement: Add a `drop_all` function

Summary

There should be a way to drop all current tables in your database including the alembic_versioning table. This is really useful for rapid prototyping.
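A rough sketch of what such a helper could do, assuming a synchronous engine and the default alembic version table name (drop_all is the proposed helper, not an existing one):

from sqlalchemy import Engine, text


def drop_all(engine: Engine, metadata, version_table: str = "alembic_version") -> None:
    """Drop every table known to `metadata`, then the alembic version table."""
    metadata.drop_all(engine)
    with engine.begin() as conn:
        conn.execute(text(f"DROP TABLE IF EXISTS {version_table}"))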

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response



Enhancement: Add Quart

Summary

Quart is Flask made async: https://quart.palletsprojects.com/en/latest/

Wondering if there is any interest in adding it as a potential target?

Basic Example

No response

Drawbacks and Impact

More development maintenance

Unresolved questions

What other targets could we add?



Unintuitive API

Description

I have recently been bitten by the two following issues, in sequence:

  • With repository.get(): in Python, get is one of the most commonly used methods, and it's easy to assume that it means "return something if found, or None (or some other default value) otherwise". In advanced-alchemy's case, it raises an exception when nothing is found.

  • With repository.get_one_or_none(), it's easy to assume that it has a similar signature to get. But no: one has to provide the primary key by name (e.g. get(id) vs. get_one_or_none(id=id)). This can also lead to confusion (see the sketch below).

In other words, there is a lack of consistency between:

  • repository.get() and dict.get()
  • repository.get() and repository.get_one_or_none()
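
A minimal sketch of the two call styles, assuming a UserRepository instance repo as in the README example (NotFoundError lives in advanced_alchemy.exceptions):

from advanced_alchemy.exceptions import NotFoundError

# repository.get() takes the primary key positionally and raises on a miss ...
try:
    user = repo.get(user_id)
except NotFoundError:
    user = None

# ... while get_one_or_none() takes keyword filters and returns None on a miss.
maybe_user = repo.get_one_or_none(id=user_id)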

It's probably too late to change the API now, but I suggest anyway:

  • renaming get to get_one, and introducing a get method similar to the current one that returns None instead of raising an exception;
  • renaming the other current get_* methods to use a different verb ("fetch"? "retrieve"? ...).

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

0.6.1.

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug: CollectionFilter returns all entries if values is empty

Description

Basically what the title says: when initializing a CollectionFilter with an empty list, the resulting query returns all entries, whereas I'd expect it to return none.
I know it's a special case that needs to be handled differently, but this seems like very inconsistent behaviour.
Here's the specific line that leads to the bug (currently in the advanced-alchemy project, although the code is unchanged since it was part of litestar):
https://github.com/jolt-org/advanced-alchemy/blob/46d3e7acbc7a391b4bab06fe7e64f3d45826270a/advanced_alchemy/repository/_async.py#L1109

The issue was first raised on the litestar Discord.
The bug is present in the sync repository as well; a sketch follows below.
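
A minimal sketch of the surprising behaviour, assuming a repo set up as in the README example:

from advanced_alchemy.filters import CollectionFilter

# An empty values collection currently matches every row instead of none.
users = repo.list(CollectionFilter(field_name="id", values=[]))
assert len(users) > 0  # expected: an empty result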

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

master

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: `upsert_many` should use a `merge` statement when possible.

Summary

Implement the MERGE operation for Oracle and PostgreSQL 15+.

Basic Example

No response
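
For illustration, a hedged sketch of the statement shape this would emit on PostgreSQL 15+; the table and column names are hypothetical:

from sqlalchemy import text

# one-row upsert expressed as a MERGE rather than INSERT ... ON CONFLICT
merge_stmt = text(
    """
    MERGE INTO user_account AS target
    USING (VALUES (:id, :name)) AS source (id, name)
    ON target.id = source.id
    WHEN MATCHED THEN UPDATE SET name = source.name
    WHEN NOT MATCHED THEN INSERT (id, name) VALUES (source.id, source.name)
    """
)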

Drawbacks and Impact

No response

Unresolved questions

No response



Bug: queries using cached argument values

Description

I've tried the following with both the get_one and get_one_or_none methods, in the context of a Litestar dev (local) environment.
The first query against my user table with an arbitrary kwarg fetches the correct result. Querying for a different user thereafter has SQLAlchemy reuse a cached argument value. The log below illustrates this:

# initial query
# await user_repository.get_one(id_number="1234567890")

2024-01-16 09:38:45,451 INFO sqlalchemy.engine.Engine SELECT "user".name
FROM "user" 
WHERE "user".id_number = $1::VARCHAR
2024-01-16 09:38:45,453 INFO sqlalchemy.engine.Engine [generated in 0.00203s] ('1234567890',)
------
# second query
# await user_repository.get_one(id_number="789")

2024-01-16 09:41:39,580 INFO sqlalchemy.engine.Engine SELECT "user".name
FROM "user" 
WHERE "user".id_number = $1::VARCHAR
2024-01-16 09:41:39,581 INFO sqlalchemy.engine.Engine [cached since 174.1s ago] ('1234567890',)

This is across two separate requests, each with its own AsyncSession instance.

Overriding the method with my own select() resolves the issue (a sketch follows below).

I don't have an MCVE yet, as this is all part of a larger project; I'll work on one.
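
A minimal sketch of that workaround, assuming a User model with an id_number column and that the repository's get_one accepts an explicit statement:

from sqlalchemy import select

# build a fresh select() per call so no stale bound parameter can be reused
stmt = select(User).where(User.id_number == id_number)
user = await user_repository.get_one(statement=stmt)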

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

0.6.2

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug: to_schema is missing hybrid properties

Description

Using to_schema on a BaseModel omits sqlalchemy hybrid properties

URL to code causing the issue

No response

MCVE

# Excerpt from a larger application; ModelBase, dto, and the service/guard
# helpers referenced below are defined elsewhere in that project.
from sqlalchemy import String
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Mapped, mapped_column


class User(ModelBase):
    """User Model."""

    __tablename__ = "user_account"  # type: ignore[assignment]
    __table_args__ = {"comment": "User accounts for application access"}

    email: Mapped[str] = mapped_column(unique=True, index=True)
    name: Mapped[str | None] = mapped_column(String(length=255), nullable=True)
    hashed_password: Mapped[str | None] = mapped_column(String(length=255), info=dto.dto_field("private"))
    is_active: Mapped[bool] = mapped_column(default=True)
    is_superuser: Mapped[bool] = mapped_column(default=False)

    @hybrid_property
    def permissions(self) -> list["Permission"]:
        permissions = set()
        for role in self.roles:
            permissions.update(role.permissions)
        return list(permissions)


class UserController(Controller):
    """Account Controller."""

    path = "/api/users"
    tags = ["User Accounts"]
    guards = [requires_superuser, permissions_required]
    dependencies = {
        "users_service": Provide(provide_users_service),
        "passkeys_service": Provide(provide_passkey_service),
        "roles_service": Provide(provide_roles_service),
    }
    signature_types = [UserService]

    @get(
        operation_id="ListUsers",
        name="users:list",
        summary="List Users",
        description="Retrieve the users.",
        path="/",
        opt={"permissions": ["users:list"]}
    )
    async def list_users(
        self,
        users_service: UserService,
        filters: Annotated[list[FilterTypes], Dependency(skip_validation=True)],
    ) -> OffsetPagination[User]:
        """List users."""
        results, total = await users_service.list_and_count(*filters)
        return users_service.to_schema(data=results, total=total, schema_type=User, filters=filters)

Steps to reproduce

litestar/dto/_codegen_backend.py in DTOCodegenBackend.encode_data at line 155
            wrapped_transfer = self._encode_data(getattr(data, self.wrapper_attribute_name))

Missing required argument 'permissions'

Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

No response

Jolt Project Version

0.9.0

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Bug(docs): Documentation changes cause Litestar docs to encounter buildtime errors

Description

When building docs, changes in AA sometimes cause Litestar build steps to warn or fail.

The solution would be to set up a CI workflow that checks out the develop branch of Litestar, builds its docs against the latest AA codebase, and fails if any build errors are encountered.

This way we can ensure downstream build success.

URL to code causing the issue

https://github.com/litestar-org/litestar/actions/runs/8206934469/job/22447123447?pr=3169

Bug: `pyright` warnings

Description

There are a number of pyright linting errors that should be addressed. Additionally, pyright should be enabled as part of the CI testing pipeline going forward.

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

No response

Package Version

v0.11.0

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)

Bug: TestClient used in pytest fails with `KeyError: 'session_maker_class'`

Description

I've written a failing test that exercises the litestar example in this repo: main...sherbang:advanced-alchemy:litestar_test

URL to code causing the issue

https://github.com/sherbang/advanced-alchemy/tree/litestar_test

MCVE

No response

Steps to reproduce

No response

Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

/home/sherbang/devel/advanced-alchemy/tests/examples/test_litestar.py::test_create failed: test_client = <litestar.testing.client.sync_client.TestClient object at 0x7582d98e9c50>

    async def test_create(test_client: TestClient[Litestar]) -> None:
        author = AuthorCreate(name="foo")
    
        response = test_client.post(
            "/authors",
            json=author.model_dump(mode="json"),
        )
>       assert response.status_code == 200, response.text
E       AssertionError: Traceback (most recent call last):
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/middleware/exceptions/middleware.py", line 218, in __call__
E             await self.app(scope, receive, send)
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 82, in handle
E             response = await self._get_response_for_request(
E                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 134, in _get_response_for_request
E             return await self._call_handler_function(
E                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 154, in _call_handler_function
E             response_data, cleanup_group = await self._get_response_data(
E                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 191, in _get_response_data
E             cleanup_group = await parameter_model.resolve_dependencies(request, kwargs)
E                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/_kwargs/kwargs_model.py", line 394, in resolve_dependencies
E             await resolve_dependency(next(iter(batch)), connection, kwargs, cleanup_group)
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/_kwargs/dependencies.py", line 65, in resolve_dependency
E             value = await dependency.provide(**dependency_kwargs)
E                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/di.py", line 101, in __call__
E             value = self.dependency(**kwargs)
E                     ^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/advanced_alchemy/extensions/litestar/plugins/init/config/asyncio.py", line 193, in provide_session
E             session_maker = cast("Callable[[], AsyncSession]", state[self.session_maker_app_state_key])
E                                                                ~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E           File "/home/sherbang/devel/advanced-alchemy/.venv/lib/python3.12/site-packages/litestar/datastructures/state.py", line 90, in __getitem__
E             return self._state[key]
E                    ~~~~~~~~~~~^^^^^
E         KeyError: 'session_maker_class'
E         
E       assert 500 == 200
E        +  where 500 = <Response [500 Internal Server Error]>.status_code

tests/examples/test_litestar.py:28: AssertionError

Jolt Project Version

0.8.1

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


Enhancement: Add BigQuery support to the repository

Summary

Add formal tests for BigQuery to the repository.

This enhancement is currently blocked by this issue

Basic Example

No response

Drawbacks and Impact

No response

Unresolved questions

No response


Funding

  • If you would like to see an issue prioritized, make a pledge towards it!
  • We receive the pledge once the issue is completed & verified
Fund with Polar

Bug: RepositoryServices and Repositories have different list_and_count/count filter signatures

Description

Hi,

First of all, thanks for the package. I've encountered a mypy error with filters, because the list_and_count and count methods have different signatures in the *RepositoryService classes (*filters: FilterTypes) and the *Repository classes (*filters: FilterTypes | ColumnElement[bool]).

Is there a specific reason for this drift in signatures, given that the service methods only forward the filters? If not, I can prepare a PR with a fix. Thanks.

URL to code causing the issue

No response

MCVE

filters: list[FilterTypes | ColumnElement[bool]] = []

repository = UserRepository(session=db_session)
repository.list_and_count(*filters)  # OK
repository.count(*filters)  # OK

service = UserService(session=db_session)
service.list_and_count(*filters)  # error: expects *filters: FilterTypes
service.count(*filters)  # error: expects *filters: FilterTypes

Steps to reproduce

No response

Screenshots

No response

Logs

No response

Jolt Project Version

v0.5.5

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)

Funding

  • If you would like to see an issue prioritized, make a pledge towards it!
  • We receive the pledge once the issue is completed & verified
Fund with Polar

Bug: unsupported operand for lambda

Description

Using the latest version, I encounter the following error:

advanced_alchemy/repository/_async.py", line 1225, in _filter_by_where
    statement += lambda s: s.where(field == value)
TypeError: unsupported operand type(s) for +=: 'Select' and 'function'

https://github.com/jolt-org/advanced-alchemy/blob/f40e497feb098ace05bfbc87a332b7dd4597f97d/advanced_alchemy/repository/_async.py#L1216

    def _filter_by_where(
        self,
        statement: StatementLambdaElement,
        field_name: str | InstrumentedAttribute,
        value: Any,
    ) -> StatementLambdaElement:
        field = get_instrumented_attr(self.model_type, field_name)
        statement += lambda s: s.where(field == value)
        return statement
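
For context, a minimal sketch showing that the += composition is only supported on a StatementLambdaElement built via lambda_stmt, not on a plain Select (User is any mapped model):

from sqlalchemy import lambda_stmt, select

stmt = lambda_stmt(lambda: select(User))
stmt += lambda s: s.where(User.id == 1)  # OK: StatementLambdaElement supports +=

plain = select(User)
# plain += lambda s: s.where(User.id == 1)
# TypeError: unsupported operand type(s) for +=: 'Select' and 'function'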

URL to code causing the issue

No response

MCVE

No response

Steps to reproduce

No response

Screenshots

"In the format of: ![SCREENSHOT_DESCRIPTION](SCREENSHOT_LINK.png)"

Logs

No response

Jolt Project Version

0.3.4

Platform

  • Linux
  • Mac
  • Windows
  • Other (Please specify in the description above)


refactor: remove reliance on deprecated litestar utils

We recently refactored the way that litestar manages state within the ASGI connection scope.

Part of that was the deprecation of the {get,set,delete}_litestar_scope_state() utility functions.

The reason for this deprecation is that the namespace we use in scope state, and the things we store within it, are meant to be an implementation detail.

If plugins need to store state for their own operations, it is better to do so inside their own namespace, to reduce coupling and future breakage (see the sketch below).
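
A minimal sketch of the suggested pattern, using a hypothetical plugin-owned namespace key (not an existing advanced-alchemy API):

from typing import Any

from litestar.types import Scope

_NAMESPACE = "advanced_alchemy"  # hypothetical key owned by the plugin


def set_plugin_state(scope: Scope, key: str, value: Any) -> None:
    scope["state"].setdefault(_NAMESPACE, {})[key] = value


def get_plugin_state(scope: Scope, key: str, default: Any = None) -> Any:
    return scope["state"].get(_NAMESPACE, {}).get(key, default)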



Enhancement: make `.get_one` respect None filters

Summary

I use soft delete for my models, and I try to get an instance with the repo like this:

obj = await repo.get_one(id='id', deleted_at=None)

because I don't want the instance to be selected if it has been deleted.

The problem is that with these kwargs I don't get any object at all, because the filter isn't converted to deleted_at IS NULL in the DB query, but to deleted_at = 'None'.

Is this intended? For now the workaround is to pass a custom statement to get_one (a sketch follows below), but it's not very convenient.
Thank you.

Basic Example

No response
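
Though no example was given, here is a minimal sketch of the current workaround, assuming a soft-deletable MyModel with a nullable deleted_at column:

from sqlalchemy import select

stmt = select(MyModel).where(
    MyModel.id == some_id,
    MyModel.deleted_at.is_(None),  # rendered as "deleted_at IS NULL"
)
obj = await repo.get_one(statement=stmt)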

Drawbacks and Impact

No response

Unresolved questions

No response
