code's Issues

chapter 4, service layer exercise, "decrement" should be "increment" in test names

In chapter 4, we are challenged to implement a deallocation service, given a link with some stubs for tests and some complete e2e tests.
There are some inaccuracies I have found in the exercise:

  1. services.add_batch("b1", "BLUE-PLINTH", 100, None, repo, session)

    The test uses a nonexistent service, add_batch. Shall we also implement it, with tests? I would say yes. Maybe it's worth including in the book itself?

  2. services.allocate("o1", "BLUE-PLINTH", 10, repo, session)

    This is an invalid call to the services.allocate() function.

Unclear cases:

def test_deallocate_decrements_available_quantity():

def test_deallocate_decrements_correct_quantity():

What is the expected outcome of these tests? My interpretation is that in the first test we can deallocate any allocated line with a matching orderid and sku, and in the second we deallocate the line with matching orderid, sku and qty. Is that correct? My interpretation as runnable code is sketched below.
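
For what it's worth, here is the first test as I would write it (a sketch only: FakeRepository and FakeSession as in the chapter 4 unit tests, and a deallocate service whose signature I am guessing mirrors allocate):

def test_deallocate_decrements_available_quantity():
    repo, session = FakeRepository([]), FakeSession()
    services.add_batch("b1", "BLUE-PLINTH", 100, None, repo, session)
    services.allocate("o1", "BLUE-PLINTH", 10, repo, session)  # adapt to your allocate() signature
    batch = repo.get(reference="b1")
    assert batch.available_quantity == 90
    services.deallocate("o1", "BLUE-PLINTH", 10, repo, session)
    assert batch.available_quantity == 100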

django appendix - allocation_set question

Hi,

I'm reading the Django appendix code and I can't quite get where self.allocation_set and b.allocation_set come from in src/djangoproject/alloc/models.py, in the Batch class:

class Batch(models.Model):
    reference = models.CharField(max_length=255)
    sku = models.CharField(max_length=255)
    qty = models.IntegerField()
    eta = models.DateField(blank=True, null=True)

    @staticmethod
    def update_from_domain(batch: domain_model.Batch):
        try:
            b = Batch.objects.get(reference=batch.reference)
        except Batch.DoesNotExist:
            b = Batch(reference=batch.reference)
        b.sku = batch.sku
        b.qty = batch._purchased_quantity
        b.eta = batch.eta
        b.save()
        b.allocation_set.set(
            Allocation.from_domain(l, b)
            for l in batch._allocations
        )

    def to_domain(self) -> domain_model.Batch:
        b = domain_model.Batch(
            ref=self.reference, sku=self.sku, qty=self.qty, eta=self.eta
        )
        b._allocations = set(
            a.line.to_domain()
            for a in self.allocation_set.all()
        )
        return b

Tried to grep the source code but got nowhere:

➜  code git:(appendix_django) ✗ grep -r "allocation_set" .         
./src/djangoproject/alloc/models.py:        b.allocation_set.set(
./src/djangoproject/alloc/models.py:            for a in self.allocation_set.all()

Can you help me understand how this works?
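
For context, this appears to be standard Django behaviour rather than anything defined in the repo, which would explain why grep finds no definition: a ForeignKey pointing at Batch makes Django auto-generate a reverse manager on Batch instances named <lowercased model name>_set. The Allocation model presumably looks something like:

class Allocation(models.Model):
    # this ForeignKey is what makes Django add the auto-generated
    # `allocation_set` reverse manager to every Batch instance
    batch = models.ForeignKey(Batch, on_delete=models.CASCADE)
    line = models.ForeignKey(OrderLine, on_delete=models.CASCADE)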

Using Async Await?

Is it possible to use async/await, especially when accessing external resources like the DB?

Psycopg2 error: Symbol not found _PQbackendPID

I noticed this error happening on M1-chip MacBooks due to psycopg2-binary.

After I switched to Python 3.9+, the error went away. The error also doesn't happen when you run the tests via Docker, since I believe the image is already using 3.9; however, when I ran the tests locally with pytest, I saw this error.

cloning of branches

The ways of cloning branches described in the README no longer work.

New way to clone branches:
git clone --branch <branchname> <remote-repo-url>

move start_mappers() out of flask app

Should the orm.start_mappers() call be moved to the repository or unit of work? It seems out of place in the Flask API.

I'm leaning towards the repository, because it is the connection between the database and the model. What are your thoughts?
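
A third option, and roughly what the bootstrap script in chapter 13 ends up doing, is to call it from a composition root instead; a sketch (the start_orm flag as in the book's bootstrap):

# bootstrap.py -- sketch of a composition root; the start_orm flag
# lets unit tests skip mapper registration entirely
from allocation.adapters import orm

def bootstrap(start_orm: bool = True):
    if start_orm:
        orm.start_mappers()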

Question: Why is a specific model used with an abstract repository, rather than some abstract or base model?

Hello,
Can you please tell me why the specific model model.Product is used here with the abstract repository AbstractRepository, rather than some abstract or base model?

code/src/allocation/adapters/repository.py:

def get(self, sku) -> model.Product:

def _add(self, product: model.Product):

def _get(self, sku) -> model.Product:

def _get_by_batchref(self, batchref) -> model.Product:
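
For comparison, a generic repository is certainly expressible; a sketch with typing.Generic (my own variation, not code from the book):

import abc
from typing import Generic, TypeVar

T = TypeVar("T")

class AbstractRepository(abc.ABC, Generic[T]):
    @abc.abstractmethod
    def add(self, item: T):
        raise NotImplementedError

    @abc.abstractmethod
    def get(self, key: str) -> T:
        raise NotImplementedError

# a concrete repository then pins the type parameter:
# class ProductRepository(AbstractRepository[model.Product]): ...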

tests failing due to possible conflict with SQLAlchemy

Hi, first of all thanks for the book. I had been missing this kind of book for a long time.
The thing is that running pytest I get some errors in the test_orm.py file:

==================================== test session starts ====================================
platform linux -- Python 3.7.3, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/javier/Projects/p_patterns/code-chapter_02_repository
collected 20 items                                                                          

test_allocate.py ....                                                                 [ 20%]
test_batches.py ........                                                              [ 60%]
test_orm.py FF..FF                                                                    [ 90%]
test_repository.py .F                                                                 [100%]

...

___________________________ test_orderline_mapper_can_load_lines ____________________________

session = <sqlalchemy.orm.session.Session object at 0x7f484b9d9390>

    def test_orderline_mapper_can_load_lines(session):
        session.execute(
            'INSERT INTO order_lines (orderid, sku, qty) VALUES '
            '("order1", "RED-CHAIR", 12),'
            '("order1", "RED-TABLE", 13),'
            '("order2", "BLUE-LIPSTICK", 14)'
        )
        expected = [
>           model.OrderLine("order1", "RED-CHAIR", 12),
            model.OrderLine("order1", "RED-TABLE", 13),
            model.OrderLine("order2", "BLUE-LIPSTICK", 14),
        ]

test_orm.py:12: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<string>:2: in __init__
    ???
../../../.local/share/virtualenvs/p_patterns-tDNksF9p/lib/python3.7/site-packages/sqlalchemy/orm/instrumentation.py:377: in _new_state_if_none
    self._state_setter(instance, state)
<string>:1: in set
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AttributeError("'OrderLine' object has no attribute '_sa_instance_state'") raised in repr()] OrderLine object at 0x7f484ba4c898>
name = '_sa_instance_state'
value = <sqlalchemy.orm.state.InstanceState object at 0x7f484b9d9dd8>

>   ???
E   dataclasses.FrozenInstanceError: cannot assign to field '_sa_instance_state'

I could not find much information about the error but it seems like passing an integer to an id that expects an object... like https://stackoverflow.com/questions/16151729/attributeerror-int-object-has-no-attribute-sa-instance-state?rq=1

But to tell the truth, I have no idea whether that is the case. Do you suffer from the same issue?
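
One hunch, hedged: the FrozenInstanceError suggests OrderLine is declared with @dataclass(frozen=True). SQLAlchemy's classical mapper needs to set a _sa_instance_state attribute on instances, which frozen dataclasses forbid. If I remember the chapter 2 model correctly, it avoids this with unsafe_hash:

from dataclasses import dataclass

# unsafe_hash gives value-object hashing like frozen=True would,
# but leaves attributes writable, so SQLAlchemy can attach its
# _sa_instance_state instrumentation
@dataclass(unsafe_hash=True)
class OrderLine:
    orderid: str
    sku: str
    qty: int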

Thanks in advance

chapter2: How to use alembic for migrations for sqlalchemy classical mapper?

Thanks for this very nice book, the first for Python developers to practice DDD. I have some detailed questions from real project practice; one of them is:

We use SQLAlchemy models and flask-migrate (based on alembic) in most of our projects. The DB workflow
looks like: 'flask db init' --> 'flask db migrate' --> 'flask db upgrade' --> ...
All the 'models' (SQLAlchemy models, not domain models) can be detected by alembic, as can changes to them.

So if we change to the classical mapper, I'm not sure whether there is an easy way to adapt this workflow; or maybe you already have some working code or build command example that could be shown? Thanks.
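
Not the authors, but one hedged observation: classical mappings are still built on a MetaData object, and alembic autogenerate only needs that. Assuming the book's orm.py exposes its metadata, env.py would just point at it:

# migrations/env.py -- sketch; autogenerate diffs the Tables registered
# on this MetaData, so classical vs. declarative mapping doesn't matter
from allocation.adapters.orm import metadata

target_metadata = metadata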

what's the purpose of line_allocated_event for redis?

Hello, thanks for all the resources you provided in the book; they are helping me tremendously to understand what a clean architecture could look like in the Python world. Just a quick question about this line in the end-to-end test:

subscription = redis_client.subscribe_to("line_allocated")

What is the line_allocated channel needed for? I can see that the message gets published to the "change_batch_quantity" channel, and we are subscribed to a different channel, but somehow we are still able to read the messages that result from change_batch_quantity. I might be missing something easy here, as I don't have a lot of experience with Redis pub/sub; I thought the channel names had to be the same for two sides to exchange messages. Regardless, the tests pass, so it seems like I'm missing something. Thank you

@pytest.mark.usefixtures("postgres_db")
@pytest.mark.usefixtures("restart_api")
@pytest.mark.usefixtures("restart_redis_pubsub")
def test_change_batch_quantity_leading_to_reallocation():
    # start with two batches and an order allocated to one of them
    orderid, sku = random_orderid(), random_sku()
    earlier_batch, later_batch = random_batchref("old"), random_batchref("newer")
    api_client.post_to_add_batch(earlier_batch, sku, qty=10, eta="2011-01-01")
    api_client.post_to_add_batch(later_batch, sku, qty=10, eta="2011-01-02")
    r = api_client.post_to_allocate(orderid, sku, 10)
    assert r.ok
    response = api_client.get_allocation(orderid)
    assert response.json()[0]["batchref"] == earlier_batch

    subscription = redis_client.subscribe_to("line_allocated")

    # change quantity on allocated batch so it's less than our order
    redis_client.publish_message(
        "change_batch_quantity",
        {"batchref": earlier_batch, "qty": 5},
    )

    # wait until we see a message saying the order has been reallocated
    messages = []
    for attempt in Retrying(stop=stop_after_delay(3), reraise=True):
        with attempt:
            message = subscription.get_message(timeout=1)
            if message:
                messages.append(message)
                print(messages)
            data = json.loads(messages[-1]["data"])
            assert data["orderid"] == orderid
            assert data["batchref"] == later_batch

[Question] How would you implement Flask event Listeners?

Hey,

I am looking at rebuilding some Flask services along the lines of what you are showcasing here.

The issue is that a lot of what we built relies on Flask event_listeners, which essentially react to any changes to the data.

Do you have any proposal for how to keep reusing those in the SQLAlchemy production use case, but have them as mocks for tests?
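
Not the authors, but assuming these are SQLAlchemy event listeners, one hedged sketch: register them in start_mappers() alongside the classical mappings (mapper, model and order_lines as in the book's orm.py), so unit tests that use fake repositories, and never call start_mappers, simply bypass them:

from sqlalchemy import event

def react_to_update(mapper_, connection, target):
    # hypothetical listener: respond to any change to `target`'s row
    ...

def start_mappers():
    mapper(model.OrderLine, order_lines)
    # only the real ORM-backed path registers (and therefore fires) this
    event.listen(model.OrderLine, "after_update", react_to_update)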

A question on SqlAlchemyRepository in chapter 6

I see the code below in chapter 6. I'm wondering why we do .first(). Shouldn't we sort by version_number and get the latest Product?

class SqlAlchemyRepository(AbstractRepository):

    def __init__(self, session):
        self.session = session

    def add(self, product):
        self.session.add(product)

    def get(self, sku):
        return self.session.query(model.Product).filter_by(sku=sku).first()

ConnectionError: Need to add both User-Agent header and timeout to requests

Non-deterministically, tests will fail on the master branch (and all others) due to requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))

This even happens on Docker and is not due to waiting for containers to spin up.

This can be prevented by adding a timeout and a User-Agent header to the request calls in api_client (in master) and wherever requests is used in previous chapters.
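
A sketch of the suggested hardening in e2e/api_client.py (URL helper as in the book; the User-Agent string is made up):

import requests
from allocation import config

def post_to_add_batch(ref, sku, qty, eta):
    url = config.get_api_url()
    r = requests.post(
        f"{url}/add_batch",
        json={"ref": ref, "sku": sku, "qty": qty, "eta": eta},
        headers={"User-Agent": "cosmicpython-tests"},
        timeout=5,  # fail fast instead of hanging on a dropped connection
    )
    assert r.status_code == 201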

Appendix DjangoProject - ModuleNotFoundError: No module named 'alloc'

Description

make all fails during pytest step.

Demonstration

$ git checkout appendix_django
$ make all
docker-compose down --remove-orphans
Removing cosmic-python_postgres_1 ... done
Removing network cosmic-python_default
docker-compose build
postgres uses an image, skipping
WARNING: Native build is an experimental feature and could change at any time
Building app
[+] Building 0.5s (13/13) FINISHED                                                                                                                                                                                
 => [internal] load build definition from Dockerfile                                                                                                                                                         0.0s
 => => transferring dockerfile: 38B                                                                                                                                                                          0.0s
 => [internal] load .dockerignore                                                                                                                                                                            0.1s 
 => => transferring context: 2B                                                                                                                                                                              0.0s 
 => [internal] load metadata for docker.io/library/python:3.9-slim-buster                                                                                                                                    0.3s 
 => [internal] load build context                                                                                                                                                                            0.0s
 => => transferring context: 2.18kB                                                                                                                                                                          0.0s 
 => [1/8] FROM docker.io/library/python:3.9-slim-buster@sha256:182f0eff727af9fccf88294dbfffb23ad408369c412e1267ddd5aa63ef8b5bf8                                                                              0.0s 
 => CACHED [2/8] COPY requirements.txt /tmp/                                                                                                                                                                 0.0s 
 => CACHED [3/8] RUN pip install -r /tmp/requirements.txt                                                                                                                                                    0.0s 
 => CACHED [4/8] RUN mkdir -p /src                                                                                                                                                                           0.0s 
 => CACHED [5/8] COPY src/ /src/                                                                                                                                                                             0.0s 
 => CACHED [6/8] RUN pip install -e /src                                                                                                                                                                     0.0s 
 => CACHED [7/8] COPY tests/ /tests/                                                                                                                                                                         0.0s 
 => CACHED [8/8] WORKDIR /src                                                                                                                                                                                0.0s 
 => exporting to image                                                                                                                                                                                       0.1s 
 => => exporting layers                                                                                                                                                                                      0.0s
 => => writing image sha256:9566a0aa0ba5691e04eaa23ea011c4047413369f81d62e4eca26c9f7528927be                                                                                                                 0.0s 
 => => naming to docker.io/library/cosmic-python_app                                                                                                                                                         0.0s
Successfully built 9566a0aa0ba5691e04eaa23ea011c4047413369f81d62e4eca26c9f7528927be
docker-compose up -d app
WARNING: Native build is an experimental feature and could change at any time
Creating network "cosmic-python_default" with the default driver
Creating cosmic-python_postgres_1 ... done
Creating cosmic-python_app_1      ... done
docker-compose run --rm --no-deps --entrypoint=pytest app /tests/unit /tests/integration /tests/e2e
WARNING: Native build is an experimental feature and could change at any time
Creating cosmic-python_app_run ... done
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/django/apps/config.py", line 244, in create
    app_module = import_module(app_name)
  File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'alloc'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/pytest", line 8, in <module>
    sys.exit(console_main())
  (... snip ...)

Listing or returning items with UoW raises sqlalchemy.orm.exc.DetachedInstanceError

Hi. I have a simple list endpoint as follows:

@app.get("/items/", response_model=list[schemas.Item])
async def read_items(uow: AbstractUnitOfWork = Depends(get_uow)) -> List[models.Item]:
    with uow:
        items = uow.repo.list()
    return items

items is a list of models.Item, which is my domain model (a dataclass). But when exiting the uow (closing the session), the items' attributes are refreshed by the ORM automatically, and I get the sqlalchemy.orm.exc.DetachedInstanceError exception.

I would expect such a dataclass not to mutate since it is not an ORM model.
What would be the best approach in this case?

The same applies for the creation of an item.
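
One workaround I am considering (a sketch, not necessarily the right answer): build plain data structures while the session is still open, so nothing session-bound leaves the uow block:

from dataclasses import asdict

@app.get("/items/", response_model=list[schemas.Item])
async def read_items(uow: AbstractUnitOfWork = Depends(get_uow)):
    with uow:
        # plain dicts built inside the block survive the session closing
        return [asdict(item) for item in uow.repo.list()]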

Thanks!

Why does 'pytest tests/unit' work with docker but not locally?

Hi!

I'm trying to work out why I can't run pytest tests/unit locally (i.e. without Docker). I've set up a virtualenv and ran pip install -r requirements.txt.

Using your docker-compose/makefile it works to run the unit tests, but running them locally results in:
ImportError while loading conftest '/Users/USERNAME/src/python/cosmicpython/code/tests/conftest.py'. ../tests/conftest.py:14: in <module> from allocation.adapters.orm import metadata, start_mappers E ModuleNotFoundError: No module named 'allocation'

The same goes for PyCharm, which does not recognize the allocation namespace.

What's actually going on here; is there some kind of magic that I'm missing?
I'd like to be able to use this setup of yours, as it mimics a lot of setups I'm used to from the C# world, but I'd also like to use PyCharm and PyCrunch (https://github.com/gleb-sevruk/pycrunch-engine) for continuous testing, which right now seems impossible.
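
(For what it's worth, the project's Dockerfile runs pip install -e /src, which is what puts the allocation package on the path inside the container; I suspect running pip install -e src/ in the virtualenv, in addition to installing requirements.txt, would fix both pytest and PyCharm.)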

A better optimistic concurrency solution

In chapter 7, we implement a version number to provide optimistic concurrency controls around each aggregate. However, the version number doesn't actually do anything to prevent concurrency conflicts in itself. All of the behaviour comes from the "REPEATABLE READ" transaction isolation level. This is a feature of PostgreSQL that allows the DB to detect a conflict between two concurrent transactions, hence the serialization error raised by psycopg2. The version number is only checked in the test, and this is somewhat superfluous (we could just as easily check that the changes weren't made by the second transaction).

SqlAlchemy has some built-in functionality for handling optimistic concurrency using version numbers. This works by using the version number within any UPDATE or DELETE statements to ensure the version in the DB is the same as the version you initially read and made changes to. This supports "offline" concurrency conflicts - where you take a copy of the object out of the session, make changes to it (i.e. maybe within a UI app or other service), and then merge it into the DB. It can handle this because your copy still has its original version number that is compared against the DB. It also handles other situations where things might get out of sync, e.g. from the docs:

The purpose of this feature is to... provide a guard against the usage of a “stale” row in a system that might be re-using data from a previous transaction without refreshing (e.g. if one sets expire_on_commit=False with a Session, it is possible to re-use the data from a previous transaction).

I've put quite a few hours into it already. The problem I've found is that the version number managed by SqlAlchemy is only bumped on the table where the INSERT/UPDATE/DELETE is happening. This means the version number that lives in the table that represents my root aggregate doesn't get bumped when only some child entity is updated. Effectively, every table would need its own version number. One solution might be to have a property on the aggregate that gets touched on every update, e.g. a last_updated_at timestamp, which would force an update on the root entity, thus making the whole aggregate one consistency boundary. I don't really like this idea, though, as it could easily allow bugs where the last_updated_at property is not updated.

Just wondered if anyone has some insights into this problem. Are the version numbers worth adding (provided you don't need offline semantics or hit the other edge cases), or should you just set REPEATABLE READ and call it a day?
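
For reference, the SQLAlchemy feature mentioned above is enabled via the mapper's version_id_col argument; applied to the book's chapter 7 classical mapping of Product, it would presumably look roughly like:

# orm.py -- sketch: SQLAlchemy bumps version_number on UPDATE and raises
# StaleDataError if the row's version no longer matches the one we read
mapper(
    model.Product,
    products,
    version_id_col=products.c.version_number,
    properties={"batches": relationship(batches_mapper)},
)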

[Question] How do you return values (like created ids) in Commands/from the message bus?

Hey,

first and foremost: thanks for this awesome book, it is the first time I had to plan an architecture for a bigger project and your book has helped me enormously to create maintainable code.

Question

In Chapter 9 you temporarily show how to return values from the handlers to the entrypoint. You mention that this is still ugly, because it mixes the responsibilities of read and write, and you refactor everything according to CQRS in Chapter 12.

However, during this you lose the ability ("feature") of returning the batchref in the entrypoint (see here in the code).

I agree that reading and writing are separate responsibilities, but sometimes there is a use case for returning values even for a command and not a query. Examples:

  • Errors and reasons (you can implement this via the exception, you already do this with the invalid_sku)
  • Some object has been created and you need to provide your client application/user with the id of this object from your microservice (e.g. a user can register, and you provide a query for user_info_by_id; the user needs to somehow get the user_id from the creation command)

Possible Solutions

I've thought about two ways to solve this, but why I would choose one approach over the other is more of a gut feeling. Maybe you can help me by providing feedback on why these are good/bad, or whether there is an even better/more dogmatic way.

Just returning values from the handler (like in the ugly workaround section linked above)

This enables you to just use it like already shown in your book. What I don't like here is that there is no real "interface"/"annotation" for the return value of messagebus.handle; it completely depends on the handler. Furthermore, how should we deal with a chain of commands and events (as one handler/model can create new events)? Do we discard them, or do we return a list/dict of all return values?
Also, it just "feels wrong" to return Any.

Leverage the events to return values

As the return values will mostly be things like "ObjectCreated(id=1)" or similar, we could just use the available events: in the messagebus, keep a separate copy of all events that are added to self.queue, and have the handle method return this list of raised events. This solves both problems above: we have an "interface" (List[events.Event]), and we can return things from ALL handlers in a handler chain.

The entrypoint just extracts the events/data it is interested in then.
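
A sketch of that second option against the chapter 13 MessageBus (handle_event/handle_command as in the book, pushing follow-up messages onto self.queue; Message = Union[commands.Command, events.Event]):

def handle(self, message: Message) -> list[events.Event]:
    raised: list[events.Event] = []
    self.queue = [message]
    while self.queue:
        message = self.queue.pop(0)
        if isinstance(message, events.Event):
            raised.append(message)  # keep a copy for the entrypoint
            self.handle_event(message)
        elif isinstance(message, commands.Command):
            self.handle_command(message)
    return raised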

What do you think about this idea? Or am I wrong and we can also solve this with CQRS? Or some other way completely?

mapper is deprecated on SQLAlchemy 1.4+

registry.map_imperatively() should be used instead.

Reference: https://docs.sqlalchemy.org/en/14/orm/mapping_api.html?highlight=mapper#sqlalchemy.orm.mapper

This involves some refactoring and updates on almost all chapters. Not sure what the path is (update the book first, or the branches). I am available to help.

What needs to be done:

  1. Instead of importing MetaData in orm.py, you import registry (which carries its own MetaData):
from sqlalchemy.orm import registry

mapper_registry = registry()
  2. On each Table, you pass mapper_registry.metadata as the second argument instead of metadata. E.g.:
order_lines = Table(
    "order_lines",
    mapper_registry.metadata,
    Column("id", Integer, primary_key=True, autoincrement=True),
    Column("sku", String(255)),
    Column("qty", Integer, nullable=False),
    Column("orderid", String(255)),
)
  3. You map the classes to tables with mapper_registry.map_imperatively instead of mapper. E.g.:
def start_mappers():
    mapper_registry.map_imperatively(model.OrderLine, order_lines)
    ...

Chapter 13. Dependency Injection: does the entrypoint use the same uow concurrently?

The bootstrap script binds the same uow to all handlers, so every request uses the same uow object. Should we create a uow on every HTTP request, or on every event/command?
The UoW is a consistency boundary and commits changes after each handler; won't concurrent commits break it?


def bootstrap(
    start_orm: bool = True,
    uow: unit_of_work.AbstractUnitOfWork = unit_of_work.SqlAlchemyUnitOfWork(),  # same object
    notifications: AbstractNotifications = None,
    publish: Callable = redis_eventpublisher.publish,
) -> messagebus.MessageBus:

class MessageBus:
    def __init__(
        self,
        uow: unit_of_work.AbstractUnitOfWork,
        event_handlers: Dict[Type[events.Event], List[Callable]],
        command_handlers: Dict[Type[commands.Command], Callable],
    ):
        self.uow = uow  # same object
        self.event_handlers = event_handlers
        self.command_handlers = command_handlers
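
Not the authors, but one observation plus a sketch: SqlAlchemyUnitOfWork only opens its session inside __enter__, so the shared instance is mostly a problem when handlers overlap concurrently. If that matters, one option is to inject a factory instead of an instance (ignoring the book's partial-based handler injection for brevity):

from typing import Callable, Dict, List, Type

class MessageBus:
    def __init__(
        self,
        uow_factory: Callable,  # e.g. the SqlAlchemyUnitOfWork class itself
        event_handlers: Dict[Type[events.Event], List[Callable]],
        command_handlers: Dict[Type[commands.Command], Callable],
    ):
        self.uow_factory = uow_factory
        self.event_handlers = event_handlers
        self.command_handlers = command_handlers

    def handle_command(self, command):
        handler = self.command_handlers[type(command)]
        handler(command, uow=self.uow_factory())  # fresh UoW (and session) per message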

allocate_endpoint returns "OK", 202 even though there is not enough quantity in any batch

Hi,

I noticed that your allocate_endpoint returns status OK even though there is not enough quantity in any batch.
The OutOfStock event is raised, that's OK, but shouldn't the client also receive the information that allocation has failed due to insufficient stock of the sku? Or have I missed something?

Anyway, your book is really great, it was a pleasure reading it!

Is there a reason why we don't use sql alchemy orm to create database models?

Hello, amazing book so far. I really enjoy what I am reading. Just one thing I am trying to wrap my head around:

Why do we have a separate domain layer, and why do we use mappers to map database tables onto those domain models? I usually have the SQLAlchemy model and the domain connected together by inheriting a Model class, something like this (for us, we would have the same classes for product or order line):

class StudentClasses(Base):
    __tablename__ = "student_classes"

    id = Column(Integer, primary_key=True)
    student_id = Column(Integer, ForeignKey('student.id'))
    class_id = Column(Integer, ForeignKey('classes.id'))

I understand that having a domain layer almost acts as documentation for new developers who join the project; it easily shows what's going on in our core domain logic. But it feels like we are also creating more complexity. I want to understand whether there is more advantage to using mapper and Table from SQLAlchemy instead of the declarative ORM functionality. Thanks!

Possible typo with Chapter 5 code sample

In Chapter 5, a test for the uow is displayed. However, it uses list unpacking, and it's not obvious whether the difference between orderlineid and id in

[[orderlineid]] = session.execute(
    'SELECT id FROM order_lines WHERE orderid=:orderid AND sku=:sku',
    dict(orderid=orderid, sku=sku)
)

is deliberate or a typo. If it is deliberate, it would be helpful to have an explanation of what SQLAlchemy's orm.session package is doing to return the thing referred to by the name orderlineid.
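
For what it's worth, the double brackets are plain Python iterable unpacking rather than anything SQLAlchemy-specific: session.execute returns an iterable of rows, and each row is itself a tuple of columns, so the nesting peels off both layers:

rows = [("42",)]        # one row with one column -- what the SELECT returns
[[orderlineid]] = rows  # outer brackets: exactly one row;
                        # inner brackets: exactly one column
assert orderlineid == "42"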

Chapter 4: we do `session.commit()` but do we have any changes to commit?

session.commit()

If I get it right, after the allocation takes place in the service and domain layers, the changes are not saved to the database, because there isn't any call to the repository before we do session.commit().

So our allocation seems not to have been saved, and thus is not persistent. We could make the same allocation again and again and never run out of stock.

Question: How does this work without raising IndexError: pop from empty list?

Hi Harry & Bob,

I was not able to make this piece of code work without adding a check for an empty list:

def collect_new_events(self):
    for product in self.products.seen:
        while product.events:
            yield product.events.pop(0)

With this code as-is, I get the following exception: IndexError: pop from empty list

Does this work without checking whether product.events is empty?

Reference:

yield product.events.pop(0)

question creating tables

I can't figure out this part: orm.start_mappers() looks like it should set up the tables, but I can't find a metadata.create_all() anywhere, so I am not sure whether I should put it in here, or whether you are assuming the tables are already created before the application starts.

def start_mappers():
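
If it helps: start_mappers() only binds classes to tables; nothing in it touches the database. The tests (if I remember the conftest right) create the schema separately with something like:

# conftest.py -- sketch of the in-memory DB fixture; import path may differ by chapter
from sqlalchemy import create_engine
from allocation.adapters.orm import metadata

engine = create_engine("sqlite:///:memory:")
metadata.create_all(engine)  # this, not start_mappers(), creates the tables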

Thank you for a nice book, I think it has opened my mind a bit more.

Chapter 6 Exercise for the Reader: Separate UoW and Context Manager

experiment with separating the UoW (whose responsibilities are
commit() , rollback() , and providing the .batches repository) from the
context manager, whose job is to initialize things, and then do the commit
or rollback on exit.

Could someone give a solution for this exercise?

Following is my code example. Would there be a point in splitting FakeUnitOfWork?

src/allocation/service_layer/unit_of_work.py

class AbstractUnitOfWork(abc.ABC):
    batches: repository.AbstractRepository
    
    @abc.abstractmethod
    def commit(self):
        raise NotImplementedError

    @abc.abstractmethod
    def rollback(self):
        raise NotImplementedError


DEFAULT_SESSION_FACTORY = sessionmaker(
    bind=create_engine(
        config.get_postgres_uri(),
    )
)


class SQLAlchemyUoWContextManager:
    def __init__(self, session_factory=DEFAULT_SESSION_FACTORY):
        self.session_factory = session_factory

    def __enter__(self):
        self.session: Session = self.session_factory()
        self.uow = SqlAlchemyUnitOfWork(session=self.session)
        return self.uow

    def __exit__(self, *args):
        self.uow.rollback()
        self.session.close()


class SqlAlchemyUnitOfWork(AbstractUnitOfWork):
    def __init__(self, session: Session):
        self.session = session
        self.batches = repository.SqlAlchemyRepository(self.session)

    def commit(self):
        self.session.commit()

    def rollback(self):
        self.session.rollback()

question: how would you approach eliminating the need for `redis` by leveraging the async ecosystem (event loop)?

hey Harry & Bob,

thanks so much for writing this amazing book (and open-sourcing it!),
I got myself a copy off ebooks.com this weekend 💪🏾


I've been looking at the async ecosystem;

apart from the performance gains, another very attractive feature is the ability to queue tasks to be executed later on the event loop (basically eliminating one particular use case for celery/redis).

I'd like to know how you would go about modifying the current implementation to take advantage of async features.
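
Not the authors, but a hedged sketch of one direction: replace the Redis consumer with an asyncio.Queue drained by a background task on the event loop (bus.handle standing in for the book's message bus):

import asyncio

queue: asyncio.Queue = asyncio.Queue()

async def consume(bus):
    # background task standing in for the redis_eventconsumer process
    while True:
        message = await queue.get()
        bus.handle(message)
        queue.task_done()

async def main(bus):
    consumer = asyncio.create_task(consume(bus))
    await queue.put("fake-message")  # entrypoints enqueue instead of publishing to redis
    await queue.join()
    consumer.cancel()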

Ch4: remove jsonify from examples

Hi There, this book is absolutely marvellous and I haven't read half of it yet. Thanks a ton!!

In Chapter 4 and maybe elsewhere, jsonify is used in the Flask views to return the response.
Since Flask 1.1 (see the changelog), it is not mandatory.

Allow returning a dictionary from a view function. Similar to how returning a string will produce a text/html response, returning a dict will call jsonify to produce a application/json response. #3111

Since this book is not about Flask itself, it could give an even simpler code example. If you think jsonify makes the reader more aware of the nature of the response, I totally understand, and this issue can be closed.
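
For illustration, the view body could then be as small as this (a sketch; adapt to however the allocate service is shaped in your chapter):

@app.route("/allocate", methods=["POST"])
def allocate_endpoint():
    batchref = services.allocate(
        request.json["orderid"], request.json["sku"], request.json["qty"],
        repo, session,
    )
    return {"batchref": batchref}, 201  # Flask >= 1.1 jsonifies the dict itself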

Thanks again ! All the best

race with multi-threaded wsgi server

When substituting a threaded WSGI server for the Flask server, the messagebus.queue and uow.session can have thread races, with POC here.

From inspecting the logs, it looks like a couple of things can happen:

  • the messagebus.queue gets swapped out, and some events are dropped before they can be processed.
  • the uow.session gets swapped out, and when the uow tries to commit or roll back, SQLAlchemy throws stack traces with messages that look like
sqlalchemy.orm.exc.DetachedInstanceError: Instance <Product at 0x7f55fb51abb0> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: http://sqlalche.me/e/bhk3)
AttributeError: 'NoneType' object has no attribute '_iterate_self_and_parents'
sqlalchemy.exc.InvalidRequestError: This session is in 'inactive' state, due to the SQL transaction being rolled back; no further SQL can be emitted within this transaction.

Error in the tests

$ make test
docker-compose up -d
Creating network "code_default" with the default driver
Creating code_redis_1    ... done
Creating code_mailhog_1  ... done
Creating code_postgres_1 ... done
Creating code_redis_pubsub_1 ... done
Creating code_api_1          ... done
docker-compose run --rm --no-deps --entrypoint=pytest api /tests/unit /tests/integration /tests/e2e
Creating code_api_run ... done
================================================= test session starts ==================================================
platform linux -- Python 3.9.12, pytest-7.1.1, pluggy-1.0.0
rootdir: /tests, configfile: pytest.ini
plugins: icdiff-0.5
collected 31 items                                                                                                     

../tests/unit/test_batches.py ......                                                                             [ 19%]
../tests/unit/test_handlers.py ........                                                                          [ 45%]
../tests/unit/test_product.py ......                                                                             [ 64%]
../tests/integration/test_email.py .                                                                             [ 67%]
../tests/integration/test_repository.py .                                                                        [ 70%]
../tests/integration/test_uow.py ....                                                                            [ 83%]
../tests/integration/test_views.py ..                                                                            [ 90%]
../tests/e2e/test_api.py F.                                                                                      [ 96%]
../tests/e2e/test_external_events.py F                                                                           [100%]

======================================================= FAILURES =======================================================
__________________________________ test_happy_path_returns_202_and_batch_is_allocated __________________________________
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:703: in urlopen
    httplib_response = self._make_request(
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:449: in _make_request
    six.raise_from(e, None)
<string>:3: in raise_from
    ???
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:444: in _make_request
    httplib_response = conn.getresponse()
/usr/local/lib/python3.9/http/client.py:1377: in getresponse
    response.begin()
/usr/local/lib/python3.9/http/client.py:320: in begin
    version, status, reason = self._read_status()
/usr/local/lib/python3.9/http/client.py:289: in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
E   http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:
/usr/local/lib/python3.9/site-packages/requests/adapters.py:440: in send
    resp = conn.urlopen(
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:785: in urlopen
    retries = retries.increment(
/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py:550: in increment
    raise six.reraise(type(error), error, _stacktrace)
/usr/local/lib/python3.9/site-packages/urllib3/packages/six.py:769: in reraise
    raise value.with_traceback(tb)
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:703: in urlopen
    httplib_response = self._make_request(
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:449: in _make_request
    six.raise_from(e, None)
<string>:3: in raise_from
    ???
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:444: in _make_request
    httplib_response = conn.getresponse()
/usr/local/lib/python3.9/http/client.py:1377: in getresponse
    response.begin()
/usr/local/lib/python3.9/http/client.py:320: in begin
    version, status, reason = self._read_status()
/usr/local/lib/python3.9/http/client.py:289: in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
E   urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:
/tests/e2e/test_api.py:15: in test_happy_path_returns_202_and_batch_is_allocated
    api_client.post_to_add_batch(earlybatch, sku, 100, "2011-01-01")
/tests/e2e/api_client.py:7: in post_to_add_batch
    r = requests.post(
/usr/local/lib/python3.9/site-packages/requests/api.py:117: in post
    return request('post', url, data=data, json=json, **kwargs)
/usr/local/lib/python3.9/site-packages/requests/api.py:61: in request
    return session.request(method=method, url=url, **kwargs)
/usr/local/lib/python3.9/site-packages/requests/sessions.py:529: in request
    resp = self.send(prep, **send_kwargs)
/usr/local/lib/python3.9/site-packages/requests/sessions.py:645: in send
    r = adapter.send(request, **kwargs)
/usr/local/lib/python3.9/site-packages/requests/adapters.py:501: in send
    raise ConnectionError(err, request=request)
E   requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
__________________________________ test_change_batch_quantity_leading_to_reallocation __________________________________
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:703: in urlopen
    httplib_response = self._make_request(
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:449: in _make_request
    six.raise_from(e, None)
<string>:3: in raise_from
    ???
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:444: in _make_request
    httplib_response = conn.getresponse()
/usr/local/lib/python3.9/http/client.py:1377: in getresponse
    response.begin()
/usr/local/lib/python3.9/http/client.py:320: in begin
    version, status, reason = self._read_status()
/usr/local/lib/python3.9/http/client.py:289: in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
E   http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:
/usr/local/lib/python3.9/site-packages/requests/adapters.py:440: in send
    resp = conn.urlopen(
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:785: in urlopen
    retries = retries.increment(
/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py:550: in increment
    raise six.reraise(type(error), error, _stacktrace)
/usr/local/lib/python3.9/site-packages/urllib3/packages/six.py:769: in reraise
    raise value.with_traceback(tb)
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:703: in urlopen
    httplib_response = self._make_request(
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:449: in _make_request
    six.raise_from(e, None)
<string>:3: in raise_from
    ???
/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py:444: in _make_request
    httplib_response = conn.getresponse()
/usr/local/lib/python3.9/http/client.py:1377: in getresponse
    response.begin()
/usr/local/lib/python3.9/http/client.py:320: in begin
    version, status, reason = self._read_status()
/usr/local/lib/python3.9/http/client.py:289: in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
E   urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

During handling of the above exception, another exception occurred:
/tests/e2e/test_external_events.py:17: in test_change_batch_quantity_leading_to_reallocation
    r = api_client.post_to_allocate(orderid, sku, 10)
/tests/e2e/api_client.py:15: in post_to_allocate
    r = requests.post(
/usr/local/lib/python3.9/site-packages/requests/api.py:117: in post
    return request('post', url, data=data, json=json, **kwargs)
/usr/local/lib/python3.9/site-packages/requests/api.py:61: in request
    return session.request(method=method, url=url, **kwargs)
/usr/local/lib/python3.9/site-packages/requests/sessions.py:529: in request
    resp = self.send(prep, **send_kwargs)
/usr/local/lib/python3.9/site-packages/requests/sessions.py:645: in send
    r = adapter.send(request, **kwargs)
/usr/local/lib/python3.9/site-packages/requests/adapters.py:501: in send
    raise ConnectionError(err, request=request)
E   requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
------------------------------------------------ Captured stdout setup -------------------------------------------------
skipping restart, assumes running in container
=============================================== short test summary info ================================================
FAILED ../tests/e2e/test_api.py::test_happy_path_returns_202_and_batch_is_allocated - requests.exceptions.ConnectionE...
FAILED ../tests/e2e/test_external_events.py::test_change_batch_quantity_leading_to_reallocation - requests.exceptions...
============================================= 2 failed, 29 passed in 8.65s =============================================
ERROR: 1
make: *** [Makefile:17: test] Error 1

Chapter 9: Unit Testing Event Handlers in Isolation - small bug

Hi,

I found that to get the additional isolation test to work when testing BatchQuantityChanged, I had to override collect_new_events in FakeUnitOfWorkWithFakeMessageBus (not publish_events) and then return an empty list.
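
Concretely, something like this worked for me (a sketch against the chapter 9 FakeUnitOfWork):

class FakeUnitOfWorkWithFakeMessageBus(FakeUnitOfWork):
    def __init__(self):
        super().__init__()
        self.events_published = []

    def collect_new_events(self):
        # capture events instead of letting the bus process them,
        # so the handler under test runs in isolation
        for product in self.products.seen:
            while product.events:
                self.events_published.append(product.events.pop(0))
        return []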

Small issue but thought I would raise it. Nice book by the way.
Cheers,
Dan

Chapter 6, 7: Is it good to let the UoW hold a specific repo?

It seems that the service layer should work with repos (based on aggregate roots) and use a uow to deal with transactions, like: https://stackoverflow.com/questions/9808577/multiple-generic-repositories-in-unitofwork

Let's say we have a use case where a customer in a business organization needs to confirm his/her purchase order, which will change the order status; then the product supplier will ship the product to the customer by express, and we also record the customer's total cost (maybe by sending an event message to the user management domain).

Suppose we have a "customer" repo aggregating models of "user", "organization" and "customer", and a "purchase_order" repo aggregating models of "sku", "customer" and "supplier", or maybe an "order" model with some value objects such as "shipping address".

We do not need to create a "customer uow" and a "purchase order uow", right? Should we just do something like the following?

class PurchaseOrderService(SomeBaseService):
    order_repo = PurchaseOrderRepository()

    @classmethod
    def order_confirm(cls, customer: Customer, order_no: int):
        with SqlalchemyUnitOfWork() as uow:
            order = cls.order_repo.get_by_order_no(order_no)
            order.confirm_by_customer(customer)
            customer.add_total_cost(order.price * order.line_qty)
            uow.commit()

Postgres/API never came up

In chapter 4, for the service layer, when I run the end-to-end tests I keep getting this error. When I try to comment out PostgreSQL and use SQLite instead, the error turns into "api never came up". Are end-to-end tests supposed to use a real database? I thought they would use SQLite like the integration tests. These are the errors I'm getting:
[screenshots: pytest output showing the connection errors]

When I dive deeper, I notice the error is coming from the web server:

[screenshot: web server traceback]

Any idea why this might be happening? Thanks for the great book, it has helped me a lot!

Out of stock handling

It seems that the situation when some goods are out of stock is not handled properly. Yes, an email will be sent, but allocate will still return "OK", as no exception is raised. Am I missing something?

Exercises don't work

The chapter_03_service_layer_exercise branch does not exist, and neither do the other exercise branches.

How would you deal with inherited types of a model?

Let's say you have different types of Batch that deal with different types of OrderLine, each type derived from its related base class.
Consider this use case: you can sell a Batch to a buyer, who can verify the purchase by validating a proof against information contained in another object stored on the network, which you can retrieve via a different service. Each type of Batch implements a different type of validation.
How would you design this case using the architecture depicted in the book?

Multiplying the Money NamedTuple in Chapter 1 is not a valid operation

Talking about value objects: in chapter 1 there is a Money NamedTuple:

class Money(NamedTuple):
    currency: str
    value: int

There are also some tests to demonstrate how a value object works; in the case of the Money class, there are tests for add, subtract and multiply, but the multiply operation throws an error with Python 3.10.5:

FAILED test_allocate_order.py::test_can_multiply_money - AssertionError: assert ('MX', 10, 'M...'MX', 10, ...) == Money(currency='MX', value=50)
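
I believe this is tuple semantics rather than a Python version change: NamedTuple inherits tuple's * operator, so Money('MX', 10) * 5 repeats the tuple, producing exactly the ('MX', 10, 'MX', 10, ...) shown in the assertion error. The test only passes once the operator is overridden; a sketch:

from typing import NamedTuple

class Money(NamedTuple):
    currency: str
    value: int

    def __mul__(self, factor: int) -> "Money":
        # override tuple.__mul__, which would otherwise repeat the tuple
        return Money(self.currency, self.value * factor)

assert Money("MX", 10) * 5 == Money(currency="MX", value=50)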
