pyeventsourcing / eventsourcing

A library for event sourcing in Python.

Home Page: https://eventsourcing.readthedocs.io/

License: BSD 3-Clause "New" or "Revised" License

Python 99.44% Shell 0.02% Makefile 0.54%
domain-driven-design ddd eventsourcing cqrs event-sourcing distributed-systems django sqlalchemy python python3

eventsourcing's Introduction


Event Sourcing in Python

A library for event sourcing in Python.

"totally amazing and a pleasure to use"

"very clean and intuitive"

"a huge help and time saver"

Please read the docs. See also extension projects.

Installation

Use pip to install the stable distribution from the Python Package Index.

$ pip install eventsourcing

Note that it is recommended to install Python packages into a virtual environment.

Synopsis

Define aggregates with the Aggregate class and the @event decorator.

from eventsourcing.domain import Aggregate, event

class Dog(Aggregate):
    @event('Registered')
    def __init__(self, name):
        self.name = name
        self.tricks = []

    @event('TrickAdded')
    def add_trick(self, trick):
        self.tricks.append(trick)

Define application objects with the Application class.

from eventsourcing.application import Application

class DogSchool(Application):
    def register_dog(self, name):
        dog = Dog(name)
        self.save(dog)
        return dog.id

    def add_trick(self, dog_id, trick):
        dog = self.repository.get(dog_id)
        dog.add_trick(trick)
        self.save(dog)

    def get_dog(self, dog_id):
        dog = self.repository.get(dog_id)
        return {'name': dog.name, 'tricks': tuple(dog.tricks)}

Write a test.

def test_dog_school():
    # Construct application object.
    school = DogSchool()

    # Evolve application state.
    dog_id = school.register_dog('Fido')
    school.add_trick(dog_id, 'roll over')
    school.add_trick(dog_id, 'play dead')

    # Query application state.
    dog = school.get_dog(dog_id)
    assert dog['name'] == 'Fido'
    assert dog['tricks'] == ('roll over', 'play dead')

    # Select notifications.
    notifications = school.notification_log.select(start=1, limit=10)
    assert len(notifications) == 3

Run the test with the default persistence module. Events are stored in memory using Python objects.

test_dog_school()

Configure the application to run with an SQLite database. Other persistence modules are available.

import os

os.environ["PERSISTENCE_MODULE"] = 'eventsourcing.sqlite'
os.environ["SQLITE_DBNAME"] = 'dog-school.db'

Run the test with SQLite.

test_dog_school()

See the documentation for more information.

Features

Aggregates and applications — base classes for event-sourced aggregates and applications. Suggests how to structure an event-sourced application. All classes are fully type-hinted to guide developers in using the library.

Flexible event store — flexible persistence of aggregate events. Combines an event mapper and an event recorder in ways that can be easily extended. Mapper uses a transcoder that can be easily extended to support custom model object types. Recorders supporting different databases can be easily substituted and configured with environment variables.

Application-level encryption and compression — encrypts and decrypts events inside the application. This means data will be encrypted in transit across a network ("on the wire") and at disk level including backups ("at rest"), which is a legal requirement in some jurisdictions when dealing with personally identifiable information (PII), for example under the EU's GDPR. Compression reduces the size of stored aggregate events and snapshots, usually by around 25% to 50% of the original size, which both shrinks the database and decreases transit time across a network.
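The compression side of this is easy to demonstrate with the standard library's zlib (an illustrative sketch, not the library's transcoder; the event payload here is made up):

```python
import json
import zlib

# A batch of serialized domain events (made-up payload, deliberately
# repetitive, as real event streams tend to be).
event = {
    "originator_id": "9f1cf6c1-32d4-4d3a-9c9a-2f8e1f3a7e21",
    "originator_version": 1,
    "timestamp": "2024-01-01T00:00:00+00:00",
    "topic": "dogschool.domain:Dog.TrickAdded",
    "state": {"trick": "roll over"},
}
payload = json.dumps([event] * 20).encode()
compressed = zlib.compress(payload)

# Compression shrinks the stored representation considerably.
assert len(compressed) < len(payload) // 2
```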

Snapshotting — reduces access-time for aggregates that have many events.
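The effect of snapshotting can be sketched in a few lines of plain Python (illustrative only, not the library's snapshot API): replaying from a snapshot skips the events the snapshot already summarizes.

```python
def replay(events, snapshot=None):
    """Rebuild aggregate state from an optional snapshot plus later events."""
    if snapshot is not None:
        tricks = list(snapshot["tricks"])  # copy, so the snapshot stays intact
        start = snapshot["version"]
    else:
        tricks, start = [], 0
    for trick in events[start:]:  # only replay events after the snapshot
        tricks.append(trick)
    return tricks

events = ["roll over", "play dead", "fetch"]
snapshot = {"version": 2, "tricks": ["roll over", "play dead"]}

# Same state either way, but the snapshot path replays only one event.
assert replay(events, snapshot) == replay(events) == events
```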

Versioning — allows changes to be introduced after an application has been deployed. Both aggregate events and aggregate snapshots can be versioned.

Optimistic concurrency control — ensures a distributed or horizontally scaled application doesn't become inconsistent due to concurrent method execution. Leverages optimistic concurrency controls in adapted database management systems.
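The mechanism can be illustrated with a minimal in-memory sketch (not the library's code): an append succeeds only if the writer's expected version matches the stream, so two concurrent writers cannot both extend the same aggregate version.

```python
class ConcurrencyError(Exception):
    """Raised when a writer's expected version is stale."""

class EventStream:
    def __init__(self):
        self.events = []

    def append(self, event, expected_version):
        # Succeed only if nobody else appended since this writer last read.
        if expected_version != len(self.events):
            raise ConcurrencyError(
                f"expected version {expected_version}, stream is at {len(self.events)}"
            )
        self.events.append(event)

stream = EventStream()
stream.append("Dog.Registered", expected_version=0)
stream.append("Dog.TrickAdded", expected_version=1)
try:
    stream.append("Dog.TrickAdded", expected_version=1)  # stale writer
except ConcurrencyError:
    pass  # the conflict is detected instead of silently corrupting the stream
assert stream.events == ["Dog.Registered", "Dog.TrickAdded"]
```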

Notifications and projections — reliable propagation of application events with pull-based notifications allows the application state to be projected accurately into replicas, indexes, view models, and other applications. Supports materialized views and CQRS.
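A pull-based projection can be sketched in plain Python (illustrative only; the notification shape here is made up): the reader tracks its position in the notification log and folds each notification into a view model, so the view can always be rebuilt from the log.

```python
def project(notifications, view, position):
    """Fold notifications after `position` into the view; return new position."""
    for notification in notifications[position:]:
        if notification["type"] == "TrickAdded":
            view.setdefault(notification["dog_id"], []).append(notification["trick"])
        position += 1
    return position

log = [
    {"type": "Registered", "dog_id": "fido"},
    {"type": "TrickAdded", "dog_id": "fido", "trick": "roll over"},
    {"type": "TrickAdded", "dog_id": "fido", "trick": "play dead"},
]
view = {}
pos = project(log, view, 0)
assert pos == 3
assert view == {"fido": ["roll over", "play dead"]}
```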

Event-driven systems — reliable event processing. Event-driven systems can be defined independently of particular persistence infrastructure and mode of running.

Detailed documentation — documentation provides general overview, introduction of concepts, explanation of usage, and detailed descriptions of library classes. All code is annotated with type hints.

Worked examples — includes examples showing how to develop aggregates, applications and systems.

Extensions

The GitHub organisation Event Sourcing in Python hosts extension projects for the Python eventsourcing library. There are projects that adapt popular ORMs such as Django and SQLAlchemy. There are projects that adapt specialist event stores such as Axon Server and EventStoreDB. There are projects that support popular NoSQL databases such as DynamoDB. There are also projects that provide examples of using the library with web frameworks such as FastAPI and Flask, and for serving applications and running systems with efficient inter-process communication technologies like gRPC. And there are examples of event-sourced applications and systems of event-sourced applications, such as the Paxos system, which is used as the basis for a replicated state machine, which is used as the basis for a distributed key-value store.

Project

This project is hosted on GitHub.

Please register questions, requests and issues on GitHub, or post in the project's Slack channel, which you are welcome to join.

Please refer to the documentation for installation and usage guides.


eventsourcing's Issues

EntityWithHashchain.Event.__mutate__ is never called

I noticed that in the code for EntityWithHashchain.Created, the super call does not pass the class of self, but of its parent:

        def __mutate__(self, entity_class=None):
            # Call super method.
            obj = super(EntityWithHashchain.Event, self).__mutate__(entity_class)

            # Set entity head from event hash.
            obj.__head__ = self.__event_hash__

            return obj

This means the __mutate__ of EntityWithHashchain.Event is skipped. In fact, fixing the super call makes the example code in the quickstart guide fail. Probably the fix should be to remove the method.

LGPL licensing?

Might you consider a switch to LGPL licensing for the library (e.g., toward improving/encouraging participation by commercial software developers), or is this something that has already been ruled out on principle?

Best,
K

Rebuilding Aggregate root

Hey @johnbywater

First of all a big thanks, this library is awesome!
I've got a question about rebuilding the aggregate root.

I've got this simple hangman web API and I make different calls to that API for guessing letters. I just noticed that every time I call the API, I get a new aggregate root ID, with the consequence that I can never guess the word, letters, etc.

Is there a way to rebuild the aggregate to its latest state?

Already many thanks in advance!

Actor framework

Serialising aggregate commands would avoid concurrency errors. Application state could be more easily kept in memory, avoiding lots of reads. Thespian seems promising, since it has "convention" mode that runs across a cluster of nodes. Would like to write an application class that uses an actor framework: the repository could return aggregate actors. Could scale by application partition, with each partition having an application actor.

Discussion: Projections Features

Hi @johnbywater! I hope this is an appropriate place to discuss features. :) If not, maybe create a Wiki page?

I am interested in your plans for these forthcoming features:

  • Base class for event sourced projections or views (forthcoming)
    • In memory event sourced projection, which needs to replay entire event stream when system starts up (forthcoming)
    • Persistent event sourced projection, which stores its projected state, but needs to replay entire event stream when initialized (forthcoming)
  • Event sourced indexes, as persisted event source projections, to discover extant entity IDs (forthcoming)

I'd like to help out.

  1. Do you have any existing prototypes, design documents, or notes for these features?
  2. How much of the d5-kanban-python projections implementation were you thinking of borrowing?

Thanks!

Getting save errors with various class constructions

I'm still trying to tackle an example app and making progress in some directions but still failing to understand the fundamentals in this library so please know that I'm super appreciative of your patience. Once I figure this out I hope to make the example library very clear and runnable.

I'm trying to understand how best to construct classes using either WithReflexiveMutator or TimestampedVersionedEntity. I've created the same basic user class with the two modalities and I'm getting different errors.

I'd love to understand the errors, yes. But I'd also like to get a decent representation of both so I'm 100% open to feedback, critique, even eye-rolling and muttered swears.

Here's what I have so far: https://github.com/dgonzo/eventsourcing-kanban. See the README for the errors I'm seeing and the links to the relevant files.

What to do when changing the code location of an entity class?

Hi, thanks for this excellent library.

I was wondering what I would do if I wanted to restructure my codebase and move a domain entity class from one module to another. As far as I can tell, the eventsourcing library uses get_topic to determine the topic used in the DB to store the type of an event or entity.

Is there a way to swap the implementation of get_topic and retrieve_topic (maybe via dependency injection) in order to have a custom mapping in place that keeps backwards compatibility?

As an example, in original implementation the entity World lives in myapp.domain.world.World and therefore gets the topic myapp.domain.world#World. Now when relocating to myapp2.world.World the topic will be myapp2.world#World which is a mismatch.
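One workaround is a topic-aliasing shim consulted before resolution. This is a hand-rolled sketch, not library API; the TOPIC_ALIASES table and resolve_topic wrapper are hypothetical names:

```python
# Hypothetical alias table mapping retired topics to their current locations.
TOPIC_ALIASES = {
    "myapp.domain.world#World": "myapp2.world#World",
}

def resolve_topic(topic):
    """Translate a stored topic to its current location before class lookup."""
    return TOPIC_ALIASES.get(topic, topic)

assert resolve_topic("myapp.domain.world#World") == "myapp2.world#World"
assert resolve_topic("myapp2.world#World") == "myapp2.world#World"
```

Old records then keep resolving even after the class moves, without rewriting stored events.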

New Slack channel

Hey @renou @julianpistorius @subokita @yarbelkgazia @rogoman @pouledodue @danielccyr @jellevandehaterd @leonh @lukaszb!

In an attempt to improve communications around this project (perhaps we could organise meetings about this project) there's a new Slack channel: https://eventsourcinginpython.slack.com

There's a sign up page which is available for 7 days. Please sign up here:
https://eventsourcinginpython.slack.com/join/shared_invite/MjA2ODA4MTI0OTYyLTE0OTg5ODM1MDMtZjUyNTI1ZWIyZg

Sorry this has taken so long!

Multiple applications, duplicate events.

I've been reading through the documentation and I'm quite keen to try using the library but one thing caught my eye, which was the references to having to be careful only to instantiate one copy of the application.

"If your eventsourcing application object has any policies, for example if it has a persistence policy that will persist events whenever they are published, then constructing more than one instance of the application causes the policy event handlers to be subscribed more than once, so for example more than one attempt will be made to save each event, which won’t work."

"When deploying an event sourcing application with Django, just remember that there must only be one instance of the application in any given process, otherwise its subscribers will be registered too many times."

Is this duplication in publish and subscribe the only problem that you are aware of?

From a superficial reading of the code it looks as it would be possible to fix that, as you are using a global subscribe method and a global publish method, which are both hooked to a global store of event_handlers. Could these be moved to the Application object? Or is there an architectural reason for having it this way?

It certainly feels like it would make things cleaner and easier by not having these globals, but I realise I may be missing something from the bigger picture (I'm not deeply experienced with Event Sourcing in general).

I'd be happy to make the changes and do a PR if you think it's a good idea.

Thanks

Ed

Reintroduce SQL active record with auto-incrementing ID

Compromises scalability at a certain level, but makes getting all events quite simple. It should be an option, and an earlier version had this, so reintroduce an active record class that does this. Also revisit the method to get all events, to make sure it returns everything in order.

missing domain events

When trying to get all the entities in a repo, I have been seeing that several domain events were missing. (version 2.1.1, cassandra)

the code in question is:

class FooRepo(EventSourcedRepo):
    ...
    def get_foo_entities(self):
        foos_created = filter(
                lambda x: isinstance(x, CreatedFoo),
                self.event_store.all_domain_events())
        return [self.get_entity(f.entity_id) for f in foos_created]

When investigating, the event_store.all_domain_events() was returning 27 events, while there are 32 in the cassandra table.

switching to filtering on unique entity_ids seems to give the correct output:

    def get_foo_entities(self):
        foo_ids = set(
            f.sequence_id
            for f in self.event_store.active_record_strategy.active_record_class.objects.value_list('s').distinct()
        )
        return [self.get_entity(foo_id) for foo_id in foo_ids]

and getting the entity with get_entity for the missing events works correctly.

I have no idea why this is happening. Also, as random rows seem to be missing from the all_items calls, I'm leery about using either of these methods (which is why I'm using the Cassandra model class rather than all_items).

How to install with Cassandra?

I'm using pip version 9.01 and I tried using python 3.5, pypy 5.8, and python 2.7
But every time I try to install eventsourcing with either cassandra or sqlalchemy, it gives me an error like the following:

$ pip install eventsourcing[cassandra]
zsh: no matches found: eventsourcing[cassandra]

Is there a way that I could install eventsourcing with the specified event store?
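The "no matches found" message is zsh treating the square brackets as a glob pattern, rather than a pip failure; quoting the requirement avoids it:

```shell
# zsh expands unquoted [...] as a glob; quoting keeps the extras specifier intact.
pip install "eventsourcing[cassandra]"
```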

Latest release v7.1.4 tests fail

I've tried to contribute to the project to fix some issues and found that some unit tests fail.
Environment: Windows 10 x64, Python 3.7.0
Run command: python -m unittest
Failed tests:
eventsourcing.tests.core_tests.test_events.TestEventWithTimestamp
AssertionError: Decimal('1563540932.143897') not greater than Decimal('1563540932.143897')
eventsourcing.tests.core_tests.test_events.TestEventWithTimestampAndOriginatorID
AssertionError: Decimal('1563540932.144888') not less than Decimal('1563540932.144888')

And sometimes there is a flaky assertion in this test:
eventsourcing.tests.core_tests.test_utils.TestUtils
AssertionError: Decimal('1563542114.857965') not greater than Decimal('1563542114.857965')

Is it OK?

Aggregate root event subscribe

Having following aggregate root:

class Order(AggregateRoot):
    class SomethingHappened(AggregateRoot.Event):
        pass

Subscribing to the event:

@subscribe_to(Order.SomethingHappened)
def on_something(e):
    pass

Subscriber never gets called.

After debugging subscribe_to, I see that event_class is <class 'domain.order.SomethingHappened'> while the event variable is a list, so event_type_predicate returns false.
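Since events are published in lists, a predicate that accepts both a single event and a list of events works around this (a sketch; the actual subscribe_to internals may differ):

```python
# Predicate that matches a single event, or a non-empty list/tuple of events,
# of the given class (hypothetical helper, not part of the library).
def is_event_of(event, event_class):
    if isinstance(event, (list, tuple)):
        return bool(event) and all(is_event_of(e, event_class) for e in event)
    return isinstance(event, event_class)

class SomethingHappened:
    pass

assert is_event_of(SomethingHappened(), SomethingHappened)
assert is_event_of([SomethingHappened(), SomethingHappened()], SomethingHappened)
assert not is_event_of([], SomethingHappened)
```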

A suggestion about the event topic versioning/point in time

Hi John,

I am learning both your excellent "eventsourcing" library and the event sourcing in general and just came across a potential implementation issue regarding the "amended events"/"point in time database". Unless I am missing something.

As the event data in the event store (database storage) ideally should be immutable, there seems to be no graceful way to handle the situation when an event handler (class) that, for example, reconstructs an entity must be amended (due to an error) or extended (due to a change in requirements).

It is of course possible to amend the class itself, but then the ability to look "back in time" will be lost, as the state/entity will be reconstructed according to the latest code that is running, not the code that was active at the given time. And the "point in time" aspect is one of the major benefits provided by this approach.

As "topic" is effectively a fully qualified "class path" binding the data record in the data storage (repository) to the event class, it will probably be useful to introduce some type of "topic cut-off point"/"topic version"/"event revision" that will allow binding an old (existing) data item to the most recent active class implementation by default, or to another one in a standard/customizable way.

This would also allow dealing in a standard way with classes being moved across modules (which is a lesser issue IMHO, but still).

Thanks.

Regards.

ProcessManager/Saga docs and samples

It would be great to extend the examples module with sample ProcessManager code, and maybe showcase several aggregates/entities and a process manager in a real-world scenario.

Data integrity

  • detect random mutation in stored records

  • validate aggregate event sequence

  • validate application (somehow use application log?)

  • sequenced item mapper could hash and check individual record?

  • event class could hash values or check values when constructed with a hash?

  • event store could get previous event before appending next?

  • aggregate could hash event with last hash when triggered

  • aggregate mutator could set last hash (maybe in validate method?)

  • (*) aggregate could track hash of events, setting last hash in current event, such an event could be validated by an aggregate against the previous event before being applied
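The starred idea, where each event carries the hash of its predecessor, can be sketched with the standard library (illustrative only, not the library's implementation):

```python
import hashlib
import json

def chain_events(events):
    """Attach to each event the SHA-256 hash chaining it to its predecessor."""
    last_hash = ""
    chained = []
    for event in events:
        record = {"data": event, "prev_hash": last_hash}
        last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = last_hash
        chained.append(record)
    return chained

def verify_chain(chained):
    """Recompute each hash; any mutation breaks the chain from that point on."""
    last_hash = ""
    for record in chained:
        expected = hashlib.sha256(
            json.dumps({"data": record["data"], "prev_hash": last_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if expected != record["hash"] or record["prev_hash"] != last_hash:
            return False
        last_hash = record["hash"]
    return True

events = chain_events(["Registered", "TrickAdded"])
assert verify_chain(events)
events[0]["data"] = "Tampered"  # random mutation in a stored record
assert not verify_chain(events)
```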

1.10 release?

Any chance you could release the 1.10 version to PyPI? 1.09 doesn't seem to work with MySQL and SQLAlchemy.

sqlalchemy.exc.CompileError: (in table 'stored_events', column 'event_id'): VARCHAR requires a length on dialect mysql

This is already fixed in the main branch.

More in depth documentation

The documentation now does a good job at explaining how to build your own custom parts into a toy application, but it doesn't document how to use all of the parts you have created, such as EventSourcedApplication, and the various built-in Event types.

It would be nice to get the documentation into readthedocs, with more in depth docs on usage

Recommended way to store plain records

I've been struggling with the simple use case of needing to store an email lookup alongside the event store.

Maybe I'm thinking of this wrong but in my mind, the simplest way to store an account email would be an "email" table that enforces uniqueness of the email. This would be a plain-jane table with an entity_id primary key and an email column.

However, I'm hitting an escalating complexity trying to either work with the eventsourcing global scoped session or trying to work around it with a separate session (e.g. records are colliding or I'm hitting a locked database.)

What is the recommended way to do something like an email store? The email is stored in the event_store as well, but it's in a TEXT blob, and sorting through versions and then doing a full-text search seems like a huge over-invention when standard RDBMS tables do this "for free".

Sequences aren't separated by type

With two entity classes and two repositories, one repository for each entity class, the ID of an aggregate of one type will exist in the repository for the other class of entity. It probably shouldn't.

Solution: either map IDs into a namespace (which gets messy), or expand the stored event classes to have a sequence type (backwards incompatible changes).

Read events from DynamoDB

With the rise of ES/CQRS with DynamoDB Streams-->AWS Lambda, do you plan to add support for reading events from DynamoDB? ^_^

Cassandra Datastore

using v2.1.1
When setting up a Cassandra EventSourcedApplication, how is the CassandraDatastore used?

In the README (and in the code), for SQLAlchemy it is an argument to SQLAlchemyActiveRecordStrategy, but this is not part of CassandraActiveRecordStrategy. As such, I cannot figure out how to set up and use a datastore for Cassandra.

Aggregate event in docs is better than the library version

The library's Aggregate classes should be renamed to be just "entity" classes. Then the Aggregate classes in the docs should be replicated in the library, so that there is a library class that reflects and perhaps refines the style of saving several pending events introduced in the docs.

Snapshotting and mutator initial state

When I have snapshotting enabled, I need to set up a mutator for initial state of the replay:
https://github.com/johnbywater/eventsourcing/blob/v2.1.1/eventsourcing/infrastructure/eventplayer.py#L69

which is None. With the default handler, you get an unsupported exception. This should probably be added to the docs, as it is a per-entity-type handler:


@example_mutator.register(type(None))
def example_none_mutator(event, _):
    """Due to the snapshotting - you need to return the class on
    empty"""
    return ExampleEntity

Event migration

There are five approaches... it might be useful for the library to support them.

splitting up tests of cassandra and sqlalchemy

After playing about with Robert Smallshire's kanban example, I was very interested in trying out your more generic event sourcing project.

However, I know nothing of the Cassandra DB. I ran the test suite and noticed tests were failing due to my missing Cassandra dependency. I tried to split the Cassandra tests and SQLAlchemy tests into different modules, so that my lack of Cassandra was not hampering my ability to test SQLAlchemy.
leonh@919fad7

Assumptions on type in persistence policy

When using the Combined Persistence policy, there are undocumented assumptions about the class hierarchy of the events. This should probably be documented.

(If you haven't noticed, I'm adding notes for the documentation)

[question] rebuild snapshots

Hi John! Is there any ability to discard old snapshots and rebuild new ones? In the case of a change to the mutator function, the old snapshot becomes invalid and needs to be discarded and rebuilt.

Hitting eventsourcing.exceptions.EventHashError

Hi there, I'm currently playing around with the library and I'm hitting the following runtime error when running two commands in a row and then attempting to look up the aggregate root ID within app.repository:

python3.7/site-packages/eventsourcing/domain/model/events.py", line 139, in check_hash
raise EventHashError()
eventsourcing.exceptions.EventHashError

The code is available here https://github.com/AlanFoster/eventsourcing and the error is reproducible with:

pipenv shell
pipenv install
pipenv run python main.py

From what I can tell this might be a bug within the library, but I'm not sure just yet! I'd be keen to know your thoughts 👍

How do you incorporate this into web frameworks?

Specifically, I'm interested in how I would use the app in a django/uwsgi stack. I'm not sure of the thread safety, or where the context manager should be set up.

my ideas are:

  • in a middleware
  • in the views/api
  • in a thread.local object.

Projections rebuild

I wonder, in case of rebuilding all projections, is there an easy way to get all aggregates and rebuild their projections? We have repository.get_entity() but no repository.get_entities().

Discard entity not being removed from collection

I'm not sure what I'm doing wrong with either my entity class or my projection policy but I can't get my discard event to remove that entity from the collection.

A simplified version of my entity class looks like this:

class User(WithReflexiveMutator, AggregateRoot):
    """Aggregate root for user.
    A user is a namespace for accessing all workflow platform resources.
    """
    def __init__(self, user_id, name, password, email, default_domain, **kwargs):
        super(User, self).__init__(**kwargs)
        self.user_id = self._validate_user_id(user_id)
        self.default_domain = self._validate_domain(default_domain)
        self.domains = set()

    class Created(Event, AggregateRoot.Created):
        """Published when a user is created."""

        @property
        def user_id(self):
            return self.__dict__['user_id']

        @property
        def name(self):
            return self.__dict__['name']

        @property
        def password(self):
            return self.__dict__['password']

        @property
        def email(self):
            return self.__dict__['email']

        @property
        def default_domain(self):
            return self.__dict__['default_domain']

        def mutate(self, cls):
            entity = cls(**self.__dict__)
            entity.domains.add(self.default_domain)
            entity.increment_version()
            return entity

    class Discarded(Event, AggregateRoot.Discarded):
        """Published when a user is discarded."""

        @property
        def domain_namespace(self):
            return self.__dict__['domain_namespace']

        def mutate(self, entity):
            entity._is_discarded = True
            return None

    @staticmethod
    def create(name, password, email, default_domain, **kwargs):
        """Creates a new user."""
        user_id = uuid4()
        event = User.Created(
            originator_id=user_id,
            user_id=user_id,
            default_domain=default_domain,
            **kwargs
        )
        entity = event.mutate(cls=User)
        publish(event)
        return entity

    def discard(self):
        self._apply_and_publish(
            self._construct_event(
                User.Discarded,
                domain_namespace=self.default_domain
            )
        )

Here's my projection policy:

class UserProjectionPolicy:
    """Updates user collection whenever a user is created or discarded.
    """

    def __init__(self, user_collections):
        self.user_collections = user_collections
        subscribe(self.add_user_to_collection, self.is_user_created)
        subscribe(self.remove_user_from_collection, self.is_user_discarded)

    def close(self):
        unsubscribe(self.add_user_to_collection, self.is_user_created)
        unsubscribe(self.remove_user_from_collection, self.is_user_discarded)

    def is_user_created(self, event):
        if isinstance(event, (list, tuple)):
            return all(map(self.is_user_created, event))
        return isinstance(event, User.Created)

    def is_user_discarded(self, event):
        if isinstance(event, (list, tuple)):
            return all(map(self.is_user_discarded, event))
        return isinstance(event, User.Discarded)

    def add_user_to_collection(self, event):
        assert isinstance(event, User.Created), event
        domain_namespace = event.default_domain
        collection_id = make_user_collection_id(domain_namespace)
        try:
            collection = self.user_collections[collection_id]
        except KeyError:
            collection = register_new_collection(collection_id=collection_id)

        assert isinstance(collection, Collection)
        collection.add_item(event.originator_id)

    def remove_user_from_collection(self, event):
        if isinstance(event, (list, tuple)):
            return map(self.remove_user_from_collection, event)
        assert isinstance(event, User.Discarded), event
        domain_namespace = event.domain_namespace
        collection_id = make_user_collection_id(domain_namespace)
        try:
            collection = self.user_collections[collection_id]
        except KeyError:
            pass
        else:
            assert isinstance(collection, Collection)
            collection.remove_item(event.originator_id)

I can verify that when I use User.create the user is instantiated and added to the collection.

__qualname__

I am currently experimenting with eventsourcing and Django, and I am getting the following error:
AttributeError: type object has no attribute __qualname__
It comes from the ObjectJSONEncoder, which expects obj.__class__.__qualname__ to exist.

It would be nice to be able to inject my own ObjectJSONEncoder/Decoder

storage layer abstraction propose

Hi John!

I'm working on a new storage layer for the library using Django ORM for easy django integration. However, I find it hard to do in the current setup.

Currently, when we save a domain event we go through DomainEvent -> StoredEvent -> DB implementation. That works great if every DB implementation uses a schema that follows the StoredEvent schema. However, if I'd like to have a different schema, I need to somehow "deserialize" back to the domain event, since many DomainEvent fields are combined into one in StoredEvent.

My proposal is to move the abstraction a layer up, from StoredEventRepository to EventStore (and maybe rename "EventStore" to "EventRepo" to follow the repository pattern). Then the developer can have the flexibility of transcoding the domain event to whatever schema they like, and the downstream can be the default implementation of the repo.

The lib works great in most cases, as long as we use it as is and don't care about the actual storage. However, there are cases where we do care about how the data is stored.

Let me know what you think, thank you!

Here is an example of the code I have to write to implement a different schema:

    def write_version_and_event(self, new_stored_event, new_entity_version=None, max_retries=3, artificial_failure_rate=0):
        domain_event = self.deserialize(new_stored_event) # :(
        dt = datetime.datetime.fromtimestamp(timestamp_from_uuid(new_stored_event.event_id))
        Event.objects.create(
            event_id=domain_event.event_id,
            event_type=type(domain_event).__name__,
            event_data=new_stored_event.event_attrs,
            aggregate_id=domain_event.entity_id,
            aggregate_type=id_prefix_from_event(domain_event),
            aggregate_version=new_entity_version,
            create_date=make_aware_if_needed(dt),
        )

# models.py
class Event(models.Model):
    event_id = models.CharField(max_length=255, primary_key=True)
    event_type = models.CharField(max_length=255)
    event_data = models.TextField()
    aggregate_id = models.CharField(max_length=255, db_index=True)
    aggregate_type = models.CharField(max_length=255)
    aggregate_version = models.IntegerField()
    metadata = models.TextField()
    create_date = models.DateTimeField(db_index=True)

    class Meta:
        unique_together = ('aggregate_id', 'aggregate_version',)

I think the EventStore is a good line of abstraction. It takes in domain events on write and returns domain events on read.
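A minimal sketch of the boundary I have in mind (the EventRepo name and the in-memory implementation are illustrative, not library API; events are shown as plain dicts for brevity):

```python
from abc import ABC, abstractmethod
from collections import defaultdict


class EventRepo(ABC):
    """Takes domain events on write and returns domain events on read;
    how events are transcoded and stored is an implementation detail."""

    @abstractmethod
    def put(self, event):
        ...

    @abstractmethod
    def get(self, originator_id):
        ...


class InMemoryEventRepo(EventRepo):
    """Default implementation; a Django-backed repo could instead
    transcode each event into a model like the Event model above."""

    def __init__(self):
        self._events = defaultdict(list)

    def put(self, event):
        # store the domain event directly; no intermediate StoredEvent
        self._events[event['originator_id']].append(event)

    def get(self, originator_id):
        return list(self._events[originator_id])
```

With this boundary, the Django implementation could write straight from the domain event to its own schema, without deserializing a StoredEvent first.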
Bo

Question: injecting services into aggregates

Hello,
I am new to your library. I want to create an event-sourced aggregate with a service injected into the aggregate's constructor.

Say I want to create an aggregate that handles a command containing a password. I need to hash the password before constructing the event.

Something like this:

class Registration(AggregateRoot):
    def __init__(self, *args, **kwargs):
        # injected
        self._id = kwargs.pop('id')
        self._hash_password = kwargs.pop('hash_password')
        super().__init__(*args, **kwargs)
        # aggregate state
        self._hashed_password = None

    def set_password(self, **kwargs):
        if self._hashed_password:
            raise InvalidOperation
        self.__trigger_event__(
            PasswordSet,
            **PasswordSet.validate({
                'id': self._id,
                'hashed_password': self._hash_password(kwargs.pop('password')),
            })
        )

    def on_event(self, event):
        if isinstance(event, PasswordSet):
            self._hashed_password = event.hashed_password

As you can see, I inject a "hash_password" service, which is actually just a Python callable. However, when I try to construct the aggregate:

        test_registration_aggregate = Registration.__create__(
            id='abcd',
            hash_password=lambda x: x,
        )

I get this error:

Traceback (most recent call last):
  File "/home/runner/app/user_registrations/tests/practice.py", line 20, in setUp
    hash_password=lambda: 'new_password',
  File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/entity.py", line 229, in __create__
    return super(EntityWithHashchain, cls).__create__(*args, **kwargs)
  File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/entity.py", line 63, in __create__
    **kwargs
  File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/events.py", line 167, in __init__
    self.__dict__['__event_hash__'] = self.__hash_object__(self.__dict__)
  File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/domain/model/events.py", line 145, in __hash_object__
    return hash_object(cls.__json_encoder_class__, obj)
  File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/hashing.py", line 12, in hash_object
    cls=json_encoder_class,
  File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/transcoding.py", line 133, in json_dumps
    cls=cls,
  File "/usr/local/lib/python3.7/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/local/lib/python3.7/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/transcoding.py", line 23, in iterencode
    return super(ObjectJSONEncoder, self).iterencode(o, _one_shot=_one_shot)
  File "/usr/local/lib/python3.7/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/home/runner/.local/share/virtualenvs/app-i5Lb5gVx/lib/python3.7/site-packages/eventsourcing/utils/transcoding.py", line 65, in default
    return JSONEncoder.default(self, obj)
  File "/usr/local/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type builtin_function_or_method is not JSON serializable

So I assume that whatever is passed into __create__ must be JSON-serializable. But how else can I pass domain services into aggregates?
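One workaround that seems to avoid the serialization problem: resolve the service in the command method rather than the constructor, so only its (serializable) result crosses the event boundary. A plain-Python sketch, not library code; the hash_password function here is a deliberately naive stand-in:

```python
import hashlib


def hash_password(password):
    # illustrative only; a real system would use a salted KDF like bcrypt
    return hashlib.sha256(password.encode()).hexdigest()


class Registration:
    """Sketch: the service is a transient argument to the command method,
    so the callable itself never becomes part of the event's attributes."""

    def __init__(self, id):
        self.id = id
        self._hashed_password = None

    def set_password(self, password, hasher=hash_password):
        if self._hashed_password:
            raise ValueError('password already set')
        # only the serializable result goes into the event
        event = {'id': self.id, 'hashed_password': hasher(password)}
        self._on_password_set(event)

    def _on_password_set(self, event):
        self._hashed_password = event['hashed_password']
```

The aggregate state stays event-sourced, while the dependency is supplied per command call (or by the application layer) instead of being stored on the aggregate.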

Projections and domain code duplication

I update my domain model state in the AggregateRoot using mutators on events, so my state is consistent in the domain model.

I quite often find that the read model I rebuild in projections is exactly the same as the state I hold in the domain model. But because I am subscribed to an event, I have to duplicate the same calculation and state-assignment logic.

Ideally my projection would just dump the domain model state into a read store (ES, in my particular case) and handle any versioning conflicts.

I'm not sure how to structure this.

# domain
class SomethingHappend(Event, AggregateRoot.Event):
    def mutate(self, state):
        state.calculated = self.data * 100

# projection
@subscribe_to(SomethingHappend)
def on_something(event):
    state = get_from_read_db(event.originator_id)
    state.calculated = event.data * 100  # same calculation duplicated
    save_to_read_db(state)
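One way to cut the duplication might be to extract the calculation into a shared function that both the domain mutator and the projection delegate to, so the rule lives in one place. A sketch with illustrative names:

```python
# shared module: the single place the calculation lives
def apply_something_happened(state, data):
    state.calculated = data * 100
    return state


class State:
    """Minimal stand-in for both the aggregate state and the read model."""

    def __init__(self):
        self.calculated = None


# the domain mutator delegates to the shared function...
def mutate_domain(aggregate_state, event_data):
    return apply_something_happened(aggregate_state, event_data)


# ...and so does the projection, after loading the read model
def project_to_read_model(read_state, event_data):
    return apply_something_happened(read_state, event_data)
```

Changing the calculation then only touches apply_something_happened, and the projection cannot drift out of sync with the domain mutator.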

EventStore

Is there an example somewhere of how to use this library with EventStore (geteventstore.com), i.e. hydrating an aggregate from events stored in EventStore?

Documentation improvements

Rewrite the documentation to follow the structure of the library rather than the list of features.

Firstly, document the core event sourcing persistence mechanism ("given infrastructure is set up, when an event is stored, the event can be retrieved"). Use domain event classes from the library to show how to store and retrieve a sequence of events. The aspects of the persistence mechanism are:

  • active record classes (stored event schema, indexes and performance of the queries);
  • active record strategy (database management system);
  • the JSON object encoder and decoder;
  • the optional cipher strategy; and
  • the event store object which holds it all together.

Secondly, describe how each of the DDD concepts can be implemented using the library classes:

  • the layers (interface, application, domain, infrastructure);
  • how an aggregate can respond to commands by constructing, applying, and publishing events;
  • how to replay events to get current state using a mutator function; how an entity factory can work;
  • how a repository can provide a dictionary-like interface for accessing domain entities by ID;
  • how domain services can work;
  • how an aggregate root can work;
  • how to use entities within an aggregate;
  • how to use value objects within an aggregate;
  • how to use an application object both to bind the domain layer and infrastructure layer, and to present application services;
  • application policies and publish-subscribe mechanisms;
  • how interfaces can use an application object;
  • how application logs can allow projections to update themselves from the application state; and
  • how notification logs can allow remote projections to update another application.

And then include some example applications.

Anything else?

MySQL requires length of String column defined

Hi! When I use eventsourcing against MySQL, it gives an error when migrating the database:

sqlalchemy.exc.CompileError: (in table 'stored_events', column 'event_id'): VARCHAR requires a length on dialect mysql

Should the library add a length to the String columns, or do we need to handle schema creation manually?

Thanks!
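For reference, this is the kind of change that resolves the error: giving each String column an explicit length, so the MySQL dialect can compile it to VARCHAR(n). A sketch with SQLAlchemy (the column names follow the error message; the length of 255 is an assumption, not the library's choice):

```python
from sqlalchemy import Column, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class StoredEvent(Base):
    __tablename__ = 'stored_events'

    # String() without a length fails on MySQL;
    # String(255) compiles to VARCHAR(255) on the mysql dialect
    event_id = Column(String(255), primary_key=True)
    # Text has no length requirement, so it suits the event payload
    event_data = Column(Text)
```

Dialects like SQLite and PostgreSQL accept an unbounded VARCHAR, which is why the missing length only surfaces when targeting MySQL.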

questions

hi

Two questions:

  1. Ordering of events and clock drift
  • sqlalchemy_stored_events sorts by a Sequence(), which seems to be clock-drift-proof.
  • cassandra_stored_events sorts by a timeuuid, which means it is vulnerable to Cassandra server clock drift, right?
  2. Concurrency
  • If multiple processes work on the same entity at the same time, will there be concurrency problems?
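On the concurrency point, the usual safeguard is optimistic concurrency control: a unique constraint on (originator_id, version) makes the second of two concurrent writers fail instead of silently corrupting the stream. A minimal in-memory sketch of the idea (not the library's implementation):

```python
class ConcurrencyError(Exception):
    pass


class Stream:
    """Append-only event stream with an optimistic version check."""

    def __init__(self):
        self.events = []

    def append(self, event, expected_version):
        # equivalent to a unique (originator_id, version) constraint:
        # the write only succeeds if nobody else appended first
        if len(self.events) != expected_version:
            raise ConcurrencyError(
                f'expected version {expected_version}, '
                f'stream is at {len(self.events)}'
            )
        self.events.append(event)


stream = Stream()
stream.append('registered', expected_version=0)
```

A process that loses the race gets a ConcurrencyError and can retry by re-reading the entity and re-applying its command.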
