akka / akka-persistence-r2dbc

Home Page: https://doc.akka.io/docs/akka-persistence-r2dbc/current/index.html

License: Other

Scala 99.49% Shell 0.05% Java 0.46%

akka-persistence-r2dbc's Introduction

Akka

The Akka family of projects is managed by teams at Lightbend with help from the community.

We believe that writing correct concurrent & distributed, resilient and elastic applications is too hard. Most of the time it's because we are using the wrong tools and the wrong level of abstraction.

Akka is here to change that.

Using the Actor Model we raise the abstraction level and provide a better platform to build correct concurrent and scalable applications. This model is a perfect match for the principles laid out in the Reactive Manifesto.

For resilience, we adopt the "Let it crash" model which the telecom industry has used with great success to build applications that self-heal and systems that never stop.

Actors also provide the abstraction for transparent distribution and the basis for truly scalable and fault-tolerant applications.

Learn more at akka.io.

Reference Documentation

The reference documentation is available at doc.akka.io, for Scala and Java.

Current versions of all Akka libraries

The current versions of all Akka libraries are listed on the Akka Dependencies page. Releases of the Akka core libraries in this repository are listed on the GitHub releases page.

Community

You can join these groups and chats to discuss and ask Akka-related questions:

In addition to that, you may enjoy following:

Contributing

Contributions are very welcome!

If you see an issue that you'd like to see fixed, or want to shape out some ideas, the best way to make it happen is to help out by submitting a pull request implementing it. We welcome contributions from all, even if you are not yet familiar with this project. We are happy to get you started, and will guide you through the process once you've submitted your PR.

Refer to the CONTRIBUTING.md file for more details about the workflow, and general hints on how to prepare your pull request. You can also ask for clarifications or guidance in GitHub issues directly, or in the akka/dev chat if more real-time communication would be of benefit.

License

Akka is licensed under the Business Source License 1.1; please see the Akka License FAQ.

Tests and documentation are under a separate license, see the LICENSE file in each documentation and test root directory for details.

akka-persistence-r2dbc's People

Contributors

aludwiko, audacioustux, ennru, franciscolopezsancho, johanandren, leviramsey, octonato, odd, patriknw, pvlugter, scala-steward, sebastian-alfers


akka-persistence-r2dbc's Issues

transaction_timestamp() not as expected

Looks like the transaction_timestamp() is not monotonically increasing for the same persistenceId.

From a failing ChaosSpec run:

postgres=# select persistence_id, sequence_number, db_timestamp, write_timestamp from event_journal where persistence_id = 'TestEntity-1|p6' order by db_timestamp, sequence_number;

 persistence_id  | sequence_number |         db_timestamp          | write_timestamp
-----------------+-----------------+-------------------------------+-----------------
 TestEntity-1|p6 |               1 | 2021-10-20 14:55:50.54862+00  |   1634741750497
 TestEntity-1|p6 |               2 | 2021-10-20 14:55:50.699197+00 |   1634741750655
 TestEntity-1|p6 |               3 | 2021-10-20 14:55:50.750724+00 |   1634741750712
 TestEntity-1|p6 |               4 | 2021-10-20 14:55:56.860302+00 |   1634741756816
 TestEntity-1|p6 |               5 | 2021-10-20 14:56:01.528778+00 |   1634741761493
 TestEntity-1|p6 |               9 | 2021-10-20 14:56:03.613466+00 |   1634741763608
 TestEntity-1|p6 |               6 | 2021-10-20 14:56:03.637993+00 |   1634741763597
 TestEntity-1|p6 |               7 | 2021-10-20 14:56:03.645786+00 |   1634741763608
 TestEntity-1|p6 |               8 | 2021-10-20 14:56:03.647508+00 |   1634741763608
 TestEntity-1|p6 |              10 | 2021-10-20 14:56:10.347297+00 |   1634741770330

9 is before 6.

Here sequence numbers 7, 8 and 9 were written with a single PersistAll, and that also shows that transaction_timestamp() is not the same within the transaction, which is surprising.

We already have some FIXMEs for batch inserts and maybe this FIXME explains the "wrong" order within the transaction:

// FIXME is it ok to execute next like this before previous has completed?

However, more important is that 9 got an earlier timestamp than 6. That means that the ORDER BY db_timestamp, sequence_number in the slice query will not work.

This might also explain #36
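
The eventsBySlices query relies on that ordering. Roughly, it has the same shape as the slice query shown in the explain analyze issue further down (values here taken from the data above for illustration):

SELECT * FROM event_journal
  WHERE slice BETWEEN 0 AND 31 AND entity_type_hint = 'TestEntity-1'
    AND db_timestamp >= '2021-10-20 14:55:50.54862+00'
    AND db_timestamp < transaction_timestamp() - interval '200 milliseconds'
    AND deleted = false
  ORDER BY db_timestamp, sequence_number
  LIMIT 1000;

With sequence number 9 getting an earlier db_timestamp than 6, events are emitted out of sequence number order for the same persistence id.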

For reference, this is how it looks when ordered by the client timestamp:

postgres=# select persistence_id, sequence_number, db_timestamp, write_timestamp from event_journal where persistence_id = 'TestEntity-1|p6' order by write_timestamp, sequence_number;
 persistence_id  | sequence_number |         db_timestamp          | write_timestamp
-----------------+-----------------+-------------------------------+-----------------
 TestEntity-1|p6 |               1 | 2021-10-20 14:55:50.54862+00  |   1634741750497
 TestEntity-1|p6 |               2 | 2021-10-20 14:55:50.699197+00 |   1634741750655
 TestEntity-1|p6 |               3 | 2021-10-20 14:55:50.750724+00 |   1634741750712
 TestEntity-1|p6 |               4 | 2021-10-20 14:55:56.860302+00 |   1634741756816
 TestEntity-1|p6 |               5 | 2021-10-20 14:56:01.528778+00 |   1634741761493
 TestEntity-1|p6 |               6 | 2021-10-20 14:56:03.637993+00 |   1634741763597
 TestEntity-1|p6 |               7 | 2021-10-20 14:56:03.645786+00 |   1634741763608
 TestEntity-1|p6 |               8 | 2021-10-20 14:56:03.647508+00 |   1634741763608
 TestEntity-1|p6 |               9 | 2021-10-20 14:56:03.613466+00 |   1634741763608
 TestEntity-1|p6 |              10 | 2021-10-20 14:56:10.347297+00 |   1634741770330
(10 rows)

Failed: EventsBySliceSpec

Against Yugabyte

 - should retrieve from several slices *** FAILED *** (12 seconds, 948 milliseconds)
[info]   A timeout occurred waiting for a future to complete. Waited 10000000000 nanoseconds. (EventsBySliceSpec.scala:338)
[info]   org.scalatest.concurrent.ScalaFutures$$anon$1$$anon$2:
[info]   at org.scalatest.concurrent.ScalaFutures$$anon$1.futureValueImpl(ScalaFutures.scala:339)
[info]   at org.scalatest.concurrent.Futures$FutureConcept.futureValue(Futures.scala:476)
[info]   at org.scalatest.concurrent.Futures$FutureConcept.futureValue$(Futures.scala:475)
[info]   at org.scalatest.concurrent.ScalaFutures$$anon$1.futureValue(ScalaFutures.scala:281)
[info]   at akka.persistence.r2dbc.query.EventsBySliceSpec$$anon$8.<init>(EventsBySliceSpec.scala:338)
[info]   at akka.persistence.r2dbc.query.EventsBySliceSpec.$anonfun$new$25(EventsBySliceSpec.scala:296)

logs_220.zip

Guard against transaction_timestamp() moving backwards

There is no guarantee that transaction_timestamp() is monotonically increasing.

We don't need it to be globally monotonically increasing, but for a persistence_id it would be strange for the db_timestamp not to be in the same order as the sequence_number. That order is used in the eventsBySlices query.

Maybe it can be handled by configuring the system clock on the database node, and that might be enough for ordinary Postgres, but we also need to understand how this works for the distributed SQL databases.

See https://forum.cockroachlabs.com/t/question-about-consistency-monotonic-timestamps/2353
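
One possible guard, sketched below, is to never let a new event's db_timestamp be earlier than the previous event's db_timestamp for the same persistence id. This is the same GREATEST(...) approach that is measured in the explain analyze issue further down; the bound values here are placeholders:

INSERT INTO event_journal
  (slice, entity_type_hint, persistence_id, sequence_number, writer,
   event_ser_id, event_ser_manifest, event_payload, db_timestamp)
  VALUES (1, 'Entity', 'pid1', 2, '', 1, 'A', '',
    GREATEST(transaction_timestamp(),
      (SELECT db_timestamp + '1 microsecond'::interval FROM event_journal
       WHERE slice = 1 AND entity_type_hint = 'Entity'
         AND persistence_id = 'pid1' AND sequence_number = 1)));

That makes db_timestamp monotonic per persistence_id even if transaction_timestamp() moves backwards, at the cost of the extra subselect (see the insert timings in the explain analyze issue).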

Review the logging

Make sure that the log messages contain useful context information (to be able to correlate).

Change some logs to trace or remove.

Backtracking doesn't find last event when no progress

https://github.com/akka/akka-persistence-r2dbc/runs/4079463405?check_suite_focus=true#step:7:2415

[2021-11-02 11:49:02,221] [INFO] [akka.actor.testkit.typed.scaladsl.LogCapturing] [] [] [pool-1-thread-1-ScalaTest-running-EndToEndSpec] - Logging finished for test [akka.projection.r2dbc.EndToEndSpec: A R2DBC projection with eventsBySlices source must handle all events exactlyOnce] that [Failed(java.lang.AssertionError: timeout (20 seconds) while expecting 200 messages (got 194))]
<-- [akka.projection.r2dbc.EndToEndSpec: A R2DBC projection with eventsBySlices source must handle all events exactlyOnce] End of log messages of test that [Failed(java.lang.AssertionError: timeout (20 seconds) while expecting 200 messages (got 194))]
[info] - must handle all events exactlyOnce *** FAILED *** (26 seconds, 502 milliseconds)
[info]   java.lang.AssertionError: timeout (20 seconds) while expecting 200 messages (got 194)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.assertFail(TestProbeImpl.scala:399)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.$anonfun$receiveMessages_internal$1(TestProbeImpl.scala:262)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.$anonfun$receiveMessages_internal$1$adapted(TestProbeImpl.scala:257)
[info]   at scala.collection.immutable.Range.map(Range.scala:59)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.receiveMessages_internal(TestProbeImpl.scala:257)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.receiveMessages(TestProbeImpl.scala:247)
[info]   at akka.projection.r2dbc.EndToEndSpec.$anonfun$new$2(EndToEndSpec.scala:217)

Rename entity_type_hint

Sorry for bringing up a naming bikeshedding topic, but I'm not happy with the _hint in column and method parameter names.

I associate "hint" with something that is optional but here it is essential.

Can we rename it to entity_type?

The name comes from PersistenceId, where it has some history and is optional.

Flaky DurableStateEndToEndSpec

Seen at https://github.com/akka/akka-persistence-r2dbc/runs/4157477944?check_suite_focus=true#step:7:8286

[info] - must handle latest updated state exactlyOnce *** FAILED *** (16 seconds, 138 milliseconds)
[info]   projectionId ProjectionId(f030042a-e8c0-4a89-8a0e-bcaf9db94a88, 64-95), persistenceId PersistenceId(TestEntity-4|p1), slice 95: The code passed to eventually never returned normally. Attempted 109 times over 10.037617700999999 seconds. Last failure message: empty.tail. (DurableStateEndToEndSpec.scala:232)
[info]   org.scalatest.exceptions.TestFailedDueToTimeoutException:
[info]   at org.scalatest.enablers.Retrying$$anon$4.tryTryAgain$2(Retrying.scala:185)
[info]   at org.scalatest.enablers.Retrying$$anon$4.retry(Retrying.scala:192)
[info]   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:402)
[info]   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:401)
[info]   at akka.actor.testkit.typed.scaladsl.ScalaTestWithActorTestKit.eventually(ScalaTestWithActorTestKit.scala:31)
[info]   at akka.projection.r2dbc.DurableStateEndToEndSpec.$anonfun$new$10(DurableStateEndToEndSpec.scala:232)
[info]   at org.scalatest.Assertions.withClue(Assertions.scala:1065)
[info]   at org.scalatest.Assertions.withClue$(Assertions.scala:1052)
[info]   at akka.actor.testkit.typed.scaladsl.ScalaTestWithActorTestKit.withClue(ScalaTestWithActorTestKit.scala:31)
[info]   at akka.projection.r2dbc.DurableStateEndToEndSpec.$anonfun$new$9(DurableStateEndToEndSpec.scala:231)
[info]   at akka.projection.r2dbc.DurableStateEndToEndSpec.$anonfun$new$9$adapted(DurableStateEndToEndSpec.scala:224)
[info]   at scala.collection.immutable.Range.foreach(Range.scala:190)
[info]   at akka.projection.r2dbc.DurableStateEndToEndSpec.$anonfun$new$8(DurableStateEndToEndSpec.scala:224)
[info]   at akka.projection.r2dbc.DurableStateEndToEndSpec.$anonfun$new$8$adapted(DurableStateEndToEndSpec.scala:223)
[info]   at scala.collection.immutable.Map$Map4.foreach(Map.scala:571)
[info]   at akka.projection.r2dbc.DurableStateEndToEndSpec.$anonfun$new$2(DurableStateEndToEndSpec.scala:223)
[info]   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
[info]   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)

Unexpected concurrent modification of state from saveOffset

Something wrong with the concurrency assumption after projection restart...

[2021-10-25 10:00:25,128] [WARN] [akka.stream.scaladsl.RestartWithBackoffSource] [RestartWithBackoffSource(akka://test)] [] [test-akka.actor.default-dispatcher-20] [] - Restarting stream due to failure [1]: java.lang.IllegalArgumentException: Unexpected offset [TimestampOffset(2021-10-25T10:00:23.283236Z,2021-10-25T10:00:24.584984Z,Map(configurable|test-1635156008202-49 -> 33))] before latest [TimestampOffset(2021-10-25T10:00:23.313051Z,2021-10-25T10:00:24.584984Z,Map(configurable|test-1635156008202-50 -> 33))].
java.lang.IllegalArgumentException: Unexpected offset [TimestampOffset(2021-10-25T10:00:23.283236Z,2021-10-25T10:00:24.584984Z,Map(configurable|test-1635156008202-49 -> 33))] before latest [TimestampOffset(2021-10-25T10:00:23.313051Z,2021-10-25T10:00:24.584984Z,Map(configurable|test-1635156008202-50 -> 33))].
	at akka.persistence.r2dbc.query.scaladsl.R2dbcReadJournal.nextOffset$2(R2dbcReadJournal.scala:208)
	at akka.persistence.r2dbc.query.scaladsl.R2dbcReadJournal.$anonfun$eventsBySlices$2(R2dbcReadJournal.scala:304)
	at akka.persistence.r2dbc.internal.ContinuousQuery$$anon$1.akka$persistence$r2dbc$internal$ContinuousQuery$$anon$$pushAndUpdateState(ContinuousQuery.scala:69)
	at akka.persistence.r2dbc.internal.ContinuousQuery$$anon$1$$anon$2.onPush(ContinuousQuery.scala:101)
	at akka.stream.stage.GraphStageLogic$SubSinkInlet.$anonfun$_sink$1(GraphStage.scala:1436)
	at akka.stream.stage.GraphStageLogic$SubSinkInlet.$anonfun$_sink$1$adapted(GraphStage.scala:1431)
	at akka.stream.impl.fusing.GraphInterpreter.runAsyncInput(GraphInterpreter.scala:467)
	at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:517)
	at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:625)
	at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:800)
	at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$shortCircuitBatch(ActorGraphInterpreter.scala:787)
	at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:819)
	at akka.actor.Actor.aroundReceive(Actor.scala:537)
	at akka.actor.Actor.aroundReceive$(Actor.scala:535)
	at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:716)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580)
	at akka.actor.ActorCell.invoke(ActorCell.scala:548)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
	at akka.dispatch.Mailbox.run(Mailbox.scala:231)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
  MDC: {akkaAddress=akka://[email protected]:2551, sourceThread=test-akka.actor.default-dispatcher-22, akkaSource=RestartWithBackoffSource(akka://test), sourceActorSystem=test, akkaTimestamp=10:00:25.128UTC}
[2021-10-25 10:00:25,683] [WARN] [akka.projection.r2dbc.internal.R2dbcProjectionImpl$R2dbcInternalProjectionState] [R2dbcProjectionImpl$R2dbcInternalProjectionState(akka://test)] [] [test-akka.actor.default-dispatcher-3] [] - [test-projection-id-0-projection-0-tag-1] First attempt to process envelopes with offsets from [TimestampOffset(2021-10-25T10:00:22.768451Z,2021-10-25T10:00:23.621201Z,Map(configurable|test-1635156008202-48 -> 22))] to [TimestampOffset(2021-10-25T10:00:22.920426Z,2021-10-25T10:00:23.621201Z,Map(configurable|test-1635156008202-50 -> 23))] failed. Will retry [2] time(s). Exception: java.lang.IllegalStateException: Unexpected concurrent modification of state from saveOffset.  MDC: {akkaAddress=akka://[email protected]:2551, sourceThread=test-akka.actor.default-dispatcher-23, akkaSource=R2dbcProjectionImpl$R2dbcInternalProjectionState(akka://test), sourceActorSystem=test, akkaTimestamp=10:00:25.683UTC}

The error that caused the restart is tracked in issue #36 but this ticket is about the concurrent modification.

Results of explain analyze with Yugabyte

Loaded event_journal2 table via akka-projection-testing with 80,000 rows.

Results of the explain analyze for the most important sql.

tl;dr: looking good


Far behind (12200 behind):

explain analyze SELECT * FROM event_journal2 WHERE slice BETWEEN 0 AND 31 AND entity_type_hint = 'configurable'
  AND db_timestamp >= '2021-10-25 10:30:20.042089+00' AND db_timestamp < transaction_timestamp() - interval '200 milliseconds'
  AND deleted = false
  ORDER BY db_timestamp, sequence_number
  LIMIT 1000;
--------------------------------------------------------------------------------------------------------------------
 Limit  (cost=5.54..5.57 rows=10 width=3197) (actual time=586.759..586.988 rows=1000 loops=1)
   ->  Sort  (cost=5.54..5.57 rows=10 width=3197) (actual time=586.757..586.904 rows=1000 loops=1)
         Sort Key: db_timestamp, sequence_number
         Sort Method: top-N heapsort  Memory: 900kB
         ->  Index Scan using event_journal2_slice_idx on event_journal2  (cost=0.00..5.38 rows=10 width=3197) (actual time=52.541..583.150 rows=12200 loops=1)
               Index Cond: (((entity_type_hint)::text = 'configurable'::text) AND (slice >= 0) AND (slice <= 31) AND (db_timestamp >= '2021-10-25 10:30:20.042089+00'::timestamp with time zone) AND (db_timestamp < (transaction_timestamp() - '00:00:00.2'::interval)))
               Filter: (NOT deleted)
 Planning Time: 0.181 ms
 Execution Time: 588.810 ms

Almost at the end, 1100 rows:

explain analyze SELECT * FROM event_journal2 WHERE slice BETWEEN 0 AND 31 AND entity_type_hint = 'configurable'
  AND db_timestamp >= '2021-10-25 10:54:20.042089+00' AND db_timestamp < transaction_timestamp() - interval '200 milliseconds'
  AND deleted = false
  ORDER BY db_timestamp, sequence_number
  LIMIT 1000;
--------------------------------------------------------------------------------------------------------------------
 Limit  (cost=5.54..5.57 rows=10 width=3197) (actual time=55.551..55.771 rows=1000 loops=1)
   ->  Sort  (cost=5.54..5.57 rows=10 width=3197) (actual time=55.550..55.681 rows=1000 loops=1)
         Sort Key: db_timestamp, sequence_number
         Sort Method: quicksort  Memory: 616kB
         ->  Index Scan using event_journal2_slice_idx on event_journal2  (cost=0.00..5.38 rows=10 width=3197) (actual time=49.381..55.188 rows=1100 loops=1)
               Index Cond: (((entity_type_hint)::text = 'configurable'::text) AND (slice >= 0) AND (slice <= 31) AND (db_timestamp >= '2021-10-25 10:54:20.042089+00'::timestamp with time zone) AND (db_timestamp < (transaction_timestamp() - '00:00:00.2'::interval)))
               Filter: (NOT deleted)
 Planning Time: 0.172 ms
 Execution Time: 56.041 ms

insert:

explain analyze INSERT INTO event_journal2
    (slice, entity_type_hint, persistence_id, sequence_number, writer, event_ser_id, event_ser_manifest, event_payload, db_timestamp)
    VALUES (1, 'Entity', 'pid1', 2, '', 1, 'A', '', GREATEST(transaction_timestamp(),
      (SELECT db_timestamp + '1 microsecond'::interval FROM event_journal WHERE slice = 1 AND entity_type_hint = 'Entity' AND persistence_id = 'pid1' AND sequence_number = 1)));
------------------------------------------------------------------------------------------------------------------
 Insert on event_journal2  (cost=4.12..4.14 rows=1 width=2229) (actual time=1.349..1.349 rows=0 loops=1)
   InitPlan 1 (returns $0)
     ->  Index Scan using event_journal_pkey on event_journal  (cost=0.00..4.12 rows=1 width=8) (actual time=1.190..1.193 rows=1 loops=1)
           Index Cond: ((slice = 1) AND ((entity_type_hint)::text = 'Entity'::text) AND ((persistence_id)::text = 'pid1'::text) AND (sequence_number = 1))
   ->  Result  (cost=0.00..0.01 rows=1 width=2229) (actual time=1.203..1.204 rows=1 loops=1)
 Planning Time: 1.907 ms
 Execution Time: 10.099 ms

insert without the previous timestamp subselect:

explain analyze INSERT INTO event_journal2
    (slice, entity_type_hint, persistence_id, sequence_number, writer, event_ser_id, event_ser_manifest, event_payload, db_timestamp) 
    VALUES (1, 'Entity', 'pid2', 1, '', 1, 'A', '', transaction_timestamp());
---------------------------------------------------------------------------------------------------------
 Insert on event_journal2  (cost=0.00..0.01 rows=1 width=2229) (actual time=0.290..0.290 rows=0 loops=1)
   ->  Result  (cost=0.00..0.01 rows=1 width=2229) (actual time=0.003..0.004 rows=1 loops=1)
 Planning Time: 0.082 ms
 Execution Time: 8.854 ms

highest sequence number:

explain analyze SELECT MAX(sequence_number) from event_journal2 WHERE slice = 77 AND entity_type_hint = 'configurable' AND persistence_id = 'configurable|test-1635156008202-1' AND sequence_number >= 1;
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=4.12..4.13 rows=1 width=8) (actual time=4.038..4.039 rows=1 loops=1)
   ->  Index Scan using event_journal2_pkey on event_journal2  (cost=0.00..4.12 rows=1 width=8) (actual time=3.877..4.015 rows=100 loops=1)
         Index Cond: ((slice = 77) AND ((entity_type_hint)::text = 'configurable'::text) AND ((persistence_id)::text = 'configurable|test-1635156008202-1'::text) AND (sequence_number >= 1))
 Planning Time: 0.215 ms
 Execution Time: 4.170 ms

replay events:

explain analyze SELECT * from event_journal2 WHERE slice = 77 AND entity_type_hint = 'configurable' AND persistence_id = 'configurable|test-1635156008202-1' AND sequence_number >= 17 AND sequence_number <= 100 AND deleted = false ORDER BY sequence_number;
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using event_journal2_pkey on event_journal2  (cost=0.00..4.12 rows=1 width=3197) (actual time=3.634..3.798 rows=84 loops=1)
   Index Cond: ((slice = 77) AND ((entity_type_hint)::text = 'configurable'::text) AND ((persistence_id)::text = 'configurable|test-1635156008202-1'::text) AND (sequence_number >= 17) AND (sequence_number <= 100))
   Filter: (NOT deleted)
 Planning Time: 0.187 ms
 Execution Time: 3.917 ms

Missing events

Here is a test failure of ChaosSpec
ChaosSpec-log.txt

It's missing p20/4 "e00160"

It is processing p20/3 and p20/5 but no processing of p20/4.

There is a "Filtering out duplicate" of p20/4, which indicates that it is in the OffsetStore state, but there is no "saving timestamp offset" of p20/4 before that.

Extremely annoying and puzzling. I can't understand this (yet).

Try using batch statements again

Something didn't work when I tried them, but maybe that was related to the concurrency problems we had.

Note that there are two types of batch statements.

  1. Statements with bind parameters that are separated with .add()
  2. Batch without bind parameters

I don't think 2 is interesting for us because we want to use bind parameters (e.g. binary params).

Combine with live stream of events sent as Akka messages

With the precise deduplication in place I think we have a good opportunity to publish the events over Distributed PubSub to the projection streams.

It should only be fire-and-forget publishing, with no attempt at reliable delivery.

Then we can reduce the database polling frequency.

How to handle first event with unknown sequence number

We must be careful with accepting events with seqNr > 1 where the previous sequence number is not known by the offset store. It could have missed the previous sequence number because of how the timestamp-based offset works (see design notes).

The current implementation will therefore reject such an event unless it is at least 3 seconds old (configurable by accept-new-sequence-number-after-age). The idea is that it will be picked up by the backtracking query, and then it would be safe since the transaction_timestamp issues would have been resolved.

I don't like that solution. It introduces additional delay for that case (an entity that wasn't active for a long time). How long must the delay be to be safe?

What can we do instead? I'm not sure, but one thought would be to query the journal for the db_timestamp of the previous event. If that is older than the offset store window then it's safe to accept immediately.
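
A hedged sketch of that lookup, using the journal schema from the other issues; the slice, persistence id and sequence number here are placeholders for the previous event of the entity in question:

SELECT db_timestamp FROM event_journal
  WHERE slice = 6 AND entity_type_hint = 'TestEntity-1'
    AND persistence_id = 'TestEntity-1|p6' AND sequence_number = 3;

If the returned db_timestamp is older than the offset store time window, the event with the unknown previous sequence number could be accepted immediately instead of waiting for the backtracking delay.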

Something wrong with select highest seq nr

The response times for the select highest seqNr are very high (seconds). Running at low load with Yugabyte Cloud:

[2021-10-26 07:22:28,116] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-2] - Selected [1] rows in [1090601] µs  MDC: {}
[2021-10-26 07:22:31,291] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-3] - Selected [1] rows in [1141654] µs  MDC: {}
[2021-10-26 07:22:34,365] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-4] - Selected [1] rows in [1109827] µs  MDC: {}
[2021-10-26 07:22:37,439] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-5] - Selected [1] rows in [1103863] µs  MDC: {}
[2021-10-26 07:22:40,662] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-6] - Selected [1] rows in [1095839] µs  MDC: {}
[2021-10-26 07:22:43,829] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-7] - Selected [1] rows in [1073033] µs  MDC: {}
[2021-10-26 07:22:47,058] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-8] - Selected [1] rows in [1072653] µs  MDC: {}
[2021-10-26 07:22:50,315] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-9] - Selected [1] rows in [1148679] µs  MDC: {}
[2021-10-26 07:22:53,346] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-10] - Selected [1] rows in [1090215] µs  MDC: {}
[2021-10-26 07:22:56,509] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-11] - Selected [1] rows in [1113912] µs  MDC: {}
[2021-10-26 07:22:59,617] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-12] - Selected [1] rows in [1139872] µs  MDC: {}
[2021-10-26 07:23:02,657] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-13] - Selected [1] rows in [1049832] µs  MDC: {}
[2021-10-26 07:23:05,892] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-14] - Selected [1] rows in [1125764] µs  MDC: {}
[2021-10-26 07:23:09,052] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-15] - Selected [1] rows in [1106523] µs  MDC: {}
[2021-10-26 07:23:12,278] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-16] - Selected [1] rows in [1102701] µs  MDC: {}
[2021-10-26 07:23:15,624] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-17] - Selected [1] rows in [1228560] µs  MDC: {}
[2021-10-26 07:24:17,990] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-18] - Selected [1] rows in [60363838] µs  MDC: {}
[2021-10-26 07:24:24,439] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-18] - Selected [1] rows in [1815248] µs  MDC: {}
[2021-10-26 07:24:29,830] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-19] - Selected [1] rows in [1083394] µs  MDC: {}
[2021-10-26 07:24:32,968] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-20] - Selected [1] rows in [1083638] µs  MDC: {}
[2021-10-26 07:24:36,103] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-21] - Selected [1] rows in [1148865] µs  MDC: {}
[2021-10-26 07:24:39,239] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-22] - Selected [1] rows in [1194330] µs  MDC: {}
[2021-10-26 07:24:42,195] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-23] - Selected [1] rows in [1110167] µs  MDC: {}
[2021-10-26 07:24:45,267] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-24] - Selected [1] rows in [1122530] µs  MDC: {}
[2021-10-26 07:24:48,282] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-25] - Selected [1] rows in [1007913] µs  MDC: {}
[2021-10-26 07:24:51,522] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-26] - Selected [1] rows in [1117221] µs  MDC: {}
[2021-10-26 07:24:54,584] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-27] - Selected [1] rows in [1078440] µs  MDC: {}
[2021-10-26 07:24:57,807] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-28] - Selected [1] rows in [1072950] µs  MDC: {}
[2021-10-26 07:25:01,001] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-29] - Selected [1] rows in [1056384] µs  MDC: {}
[2021-10-26 07:25:04,171] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-30] - Selected [1] rows in [1056532] µs  MDC: {}
[2021-10-26 07:25:07,230] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-31] - Selected [1] rows in [1065767] µs  MDC: {}
[2021-10-26 07:25:10,362] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-32] - Selected [1] rows in [1057321] µs  MDC: {}
[2021-10-26 07:25:13,556] [DEBUG] [akka.persistence.r2dbc.journal.JournalDao] [] [] [test-akka.persistence.dispatchers.default-plugin-dispatcher-38] [] - select highest seqNr [configurable|test-1635232942795-33] - Selected [1] rows in [1070987] µs  MDC: {}

Also note the 60 second gap in the middle.

Running explain analyze at the same time shows good execution times:

explain analyze SELECT MAX(sequence_number) from event_journal2 WHERE slice = 122 AND entity_type_hint = 'configurable' AND persistence_id = 'configurable|test-1635170303188-98b' AND sequence_number >= 1;
                                                                                           QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=4.12..4.13 rows=1 width=8) (actual time=1.505..1.506 rows=1 loops=1)
   ->  Index Scan using event_journal2_pkey on event_journal2  (cost=0.00..4.12 rows=1 width=8) (actual time=1.497..1.497 rows=0 loops=1)
         Index Cond: ((slice = 122) AND ((entity_type_hint)::text = 'configurable'::text) AND ((persistence_id)::text = 'configurable|test-1635170303188-98b'::text) AND (sequence_number >= 1))
 Planning Time: 0.225 ms
 Execution Time: 1.649 ms

Optimize backtracking data fetch

Assuming that missed events are rare it's wasteful to fetch the full event payload in the backtracking queries.

One simple optimization would be to do the deserialization work lazily, only when it has been verified that the event will be used.

Another approach would be to not fetch the event payload at all in the backtracking queries and instead fetch it for individual pid/seqNr from the journal table when it has been verified that the event will be used.
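
A sketch of the second approach: the backtracking query selects only what is needed for deduplication and offset tracking, and the payload is fetched separately by primary key for the (assumed rare) missed events. Table and bound values below are placeholders, mirroring the slice query from the explain analyze issue:

SELECT slice, entity_type_hint, persistence_id, sequence_number, db_timestamp
  FROM event_journal
  WHERE slice BETWEEN 0 AND 31 AND entity_type_hint = 'configurable'
    AND db_timestamp >= '2021-10-25 10:30:20.042089+00'
    AND db_timestamp < transaction_timestamp() - interval '200 milliseconds'
    AND deleted = false
  ORDER BY db_timestamp, sequence_number
  LIMIT 1000;

SELECT * FROM event_journal
  WHERE slice = 77 AND entity_type_hint = 'configurable'
    AND persistence_id = 'configurable|test-1635156008202-1' AND sequence_number = 17;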

Change of slice distribution

#107 added the slices to the offset store table, which should make it easy to change the slice distribution for projections, but that should be tried out for real.

IllegalArgumentException: Unexpected offset

Noticed this when running akka-projection-testing akka/akka-projection-testing#5

I can't understand how this would be possible...

[2021-10-18 15:36:25,724] [WARN] [akka.stream.scaladsl.RestartWithBackoffSource] [RestartWithBackoffSource(akka://test)] [] [test-akka.actor.default-dispatcher-33] [] - Restarting stream due to failure [1]: java.lang.IllegalArgumentException: Unexpected offset 
[TimestampOffset(2021-10-18T13:34:08.434982Z,2021-10-18T13:36:25.690672Z,Map(configurable|test-1634564027287-333 -> 9))] before latestBacktracking 
[TimestampOffset(2021-10-18T13:34:08.436100Z,2021-10-18T13:36:25.690672Z,Map(configurable|test-1634564027287-371 -> 7))].

Not only for the backtracking:

[2021-10-18 16:16:01,540] [ERROR] [akka.actor.RepointableActorRef] [akka://test/system/Materializers/StreamSupervisor-0/flow-31-0-ignoreSink] [] [test-akka.actor.default-dispatcher-27] [] - Error in stage [akka.persistence.r2dbc.internal.ContinuousQuery$$anon$1-futureFlattenSource]: Unexpected offset 
[TimestampOffset(2021-10-18T14:15:44.575466Z,2021-10-18T14:15:56.998003Z,Map(configurable|test-1634566540418-63 -> 58))] before latest 
[TimestampOffset(2021-10-18T14:15:44.575856Z,2021-10-18T14:15:56.998003Z,Map(configurable|test-1634566540418-29 -> 57))]. 

Monitor consumer lag

  • timestamp_offset is the db_timestamp of the original event
  • last_updated is when the offset was stored
  • the consumer lag is last_updated - timestamp_offset

It would be nice to periodically query and log consumer lag (can be done before delete).
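
A hedged sketch of such a query. The offset store table name and the projection_name column here are assumptions (only timestamp_offset and last_updated are given above), and if last_updated is stored as epoch millis rather than a timestamp the subtraction needs a conversion:

SELECT projection_name, MAX(last_updated - timestamp_offset) AS consumer_lag
  FROM akka_projection_timestamp_offset_store
  GROUP BY projection_name;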

How far behind should the backtracking query be?

Currently there are two non-overlapping queries in progress. The tail and the backtracking queries. The backtracking query is configured to be 3 seconds behind current (db) time.

That is probably not enough for a distributed database. Let's say that there is a partition for one shard in the db. Then data from that shard will show up later, when a new Raft leader has been elected (or similar). That's probably 5-10 seconds, at least.

Do we need a second backtracking query that is looking further back?

Somewhat related to #74

Leak of Direct buffer memory when running tests without fork

This is still a problem so we must continue using forked tests.

After about 3 full test runs in the same sbt session:

java.lang.OutOfMemoryError: Direct buffer memory
	at java.base/java.nio.Bits.reserveMemory(Bits.java:175)
	at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118)
	at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317)
	at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:632)
	at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:607)
	at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:202)
	at io.netty.buffer.PoolArena.tcacheAllocateSmall(PoolArena.java:172)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:134)
	at io.netty.buffer.PoolArena.allocate(PoolArena.java:126)
	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:395)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
	at io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:116)
	at io.r2dbc.postgresql.codec.IntegerCodec.lambda$doEncode$0(IntegerCodec.java:56)

Note that this can be because of how sbt manages classloaders for tests, and is probably not a problem for a real application.

Store only one snapshot?

Why do we store snapshots as append only and keep old snapshots?

I don't see a compelling reason for that and suggest that we use upsert to replace the snapshot for a persistenceId.

I would guess the reason comes from the Cassandra plugin, where snapshots can be stored with a lower consistency level, or because append-only is much preferred for Cassandra.

The reason could also be the early idea of replaying to a certain point in time (not always the latest), i.e. the different parameters in SnapshotCriteria. In my opinion we don't need to support that for snapshots.
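
A minimal sketch of the upsert idea for Postgres; the snapshot table and column names here are hypothetical, and ON CONFLICT requires a unique constraint on persistence_id:

INSERT INTO snapshot
  (persistence_id, sequence_number, write_timestamp, snapshot_ser_id, snapshot_ser_manifest, snapshot_payload)
  VALUES ('TestEntity-1|p6', 10, 1634741770330, 1, 'A', '')
  ON CONFLICT (persistence_id) DO UPDATE SET
    sequence_number = excluded.sequence_number,
    write_timestamp = excluded.write_timestamp,
    snapshot_ser_id = excluded.snapshot_ser_id,
    snapshot_ser_manifest = excluded.snapshot_ser_manifest,
    snapshot_payload = excluded.snapshot_payload;

With that there is never more than one row per persistenceId, so old snapshots don't have to be cleaned up separately.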

Log warning: there is no transaction in progress

[WARN] [io.r2dbc.postgresql.client.ReactorNettyClient] [] [] [reactor-tcp-nio-1] - Notice: SEVERITY_LOCALIZED=WARNING, SEVERITY_NON_LOCALIZED=WARNING, CODE=25P01, MESSAGE=there is no transaction in progress, FILE=xact.c, 

For example when running ChaosSpec

Failed to delete timestamp offset

Noticed this when running akka-projection-testing for a longer time with Yugabyte:

[2021-11-02 11:53:14,157] [WARN] [akka.projection.r2dbc.internal.R2dbcOffsetStore] [] [] [test-akka.actor.default-dispatcher-27] [] - Failed to delete timestamp offset until [2021-11-02T11:46:34.550356Z] for projection [test-projection-id-0-projection-0-tag-2]: io.r2dbc.postgresql.ExceptionFactory$PostgresqlRollbackException: [40001] Query error: Restart read required at: { read: { physical: 1635853993768562 } local_limit: { physical: 1635853994138972 } global_limit: <min> in_txn_limit: <max> serial_no: 0 }  MDC: {}

Flaky failure in EventsBySlicePerfSpec

Seen in https://github.com/akka/akka-persistence-r2dbc/runs/3968156162?check_suite_focus=true#step:7:11286 for PR #52

[akka.persistence.r2dbc.query.EventsBySlicePerfSpec: EventsBySlices performance should retrieve from several slices] End of log messages of test that [Failed(java.lang.AssertionError: timeout (3 seconds) while expecting 20 messages (got 18))]
[info] - should retrieve from several slices *** FAILED *** (6 seconds, 621 milliseconds)
[info]   java.lang.AssertionError: timeout (3 seconds) while expecting 20 messages (got 18)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.assertFail(TestProbeImpl.scala:399)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.$anonfun$receiveMessages_internal$1(TestProbeImpl.scala:262)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.$anonfun$receiveMessages_internal$1$adapted(TestProbeImpl.scala:257)
[info]   at scala.collection.immutable.Range.map(Range.scala:59)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.receiveMessages_internal(TestProbeImpl.scala:257)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.receiveMessages(TestProbeImpl.scala:244)
[info]   at akka.persistence.r2dbc.query.EventsBySlicePerfSpec.$anonfun$new$2(EventsBySlicePerfSpec.scala:72)
[info]   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
[info]   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)

Include slice in the offset table

It would be easier to change the number of projections (change slice ranges) if we include the slice number in the offset table. Then a new projection with a different slice range can continue with the same offset rows without having to copy them and such.

When deleting old records we should probably keep 1 from each slice.

One small issue is that the maxNumberOfSlices is a property on the write side, and for this we would have to recompute the slice from the persistenceId on the projection side. That would be difficult if we make maxNumberOfSlices configurable, since the offset store isn't really aware of the source provider. Having a fixed maxNumberOfSlices might be ok? Alternatively the slice must be included in the EventEnvelope, and then we need another EventEnvelope class.
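
A hedged sketch of a delete that keeps the newest offset row per slice; the offset store table and column names are assumptions:

DELETE FROM akka_projection_timestamp_offset_store o
  WHERE o.projection_name = 'test-projection-id-0'
    AND o.timestamp_offset < (SELECT MAX(i.timestamp_offset)
      FROM akka_projection_timestamp_offset_store i
      WHERE i.projection_name = o.projection_name AND i.slice = o.slice);

A time condition on last_updated could be added on top, so that only rows older than some window are removed.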

Flaky DurableStateBySliceSpec

This was observed in a local run:

<-- [akka.persistence.r2dbc.state.DurableStateBySliceSpec: Current changesBySlices should return latest state for NoOffset] End of log messages of test that [Failed(java.lang.AssertionError: timeout (3 seconds) during fishForMessage, seen messages List(), hint: )]
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.assertFail(TestProbeImpl.scala:399)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.loop$1(TestProbeImpl.scala:316)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.fishForMessage_internal(TestProbeImpl.scala:320)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.fishForMessage(TestProbeImpl.scala:268)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.fishForMessage(TestProbeImpl.scala:275)
[info]   at akka.persistence.r2dbc.state.DurableStateBySliceSpec.fishForState(DurableStateBySliceSpec.scala:80)
[info]   at akka.persistence.r2dbc.state.DurableStateBySliceSpec$$anon$1.<init>(DurableStateBySliceSpec.scala:123)
[info]   at akka.persistence.r2dbc.state.DurableStateBySliceSpec.$anonfun$new$3(DurableStateBySliceSpec.scala:111)
[info]   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
[info]   at org.scalatest.wordspec.AnyWordSpecLike$$anon$3.apply(AnyWordSpecLike.scala:1076)
[info]   at akka.actor.testkit.typed.scaladsl.LogCapturing.withFixture(LogCapturing.scala:70)
[info]   at akka.actor.testkit.typed.scaladsl.LogCapturing.withFixture$(LogCapturing.scala:68)
[info]   at akka.persistence.r2dbc.state.DurableStateBySliceSpec.withFixture(DurableStateBySliceSpec.scala:55)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.invokeWithFixture$1(AnyWordSpecLike.scala:1074)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.$anonfun$runTest$1(AnyWordSpecLike.scala:1086)
[info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.runTest(AnyWordSpecLike.scala:1086)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.runTest$(AnyWordSpecLike.scala:1068)
[info]   at akka.persistence.r2dbc.state.DurableStateBySliceSpec.runTest(DurableStateBySliceSpec.scala:55)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.$anonfun$runTests$1(AnyWordSpecLike.scala:1145)
[info]   at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
[info]   at scala.collection.immutable.List.foreach(List.scala:333)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info]   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:390)
[info]   at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:427)
[info]   at scala.collection.immutable.List.foreach(List.scala:333)
[info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
[info]   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
[info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.runTests(AnyWordSpecLike.scala:1145)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.runTests$(AnyWordSpecLike.scala:1144)
[info]   at akka.persistence.r2dbc.state.DurableStateBySliceSpec.runTests(DurableStateBySliceSpec.scala:55)
[info]   at org.scalatest.Suite.run(Suite.scala:1112)
[info]   at org.scalatest.Suite.run$(Suite.scala:1094)
[info]   at akka.actor.testkit.typed.scaladsl.ScalaTestWithActorTestKit.org$scalatest$BeforeAndAfterAll$$super$run(ScalaTestWithActorTestKit.scala:31)
[info]   at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
[info]   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
[info]   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
[info]   at akka.persistence.r2dbc.state.DurableStateBySliceSpec.org$scalatest$wordspec$AnyWordSpecLike$$super$run(DurableStateBySliceSpec.scala:55)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.$anonfun$run$1(AnyWordSpecLike.scala:1190)
[info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.run(AnyWordSpecLike.scala:1190)
[info]   at org.scalatest.wordspec.AnyWordSpecLike.run$(AnyWordSpecLike.scala:1188)
[info]   at akka.persistence.r2dbc.state.DurableStateBySliceSpec.run(DurableStateBySliceSpec.scala:55)
[info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
[info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
[info]   at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
[info]   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[info]   at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[info]   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[info]   at java.base/java.lang.Thread.run(Thread.java:834)

Flaky failure in ChaosSpec

https://github.com/akka/akka-persistence-r2dbc/runs/3968156162?check_suite_focus=true#step:7:11286 for PR #52

[akka.projection.r2dbc.ChaosSpec: A R2DBC projection under random conditions must handle all events exactlyOnce] End of log messages of test that [Failed(java.lang.AssertionError: Timeout (15 seconds) during receiveMessage while waiting for message.)]
[info] - must handle all events exactlyOnce *** FAILED *** (23 seconds, 779 milliseconds)
[info]   java.lang.AssertionError: Timeout (15 seconds) during receiveMessage while waiting for message.
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.assertFail(TestProbeImpl.scala:399)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.$anonfun$receiveMessage_internal$1(TestProbeImpl.scala:182)
[info]   at scala.Option.getOrElse(Option.scala:201)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.receiveMessage_internal(TestProbeImpl.scala:182)
[info]   at akka.actor.testkit.typed.internal.TestProbeImpl.receiveMessage(TestProbeImpl.scala:179)
[info]   at akka.projection.r2dbc.ChaosSpec.$anonfun$new$7(ChaosSpec.scala:194)
[info]   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
[info]   at akka.projection.r2dbc.ChaosSpec.verify$1(ChaosSpec.scala:191)
[info]   at akka.projection.r2dbc.ChaosSpec.$anonfun$new$15(ChaosSpec.scala:298)
[info]   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
[info]   at akka.projection.r2dbc.ChaosSpec.$anonfun$new$2(ChaosSpec.scala:228)
[info]   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
[info]   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
[info]   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
[info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
[info]   at org.scalatest.Transformer.apply(Transformer.scala:22)

NotSerializableException: org.scalatest.matchers.should.Matchers$AWord

There is a problem with the sbt/ScalaTest setup when running forked tests.

It throws and logs weird things like:

Reporter completed abruptly with an exception after receiving event: TestFailed(Ordinal(0, 26),A timeout occurred waiting for a future to complete. Waited 10000000000 nanoseconds.,DurableStateBySliceSpec,akka.persistence.r2dbc.state.DurableStateBySliceSpec,Some(akka.persistence.r2dbc.state.DurableStateBySliceSpec),Live changesBySlices should retrieve from several slices,retrieve from several slices,Vector(),Vector(),Some(org.scalatest.concurrent.ScalaFutures$$anon$1$$anon$2: A timeout occurred waiting for a future to complete. Waited 10000000000 nanoseconds.),Some(10201),Some(IndentedText(- should retrieve from several slices,should retrieve from several slices,1)),Some(SeeStackDepthException),Some(akka.persistence.r2dbc.state.DurableStateBySliceSpec),None,pool-1-thread-1-ScalaTest-running-DurableStateBySliceSpec,1635406044954).
java.io.NotSerializableException: org.scalatest.matchers.should.Matchers$AWord
	at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1185)
	at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553)
	
	
Exception in thread "Thread-82" java.io.InvalidClassException: akka.persistence.r2dbc.state.DurableStateBySliceSpec; unable to create instance
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2183)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)
	at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)
	at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)
	at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)
	at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:493)
	at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:451)
	at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1$React.react(Framework.scala:839)
	at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1.run(Framework.scala:828)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.reflect.InvocationTargetException
	at jdk.internal.reflect.GeneratedSerializationConstructorAccessor352.newInstance(Unknown Source)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
	at java.base/java.io.ObjectStreamClass.lambda$newInstance$0(ObjectStreamClass.java:1097)
	at java.base/java.security.AccessController.doPrivileged(Native Method)
	at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
	at java.base/java.io.ObjectStreamClass.newInstance(ObjectStreamClass.java:1105)
	at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2180)
	... 26 more
Caused by: com.typesafe.config.ConfigException$Missing: merge of system properties,reference.conf @ jar:file:/Users/patrik/.sbt/boot/scala-2.12.14/org.scala-sbt/sbt/1.5.5/ssl-config-core_2.12-0.4.0.jar!/reference.conf: 1: No configuration setting found for key 'akka'
	at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(SimpleConfig.java:156)
	at com.typesafe.config.impl.SimpleConfig.findKey(SimpleConfig.java:149)
	at com.typesafe.config.impl.SimpleConfig.findOrNull(SimpleConfig.java:176)
	at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:188)
	at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:193)
	at com.typesafe.config.impl.SimpleConfig.getList(SimpleConfig.java:262)
	at com.typesafe.config.impl.SimpleConfig.getHomogeneousUnwrappedList(SimpleConfig.java:348)
	at com.typesafe.config.impl.SimpleConfig.getStringList(SimpleConfig.java:406)
	at akka.actor.ActorSystem$Settings$.amendSlf4jConfig(ActorSystem.scala:336)
	at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:817)
	at akka.actor.typed.ActorSystem$.createInternal(ActorSystem.scala:289)
	at akka.actor.typed.ActorSystem$.apply(ActorSystem.scala:204)
	at akka.actor.testkit.typed.scaladsl.ActorTestKit$.apply(ActorTestKit.scala:91)
	at akka.actor.testkit.typed.scaladsl.ActorTestKitBase.<init>(ActorTestKitBase.scala:36)

See also #19

EventsByTagQuery

eventsByTag and currentEventsByTag

Same offset tracking as for eventsBySlices.

DurableStateStoreQuery

currentChanges and changes

Same offset tracking as for eventsBySlices.

DurableState should also be slice based.

Related to #78
