akka / akka-persistence-jdbc


Asynchronously writes journal and snapshot entries to configured JDBC databases so that Akka Actors can recover state

Home Page: https://doc.akka.io/docs/akka-persistence-jdbc/

License: Other

Scala 95.59% Shell 1.01% PLSQL 1.29% Java 1.71% TSQL 0.39%
akka-persistence journal persistence-query scala slick

akka-persistence-jdbc's Introduction

Akka

The Akka family of projects is managed by teams at Lightbend with help from the community.

We believe that writing correct concurrent & distributed, resilient and elastic applications is too hard. Most of the time it's because we are using the wrong tools and the wrong level of abstraction.

Akka is here to change that.

Using the Actor Model we raise the abstraction level and provide a better platform to build correct concurrent and scalable applications. This model is a perfect match for the principles laid out in the Reactive Manifesto.

For resilience, we adopt the "Let it crash" model which the telecom industry has used with great success to build applications that self-heal and systems that never stop.

Actors also provide the abstraction for transparent distribution and the basis for truly scalable and fault-tolerant applications.

Learn more at akka.io.

Reference Documentation

The reference documentation is available at doc.akka.io, for Scala and Java.

Current versions of all Akka libraries

The current versions of all Akka libraries are listed on the Akka Dependencies page. Releases of the Akka core libraries in this repository are listed on the GitHub releases page.

Community

You can join the Akka community's groups and chats to discuss and ask Akka-related questions. In addition, there are other Akka resources you may enjoy following.

Contributing

Contributions are very welcome!

If you see an issue that you'd like to see fixed, or want to shape out some ideas, the best way to make it happen is to help out by submitting a pull request implementing it. We welcome contributions from all, even if you are not yet familiar with this project. We are happy to get you started, and will guide you through the process once you've submitted your PR.

Refer to the CONTRIBUTING.md file for more details about the workflow and general hints on how to prepare your pull request. You can also ask for clarifications or guidance in GitHub issues directly, or in the akka/dev chat if more real-time communication would be of benefit.

License

Akka is licensed under the Business Source License 1.1, please see the Akka License FAQ.

Tests and documentation are under a separate license, see the LICENSE file in each documentation and test root directory for details.

akka-persistence-jdbc's People

Contributors

aenevala, aldenml, chbatey, debasishg, dispalt, dmi3zkm, dnvriend, dwijnand, ennru, fcristovao, frederic-gendebien, ignasi35, jiminhsieh, johanandren, jroper, jtysper, ktoso, marcospereira, notnotdaniel, nvollmar, octonato, odd, onsails, patriknw, rayroestenburg, rockjam, roiocam, scala-steward, skisel, wellingr


akka-persistence-jdbc's Issues

Custom PersistentRepr serialization to the database

I am currently serializing PersistentRepr using a custom serializer. Although it works, I still have to save redundant data in my JSON to be able to reconstruct the PersistentRepr. A sample of the redundant data is (persistence-id, sequence-number).

How about making the serialization of the PersistentRepr in the driver customizable (and optional)? That way no redundant data is stored and everybody can still write their own custom serialization for their own events instead of also for the PersistentRepr.

A question on eventsByTag behaviour

Hello!
Actively using your plugin, thank you for it :)

I have a question regarding persistence-query realtime streams like eventsByTag. The system I develop has two separate applications for the write side and the read side, and my intent is to have the read side poll the journal and fill the query-side database.
I'm stuck with eventsByTag replaying only existing events without supplying events that come later (so it behaves like currentEventsByTag). I inspected the code and it seems that the stream is fed realtime events only if there's a persistent actor in place. In my case that actor runs on another machine (the write side) and can't feed the read-side stream.

So the question is, am I correct about this behaviour?
If so, what do you think is the best way to overcome this limitation?

Thank you in advance!

Enable configuration of connection pool

It would be great if we could create a common connection pool for both the journal and other parts of the application. For example, it would then be possible to use Slick and JDBC persistence without separate connection pools.

All async queries do not work as expected.

All async queries do not work as expected. I must refactor the async query API to do polling. Please use only the current* queries and implement your own client-side polling strategy for now.
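
As a rough illustration, client-side polling with the current* queries could look like the sketch below (readJournal, handle and materializer are assumed to exist in the caller's code; the restart-on-tick strategy and the offset bookkeeping are purely illustrative):

  import scala.concurrent.duration._
  import akka.stream.scaladsl.{Sink, Source}

  // poll every few seconds, replaying only events past the highest offset seen so far
  var lastOffset = 0L
  Source.tick(0.seconds, 3.seconds, ())
    .flatMapConcat(_ => readJournal.currentEventsByTag("my-tag", lastOffset))
    .runWith(Sink.foreach { envelope =>
      lastOffset = envelope.offset + 1 // remember where to resume on the next tick
      handle(envelope)                 // hypothetical application-specific handler
    })(materializer)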

java.sql.SQLRecoverableException: No more data to read from socket

Hello,

The project I'm currently working on uses Akka cluster sharding with this plugin connected to an Oracle database. It works fine, but after leaving the project running for some time, or when the database is stopped, it throws the following error and is unable to recover:

2014/12/05 09:25:23.887 ERROR[t-dispatcher-14] s.StatementExecutor$$anon$1 81 --- SQL execution failed (Reason: No more data to read from socket):

MERGE INTO USERSIS.snapshot snapshot USING (SELECT '/user/sharding/RequestIDRouterCoordinator/singleton/coordinator' AS persistence_id, 7 AS seq_nr from DUAL) val ON (snapshot.persistence_id = val.persistence_id and snapshot.sequence_nr = val.seq_nr) WHEN MATCHED THEN UPDATE SET snapshot='qAAAAKztAAVzcgAtYWtrYS5wZXJzaXN0ZW5jZS5zZXJpYWxpemF0aW9uLlNuYXBzaG90SGVhZGVyAAAAAAAAAAECAAJJAAxzZXJp... (1268)' WHEN NOT MATCHED THEN INSERT (PERSISTENCE_ID, SEQUENCE_NR, SNAPSHOT, CREATED) VALUES ('/user/sharding/RequestIDRouterCoordinator/singleton/coordinator', 7, 'qAAAAKztAAVzcgAtYWtrYS5wZXJzaXN0ZW5jZS5zZXJpYWxpemF0aW9uLlNuYXBzaG90SGVhZGVyAAAAAAAAAAECAAJJAAxzZXJp... (1268)', 1417789523876)

2014/12/05 09:25:23.888 ERROR[n-dispatcher-44] s.StatementExecutor$$anon$1 81 --- SQL execution failed (Reason: No more data to read from socket):

DELETE FROM USERSIS.snapshot WHERE persistence_id = '/user/sharding/RequestIDRouterCoordinator/singleton/coordinator' AND sequence_nr = 7

[ERROR] [12/05/2014 09:25:23.889] [TokenCluster-akka.actor.default-dispatcher-14] [akka://TokenCluster/system/snapshot-store] No more data to read from socket
java.sql.SQLRecoverableException: No more data to read from socket
at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1157)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:350)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:227)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:208)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1046)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1336)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3613)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3694)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1354)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at scalikejdbc.StatementExecutor$$anonfun$executeUpdate$1.apply$mcI$sp(StatementExecutor.scala:337)
at scalikejdbc.StatementExecutor$$anonfun$executeUpdate$1.apply(StatementExecutor.scala:337)
at scalikejdbc.StatementExecutor$$anonfun$executeUpdate$1.apply(StatementExecutor.scala:337)
at scalikejdbc.StatementExecutor$NakedExecutor.apply(StatementExecutor.scala:33)
at scalikejdbc.StatementExecutor$$anon$1.scalikejdbc$StatementExecutor$LoggingSQLAndTiming$$super$apply(StatementExecutor.scala:317)
at scalikejdbc.StatementExecutor$LoggingSQLAndTiming$class.apply(StatementExecutor.scala:264)
at scalikejdbc.StatementExecutor$$anon$1.scalikejdbc$StatementExecutor$LoggingSQLIfFailed$$super$apply(StatementExecutor.scala:317)
at scalikejdbc.StatementExecutor$LoggingSQLIfFailed$class.apply(StatementExecutor.scala:295)
at scalikejdbc.StatementExecutor$$anon$1.apply(StatementExecutor.scala:317)
at scalikejdbc.StatementExecutor.executeUpdate(StatementExecutor.scala:337)
at scalikejdbc.DBSession$$anonfun$updateWithFilters$1.apply(DBSession.scala:352)
at scalikejdbc.DBSession$$anonfun$updateWithFilters$1.apply(DBSession.scala:350)
at scalikejdbc.LoanPattern$class.using(LoanPattern.scala:33)
at scalikejdbc.ActiveSession.using(DBSession.scala:457)
at scalikejdbc.DBSession$class.updateWithFilters(DBSession.scala:349)
at scalikejdbc.ActiveSession.updateWithFilters(DBSession.scala:457)
at scalikejdbc.DBSession$class.updateWithFilters(DBSession.scala:327)
at scalikejdbc.ActiveSession.updateWithFilters(DBSession.scala:457)
at scalikejdbc.SQLUpdate$$anonfun$10.apply(SQL.scala:486)
at scalikejdbc.SQLUpdate$$anonfun$10.apply(SQL.scala:486)
at scalikejdbc.DBConnection$class.autoCommit(DBConnection.scala:183)
at scalikejdbc.DB.autoCommit(DB.scala:75)
at scalikejdbc.DB$$anonfun$autoCommit$1.apply(DB.scala:218)
at scalikejdbc.DB$$anonfun$autoCommit$1.apply(DB.scala:217)
at scalikejdbc.LoanPattern$class.using(LoanPattern.scala:33)
at scalikejdbc.DB$.using(DB.scala:150)
at scalikejdbc.DB$.autoCommit(DB.scala:217)
at scalikejdbc.SQLUpdate.apply(SQL.scala:486)
at akka.persistence.jdbc.snapshot.GenericStatements$class.deleteSnapshot(Statements.scala:30)
at akka.persistence.jdbc.snapshot.OracleSyncSnapshotStore.deleteSnapshot(SnapshotStores.scala:16)
at akka.persistence.jdbc.snapshot.JdbcSyncSnapshotStore$class.delete(JdbcSyncSnapshotStore.scala:34)
at akka.persistence.jdbc.snapshot.OracleSyncSnapshotStore.delete(SnapshotStores.scala:16)
at akka.persistence.snapshot.SnapshotStore$$anonfun$receive$1.applyOrElse(SnapshotStore.scala:44)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at akka.persistence.jdbc.snapshot.OracleSyncSnapshotStore.aroundReceive(SnapshotStores.scala:16)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

It seems that it is unable to detect that the connection has expired or has been closed.

I'm not sure if this is the correct place to post this.

OracleProfile, Blob issue

Hi,

I'm working on a proof-of-concept event-sourcing application using Oracle as the database for now, and akka-persistence-jdbc for journal and snapshot storage.

I have a problem persisting snapshots if they exceed 4 KB in size.

I get the following:
Failed to saveSnapshot given metadata [SnapshotMetadata(view,74090,0)] due to: [java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column]

I assume there is a problem with the Oracle driver or the column mapping.

I was using the following version: "com.github.dnvriend" %% "akka-persistence-jdbc" % "2.6.5-RC2".

Thank you in advance for any help,
Sergey

OutOfMemory error when recovering with a large number of snapshots

Hello,

I was doing some performance testing, and one of my persistent actors makes heavy use of snapshotting. I'm getting an OutOfMemory error when I try to start up this persistent actor (at app startup time) and it tries to recover. I am using Oracle, Akka 2.3.9, and akka-persistence-jdbc 1.1.1.

I believe the problem code is:

  def selectSnapshotsFor(persistenceId: String, criteria: SnapshotSelectionCriteria): List[SelectedSnapshot] =
    SQL(s"SELECT * FROM $schema$table WHERE persistence_id = ? AND sequence_nr <= ? ORDER BY sequence_nr DESC")
      .bind(persistenceId, criteria.maxSequenceNr)
      .map { rs => SelectedSnapshot(SnapshotMetadata(rs.string("persistence_id"), rs.long("sequence_nr"), rs.long("created")), unmarshal(rs.string("snapshot")).data) }
      .list()
      .apply()
      .filterNot(snap => snap.metadata.timestamp > criteria.maxTimestamp)

Here is the JDBC call it tries to make:

SELECT * FROM SNAPSHOT WHERE persistence_id = 'myPersistenceId' AND sequence_nr <= 9223372036854775807 ORDER BY sequence_nr DESC;

It appears that this code is reading all of the snapshots into memory and only filtering by timestamp afterwards. Why not put this filter within the query itself?
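
For comparison, a minimal sketch of the same scalikejdbc statement with the timestamp filter pushed into the SQL itself (assuming the created column is the timestamp that criteria.maxTimestamp refers to); only the rows the caller actually needs are then read:

  def selectSnapshotsFor(persistenceId: String, criteria: SnapshotSelectionCriteria): List[SelectedSnapshot] =
    SQL(s"SELECT * FROM $schema$table WHERE persistence_id = ? AND sequence_nr <= ? AND created <= ? ORDER BY sequence_nr DESC")
      .bind(persistenceId, criteria.maxSequenceNr, criteria.maxTimestamp)
      .map { rs =>
        SelectedSnapshot(
          SnapshotMetadata(rs.string("persistence_id"), rs.long("sequence_nr"), rs.long("created")),
          unmarshal(rs.string("snapshot")).data)
      }
      .list()
      .apply()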

Here is the stack-trace:

[ERROR] - from akka.actor.ActorSystemImpl in application-akka.actor.default-dispatcher-4
Uncaught error from thread [application-akka.actor.default-dispatcher-16] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled
java.lang.OutOfMemoryError: Java heap space
        at java.lang.reflect.Array.newArray(Native Method) ~[na:1.8.0_11]
        at java.lang.reflect.Array.newInstance(Array.java:75) ~[na:1.8.0_11]
        at oracle.jdbc.driver.BufferCache.get(BufferCache.java:226) ~[com.oracle.ojdbc6-11.2.0.3.0.jar:11.2.0.3.0]
        at oracle.jdbc.driver.PhysicalConnection.getByteBuffer(PhysicalConnection.java:7659) ~[com.oracle.ojdbc6-11.2.0.3.0.jar:11.2.0.3.0]
        at oracle.jdbc.driver.T4C8TTIClob.read(T4C8TTIClob.java:226) ~[com.oracle.ojdbc6-11.2.0.3.0.jar:11.2.0.3.0]
        at oracle.jdbc.driver.T4CConnection.getChars(T4CConnection.java:3187) ~[com.oracle.ojdbc6-11.2.0.3.0.jar:11.2.0.3.0]
        at oracle.sql.CLOB.getChars(CLOB.java:459) ~[com.oracle.ojdbc6-11.2.0.3.0.jar:11.2.0.3.0]
        at oracle.sql.CLOB.getSubString(CLOB.java:321) ~[com.oracle.ojdbc6-11.2.0.3.0.jar:11.2.0.3.0]
        at oracle.jdbc.driver.T4CClobAccessor.getString(T4CClobAccessor.java:474) ~[com.oracle.ojdbc6-11.2.0.3.0.jar:11.2.0.3.0]
        at oracle.jdbc.driver.OracleResultSetImpl.getString(OracleResultSetImpl.java:1297) ~[com.oracle.ojdbc6-11.2.0.3.0.jar:11.2.0.3.0]
        at oracle.jdbc.driver.OracleResultSet.getString(OracleResultSet.java:494) ~[com.oracle.ojdbc6-11.2.0.3.0.jar:11.2.0.3.0]
        at scalikejdbc.TypeBinder$$anonfun$57.apply(TypeBinder.scala:112) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.TypeBinder$$anonfun$57.apply(TypeBinder.scala:112) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.TypeBinder$$anon$2.apply(TypeBinder.scala:43) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.WrappedResultSet$$anonfun$get$2.apply(WrappedResultSet.scala:474) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.WrappedResultSet.wrapIfError(WrappedResultSet.scala:40) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.WrappedResultSet.get(WrappedResultSet.scala:474) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.WrappedResultSet.string(WrappedResultSet.scala:361) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at akka.persistence.jdbc.snapshot.GenericStatements$$anonfun$1.apply(Statements.scala:50) ~[com.github.dnvriend.akka-persistence-jdbc_2.11-1.1.1.jar:1.1.1]
        at akka.persistence.jdbc.snapshot.GenericStatements$$anonfun$1.apply(Statements.scala:48) ~[com.github.dnvriend.akka-persistence-jdbc_2.11-1.1.1.jar:1.1.1]
        at scalikejdbc.DBSession$$anonfun$traversable$1$$anonfun$apply$2.apply(DBSession.scala:311) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.DBSession$$anonfun$traversable$1$$anonfun$apply$2.apply(DBSession.scala:311) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245) ~[org.scala-lang.scala-library-2.11.6.jar:na]
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245) ~[org.scala-lang.scala-library-2.11.6.jar:na]
        at scalikejdbc.ResultSetTraversable$$anonfun$foreach$1.apply(ResultSetTraversable.scala:37) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.ResultSetTraversable$$anonfun$foreach$1.apply(ResultSetTraversable.scala:34) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.LoanPattern$class.using(LoanPattern.scala:33) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.ResultSetTraversable.using(ResultSetTraversable.scala:24) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.ResultSetTraversable.foreach(ResultSetTraversable.scala:34) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:245) ~[org.scala-lang.scala-library-2.11.6.jar:na]
        at scalikejdbc.ResultSetTraversable.map(ResultSetTraversable.scala:24) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]
        at scalikejdbc.DBSession$$anonfun$traversable$1.apply(DBSession.scala:311) ~[org.scalikejdbc.scalikejdbc-core_2.11-2.2.4.jar:2.2.4]

Using a Datasource to initialize akka-persistence-jdbc

I've got an application that is switching over to Oracle Wallet-based authentication; this means that it will no longer use usernames and passwords but will use certificates instead.

Without hacking akka-persistence-jdbc, how can I override the database the code uses?

FYI, I'm using Akka (2.3.5) and akka-persistence-jdbc (1.08) via Groovy and Spring.

Why shouldn't we use this in production?

Wondering why there's a disclaimer not to use this in production. I'll be working on a small Akka project that's not really high traffic. Since I already have an RDBMS that will be serving the view part, I want to avoid putting up a separate Cassandra instance.

Provide a way to shut-down connections explicitly

When running a simple app like:

import akka.actor.{ActorSystem, Props}
import akka.persistence.{Persistence, PersistentActor}

object Application extends App {
  val system = ActorSystem()

  system.actorOf(Props(new PersistentActor {
    override def persistenceId: String = "the-guy"

    override def receiveRecover: Receive = {
      case x => println ("recover  : " + x + ", seqNr: " + lastSequenceNr)
    }

    override def receiveCommand: Receive = {
      case c => persist(c) { it => println("persisted: " + it + ", seqNr: " + lastSequenceNr) }
    }
  })) ! "hello world"
}

a number of times in a row (default configuration, PostgreSQL), connections seem to linger and it is relatively easy to get:

[ERROR] [04/12/2016 12:08:30.612] [default-akka.actor.default-dispatcher-3] [akka://default/user/$a] Persistence failure when replaying events for persistenceId [the-guy]. Last known sequence number [0]
java.sql.SQLTimeoutException: Timeout after 1004ms of waiting for a connection.
    at com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:227)
    at com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:182)
    at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)
    at slick.jdbc.hikaricp.HikariCPJdbcDataSource.createConnection(HikariCPJdbcDataSource.scala:12)
    at slick.jdbc.JdbcBackend$BaseSession.conn$lzycompute(JdbcBackend.scala:415)
    at slick.jdbc.JdbcBackend$BaseSession.conn(JdbcBackend.scala:414)
    at slick.jdbc.JdbcBackend$BaseSession.startInTransaction(JdbcBackend.scala:437)
    at slick.driver.JdbcActionComponent$StartTransaction$.run(JdbcActionComponent.scala:41)
    at slick.driver.JdbcActionComponent$StartTransaction$.run(JdbcActionComponent.scala:38)
    at slick.backend.DatabaseComponent$DatabaseDef$$anon$2.liftedTree1$1(DatabaseComponent.scala:237)
    at slick.backend.DatabaseComponent$DatabaseDef$$anon$2.run(DatabaseComponent.scala:237)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:427)

It would be good to provide a way to close connections cleanly once the ActorSystem terminates (you can register actions to run when this happens).
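
A minimal sketch of such a cleanup hook, assuming the Slick Database instance is available as db (a hypothetical handle here):

  // close the Slick database, and with it the connection pool, when the actor system shuts down
  system.registerOnTermination {
    db.close()
  }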

Wrong parameters list in MS SQL Server statements

In journal/Statements.scala a parameter is missing for MS SQL Server (3 '?' placeholders where 4 are expected). After correcting this, some other exceptions are thrown, so a simple workaround (no binding) may be applied:

trait MSSqlServerStatements extends GenericStatements {
  override def selectMessagesFor(persistenceId: String, fromSequenceNr: Long, toSequenceNr: Long, max: Long): List[PersistentRepr] = {
    SQL(s"SELECT TOP $max message FROM $schema$table WHERE persistence_id = '$persistenceId' AND (sequence_number >= $fromSequenceNr AND sequence_number <= $toSequenceNr) ORDER BY sequence_number")
      // .bind(max, persistenceId, fromSequenceNr, toSequenceNr)
      .map(rs => Journal.fromBytes(Base64.decodeBinary(rs.string(1))))
      .list()
      .apply()
  }
}

Snapshots table defines three primary keys

The snapshots table defines three primary keys, which is illegal:

  class Snapshot(_tableTag: Tag) extends Table[SnapshotRow](_tableTag, _schemaName = snapshotTableCfg.schemaName, _tableName = snapshotTableCfg.tableName) {
    def * = (persistenceId, sequenceNumber, created, snapshot) <> (SnapshotRow.tupled, SnapshotRow.unapply)

    val persistenceId: Rep[String] = column[String](snapshotTableCfg.columnNames.persistenceId, O.Length(255, varying = true), O.PrimaryKey)
    val sequenceNumber: Rep[Long] = column[Long](snapshotTableCfg.columnNames.sequenceNumber, O.PrimaryKey)
    val created: Rep[Long] = column[Long](snapshotTableCfg.columnNames.created)
    val snapshot: Rep[Array[Byte]] = column[Array[Byte]](snapshotTableCfg.columnNames.snapshot)
    val pk = primaryKey("snapshot_pk", (persistenceId, sequenceNumber))
  }

Essentially this is saying that the persistenceId column is the primary key, the sequenceNumber column is the primary key, and there is also a composite primary key of the two of them. Here's an example schema that Slick will generate for this:

create table "snapshot" (
  "persistence_id" VARCHAR(255) NOT NULL PRIMARY KEY,
  "sequence_number" BIGINT NOT NULL PRIMARY KEY,
  "created" BIGINT NOT NULL,
  "snapshot" BLOB NOT NULL
);
alter table "snapshot" add constraint "snapshot_pk" primary key("persistence_id","sequence_number");

No database will execute this successfully since there are three primary keys defined. The O.PrimaryKey option should be removed from the column definitions.
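
A minimal sketch of the corrected table definition, keeping only the composite key and dropping O.PrimaryKey from the individual columns (same names as above):

  class Snapshot(_tableTag: Tag) extends Table[SnapshotRow](_tableTag, _schemaName = snapshotTableCfg.schemaName, _tableName = snapshotTableCfg.tableName) {
    def * = (persistenceId, sequenceNumber, created, snapshot) <> (SnapshotRow.tupled, SnapshotRow.unapply)

    val persistenceId: Rep[String] = column[String](snapshotTableCfg.columnNames.persistenceId, O.Length(255, varying = true))
    val sequenceNumber: Rep[Long] = column[Long](snapshotTableCfg.columnNames.sequenceNumber)
    val created: Rep[Long] = column[Long](snapshotTableCfg.columnNames.created)
    val snapshot: Rep[Array[Byte]] = column[Array[Byte]](snapshotTableCfg.columnNames.snapshot)

    // the composite key alone defines the primary key
    val pk = primaryKey("snapshot_pk", (persistenceId, sequenceNumber))
  }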

akka-persistence-jdbc.tables.*.schemaName has no effect for postgres

I used the following script:

CREATE SCHEMA IF NOT EXISTS akka_persistence_jdbc;

DROP TABLE IF EXISTS akka_persistence_jdbc.journal;

CREATE TABLE IF NOT EXISTS akka_persistence_jdbc.journal (
  persistence_id VARCHAR(255) NOT NULL,
  sequence_number BIGINT NOT NULL,
  created BIGINT NOT NULL,
  tags VARCHAR(255) DEFAULT NULL,
  message BYTEA NOT NULL,
  PRIMARY KEY(persistence_id, sequence_number)
);

DROP TABLE IF EXISTS akka_persistence_jdbc.deleted_to;

CREATE TABLE IF NOT EXISTS akka_persistence_jdbc.deleted_to (
  persistence_id VARCHAR(255) NOT NULL,
  deleted_to BIGINT NOT NULL
);

DROP TABLE IF EXISTS akka_persistence_jdbc.snapshot;

CREATE TABLE IF NOT EXISTS akka_persistence_jdbc.snapshot (
  persistence_id VARCHAR(255) NOT NULL,
  sequence_number BIGINT NOT NULL,
  created BIGINT NOT NULL,
  snapshot BYTEA NOT NULL,
  PRIMARY KEY(persistence_id, sequence_number)
);

and specified tables.{journal/deletedTo/snapshot}.schemaName = "akka_persistence_jdbc".
In the log I got org.postgresql.util.PSQLException: ERROR: relation "deleted_to" does not exist.
So I decided to use the literal config from the README, which worked; then I changed the schemaName in application.conf back to "akka_persistence_jdbc" and it continued to work. After that I deleted the public.deleted_to table and got the above-mentioned error again.

ReadJournalDao is tied to ByteArrayDao

Hi,

I have some trouble implementing my own ReadJournal DAO. It defines the method eventsByTag with a return type that references akka.persistence.jdbc.dao.bytea.journal.JournalTables.JournalRow, which belongs to the byte-array-based default DAO implementation.
Hence, it is not possible to implement my own version based on a different row type.

From what I understood by searching through the code, this JournalRow value is used only to retrieve the "ordering" value of the journal row.

As a very ugly workaround, I can construct a fake bytea.JournalRow.
I was also thinking about forking JdbcReadJournal.

Build with scalikejdbc-2.2.4

ScalikeJDBC's new version is not binary compatible with some of the methods used in this plugin: scalikejdbc/scalikejdbc#391

Bumping the dependency from 2.2.2 to 2.2.4 and rebuilding should resolve this issue.
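
For reference, a minimal sketch of the corresponding sbt change (assuming the dependency is declared directly in build.sbt):

  libraryDependencies += "org.scalikejdbc" %% "scalikejdbc" % "2.2.4"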

Let me know if there are any problems with this - thanks!

No more jndi support?

In the 1.x versions there was the ability to share the app's connection pool with the persistence module. Why is it gone in 2.x? Is there another way of sharing the pool?

How to setup json converter

Hi,

I tried to setup json converter

akka-persistence-jdbc {

  jdbc-connection {
    journal-converter  = "akka.persistence.jdbc.serialization.journal.JsonJournalConverter"
    snapshot-converter = "akka.persistence.jdbc.serialization.journal.JsonJournalConverter"
  }

  slick {
    driver = "slick.driver.PostgresDriver"
    db {
      host = "docker"
      port = "5432"
      name = "name"

      url = "jdbc:postgresql://"${akka-persistence-jdbc.slick.db.host}":"${akka-persistence-jdbc.slick.db.port}"/"${akka-persistence-jdbc.slick.db.name}
      user = "name"
      password = "pass"
      driver = "org.postgresql.Driver"
      keepAliveConnection = on
      numThreads = 1
      queueSize = 1000
    }
  }
}

I also tried moving it to the root, but nothing changed.

However, it is still being serialized to bytes. Where should I put the configuration?

thanks.

Offset in ReadJournal eventsByTag query is actually sequenceNr

The README mentions:

You can retrieve a subset of all events by specifying offset, or use 0L to retrieve all events with a given tag. The offset corresponds to an ordered sequence number for the specific tag. Note that the corresponding offset of each event is provided in the EventEnvelope, which makes it possible to resume the stream at a later point from a given offset.

However, what is actually used for the offset field of the EventEnvelope is the persistenceId's sequenceNr, as can be seen in the ReadJournal implementation:

  override def currentEventsByTag(tag: String, offset: Long): Source[EventEnvelope, NotUsed] =
    journalDao.eventsByTag(tag, offset)
      .via(serializationFacade.deserializeRepr)
      .mapAsync(1)(deserializedRepr => Future.fromTry(deserializedRepr))
      .map(repr => EventEnvelope(repr.sequenceNr, repr.persistenceId, repr.sequenceNr, repr.payload))

Looks like either the documentation or the implementation is wrong, or I may be missing something quite terribly here.

duplicate key exceptions

Sometimes I see lots of exceptions came from akka-persistence-jdbc:

 ERROR s.StatementExecutor$$anon$1    SQL execution failed (Reason: ERROR: duplicate key value violates unique constraint "akka_snapshot_pkey"
  Detail: Key (persistence_id, sequence_nr)=(/user/sharding/GroupProcessorCoordinator/singleton/coordinator, 1880) already exists.):

   INSERT INTO public.snapshot (persistence_id, sequence_nr, created, snapshot) VALUES ('/user/sharding/GroupProcessorCoordinator/singleton/coordinator', 1880, 1439705139409, 'BAAAAAEAAACs7QAFc3IANGFra2EuY29udHJpYi5wYXR0ZXJuLlNoYXJkQ29vcmRpbmF0b3IkSW50ZXJuYWwkU3RhdGUAAAAAAAAA... (2360)')

 ERROR s.StatementExecutor$$anon$1    SQL execution failed (Reason: ERROR: duplicate key value violates unique constraint "akka_snapshot_pkey"
  Detail: Key (persistence_id, sequence_nr)=(/user/sharding/GroupDialogCoordinator/singleton/coordinator, 129) already exists.):

   INSERT INTO public.snapshot (persistence_id, sequence_nr, created, snapshot) VALUES ('/user/sharding/GroupDialogCoordinator/singleton/coordinator', 129, 1439705139418, 'BAAAAAEAAACs7QAFc3IANGFra2EuY29udHJpYi5wYXR0ZXJuLlNoYXJkQ29vcmRpbmF0b3IkSW50ZXJuYWwkU3RhdGUAAAAAAAAA... (1652)')

 ERROR s.StatementExecutor$$anon$1    SQL execution failed (Reason: ERROR: duplicate key value violates unique constraint "akka_snapshot_pkey"
  Detail: Key (persistence_id, sequence_nr)=(/user/sharding/PrivateDialogCoordinator/singleton/coordinator, 94) already exists.):

   INSERT INTO public.snapshot (persistence_id, sequence_nr, created, snapshot) VALUES ('/user/sharding/PrivateDialogCoordinator/singleton/coordinator', 94, 1439705139448, 'BAAAAAEAAACs7QAFc3IANGFra2EuY29udHJpYi5wYXR0ZXJuLlNoYXJkQ29vcmRpbmF0b3IkSW50ZXJuYWwkU3RhdGUAAAAAAAAA... (1412)')

 ERROR s.StatementExecutor$$anon$1    SQL execution failed (Reason: ERROR: duplicate key value violates unique constraint "akka_snapshot_pkey"
  Detail: Key (persistence_id, sequence_nr)=(/user/sharding/SeqUpdatesManagerCoordinator/singleton/coordinator, 3614) already exists.):

After tons of such errors the connection pool gets stuck:

java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2@e4aa79f rejected from java.util.concurrent.ThreadPoolExecutor@312aac97[Running, pool size = 20, active threads = 20, queued tasks = 997, completed tasks = 381983]

I use akka 2.3.12, akka-persistence-jdbc 1.1.7 and PostgreSQL 9.4.0.

Missing Oracle Support

Hi,

after being back from vacation, I'd like to continue our discussion about Oracle support.
I originally encountered issues consuming events from read-journals.

After receiving eventsByTag the EventEnvelopes don't contain the expected data.
The "offset" value of the last consumed event is stored and used for later queries but contains an invalid value. Somehow I still can't get my head around JdbcReadJournal.scala#L116.
It seems to me that the first parameter should not be the ordering but the total offset of the row within the query (the offset parameter plus the position within the result), like it was before 0d02096.

I don't really understand why this isn't an issue in other DBs, though.
Could you please share your thoughts on this?

I'd like to help if I can, but I still have way too many questions...

Regards
Jörn

Cryptic error message

I've recently updated to 2.6.1 and I'm getting stack traces like this:

2016-07-26 12:29:33,108 [default-akka.actor.default-dispatcher-7] [ERROR] [com.optrak.opkakka.ddd.persistence.InitialisingPersistor] Failed to persist event type [com.optrak.opkakka.ddd.test.individual.Orders$QuantityUpdated] with sequence number [4] for persistenceId [382a7739-e7da-4d85-b453-509fdfba577f-orders-1].
scala.MatchError: Vector(Success(JournalRow(-9223372036854775808,false,382a7739-e7da-4d85-b453-509fdfba577f-orders-1,4,[B@36fb4165,None))) (of class scala.collection.immutable.Vector)
    at akka.persistence.jdbc.util.TrySeq$.sequence(TrySeq.scala:31)
    at akka.persistence.jdbc.serialization.FlowPersistentReprSerializer$class.akka$persistence$jdbc$serialization$FlowPersistentReprSerializer$class$$$anonfun$3(PersistentReprSerializer.scala:52)
    at akka.persistence.jdbc.serialization.FlowPersistentReprSerializer$class$$Lambda$56/1661213932.apply(Unknown Source)
    at akka.stream.impl.fusing.Map.onPush(Ops.scala:29)
    at akka.stream.impl.fusing.Map.onPush(Ops.scala:28)
    at akka.stream.stage.AbstractStage$PushPullGraphLogic$$anon$1.onPush(Stage.scala:55)
    at akka.stream.impl.fusing.GraphInterpreter.processElement$1(GraphInterpreter.scala:590)
    at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:601)
    at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:542)
    at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:471)
    at akka.stream.impl.fusing.GraphInterpreterShell.init(ActorGraphInterpreter.scala:381)
    at akka.stream.impl.fusing.ActorGraphInterpreter.tryInit(ActorGraphInterpreter.scala:538)
    at akka.stream.impl.fusing.ActorGraphInterpreter.preStart(ActorGraphInterpreter.scala:586)
    at akka.actor.Actor$class.aroundPreStart(Actor.scala:489)
    at akka.stream.impl.fusing.ActorGraphInterpreter.aroundPreStart(ActorGraphInterpreter.scala:529)
    at akka.actor.ActorCell.create(ActorCell.scala:590)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
    at akka.dispatch.Mailbox.run(Mailbox.scala:223)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 

The code doesn't look right to me: you are matching a Seq against List patterns here https://github.com/dnvriend/akka-persistence-jdbc/blob/master/src/main/scala/akka/persistence/jdbc/util/TrySeq.scala#L22
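
A small, self-contained illustration of the mismatch (not the plugin's actual code): the cons pattern (::) only matches a List, so the Vector that Slick returns falls through, and without a catch-all case the match ends in a MatchError.

  // the cons pattern only matches List; a Vector ends up in the catch-all (or in a MatchError without it)
  def describe(xs: Seq[Int]): String = xs match {
    case head :: _ => s"list starting with $head"
    case _         => "not a List (e.g. a Vector)"
  }

  describe(List(1, 2))   // "list starting with 1"
  describe(Vector(1, 2)) // "not a List (e.g. a Vector)"

  // the generic +: extractor matches any Seq, including Vector
  def describeSeq(xs: Seq[Int]): String = xs match {
    case head +: _ => s"seq starting with $head"
    case _         => "empty"
  }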

Use compiled queries

The Slick query compiler is very slow in comparison with its compiled queries. Using compiled queries gives a >10x performance boost.
I'll do a pull request.
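
For context, a minimal sketch of what a Slick compiled query looks like (journalTable is a hypothetical TableQuery standing in for the plugin's journal table; the query itself is illustrative):

  import slick.driver.PostgresDriver.api._

  // compiled once and reused, instead of running the query compiler on every call
  val messagesForPersistenceId = Compiled {
    (persistenceId: Rep[String], fromSeqNr: Rep[Long], toSeqNr: Rep[Long]) =>
      journalTable
        .filter(_.persistenceId === persistenceId)
        .filter(row => row.sequenceNumber >= fromSeqNr && row.sequenceNumber <= toSeqNr)
        .sortBy(_.sequenceNumber.asc)
  }

  // usage: db.run(messagesForPersistenceId(("some-pid", 0L, 100L)).result)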

Ability to use custom connection pool

We are using akka-persistence-jdbc as the default Akka persistence layer, but our app also uses Slick with HikariCP for some business logic, so the app uses two pools at the same time. It would be useful to be able to make akka-persistence-jdbc use an external connection pool.
We can contribute it, and I am bringing the issue here to discuss what the API should look like. Do you have any advice?

Event adapters not applied on `eventsByTag`

Hi! I'm migrating our database from Cassandra to Postgres, and with that, our connector to akka-persistence-jdbc. The problem I am having is that it seems that, unlike with the Cassandra connector, the event adapters are not being called when reading tagged events from JdbcReadJournal. My configuration is as follows (omitting the slick configuration):

jdbc-journal {
  slick = ${slick}

  recovery-event-timeout = 60m

  event-adapters {
    adapter = "com.x.Adapter"
  }

  event-adapter-bindings {
    "com.x.PersistedEvent" = adapter
  }
}

jdbc-snapshot-store {
  slick = ${slick}
}

jdbc-read-journal {
  refresh-interval = "100ms"
  max-buffer-size = "500"
  batch-size = "250"
  slick = ${slick}
}

The adapter is being called correctly when writing events, but not when reading them from JdbcReadJournal.

Am I misunderstanding something, missing something from my configuration, or are the event adapters not supported when reading from the JdbcReadJournal with eventsByTag?

I am using version 2.6.5-RC2

Is migration from 2.2.x to 2.6.x possible?

I have an existing journal in Postgres created using version 2.2.8 and want to use the most recent version now. I've migrated the code and database schema to reflect the changes, but reading the journal silently fails, with my suspicion being deserialization of the message field. The same code and schema with an empty journal work as expected, persist new events and properly read them.

Is it possible to migrate that data to be compliant with the most recent version of the library? My intuition is that it should be possible to implement a DAO that can handle events in both formats, but I wanted to make sure it is possible first. I'd appreciate any tips and hints :)

eventsByTag returns events for wrong tag

Hi everyone,

When looking for events with tag "User" with currentEventsByTag, this also returns events with tag "UserEmail". In my opinion only events with tag "User" should be returned. I currently experience the problem when using the default DAO configuration and a Postgres backend.

When searching for the cause of this problem I found that in package akka.persistence.jdbc.dao.bytea.readjournal in trait BaseByteArrayReadJournalDao events are filtered by s"%$tag%". Might this be a bug?
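
A small illustration of why a plain %tag% pattern over a comma-separated tags column matches substrings (plain Scala standing in for the SQL LIKE; not the plugin's code):

  val tag  = "User"
  val rows = List("User", "UserEmail", "User,Admin")

  // the substring check behaves like LIKE '%User%' and also matches "UserEmail"
  rows.filter(_.contains(tag))             // List(User, UserEmail, User,Admin)

  // matching an exact element of the separated list avoids the false positive
  rows.filter(_.split(",").contains(tag))  // List(User, User,Admin)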

With kind regards,
Felix

Compatibility with Akka 2.4.0-xx

I am working on a rewrite of the plugin to support Akka 2.4.0-xx. The new API allows for some smarter queries to be made, and I assume the plugin will have a better memory and performance profile; both are welcome :)

Getting 'Table "journal" not found' error with H2

I am new to akka-persistence and am trying to use the H2 database for persistence. I referred to the README and added the application.conf file as described, but I am getting the following error when I try to start the application. Do I need to create the tables explicitly? I saw the h2-schema.sql file, but where should I actually keep it in the project?

org.h2.jdbc.JdbcSQLException: Table "journal" not found; SQL statement:
select "sequence_number" from "journal" where "persistence_id" = ? order by "sequence_number" desc limit 1 [42102-192]
    at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
    at org.h2.message.DbException.get(DbException.java:179)

Leaking connections

After migrating most of the business logic to akka-persistence, with akka-persistence-jdbc as the journal and snapshot-store plugin, the application goes down because the pool runs out of connections.

Postgres stats show lots of long-running queries. But in fact these are connections that were never returned to the pool: when I run these queries manually I get a result in ~1 millisecond.

I am using Slick 3.1.1, HikariCP 2.4.5 and PostgreSQL 9.4.
I share the Database with akka-persistence-jdbc via JNDI.

Support the new query APIs from Akka 2.4.12

Akka 2.4.12 introduced a shared akka.persistence.query.Offset type to replace the previously used long offset, and two new queries using that Offset type: CurrentEventsByTagQuery2 and EventsByTagQuery2 (the awkward naming is because we intend to remove the old interfaces in the next major Akka version and replace them with the new ones, but cannot do so right away because of compatibility).

Pluggable serialization proxy

This is probably on your radar already, but in my great quest for JSON serialization (which I think I'm abandoning) I looked into using the brand new varchar stuff. While it looks like this does indeed decouple things at the storage layer from the choice of database-level representation, it seems that all the plumbing up until that point still expects a byte stream. So somehow, PersistentRepr must still be serialized to a byte array. Even if that serialization is really UTF-8 encoded JSON as raw bytes, this seems to not quite be ideal, compared to letting the data be converted just once from in-memory representation to String.

Especially since I'm in the process of going the Protobuf route instead, this may be a request I don't actually need. But just for the sake of ticketing it, I thought I'd write it up.

In case it's helpful, my situation is that I'm using Akka Persistence to back up a RESTful API, so there are already JSON converters defined for all my types. While I realize protobuf will have much higher performance, the bandwidth to and from the persistent store isn't expected to be a bottleneck for a very long time, if ever. So the ability to only maintain one serialization layer, to not have to learn Protobuf, and to be able to directly inspect the database was really desirable.

Connection leak

Howdy,

I'm getting the below stack trace:

[warn] 17:43:57.131 ProxyLeakTask  - Connection leak detection triggered for conn147: url=jdbc:h2:file:./var/project/h2-journal/h2:journal user=ROOT, stack trace follows
java.lang.Exception: Apparent connection leak detected
    at slick.jdbc.hikaricp.HikariCPJdbcDataSource.createConnection(HikariCPJdbcDataSource.scala:12)
    at slick.jdbc.JdbcBackend$BaseSession.conn$lzycompute(JdbcBackend.scala:415)
    at slick.jdbc.JdbcBackend$BaseSession.conn(JdbcBackend.scala:414)
    at slick.jdbc.JdbcBackend$SessionDef$class.prepareStatement(JdbcBackend.scala:297)
    at slick.jdbc.JdbcBackend$BaseSession.prepareStatement(JdbcBackend.scala:407)
    at slick.jdbc.StatementInvoker.results(StatementInvoker.scala:33)
    at slick.jdbc.StatementInvoker.iteratorTo(StatementInvoker.scala:22)
    at slick.jdbc.StreamingInvokerAction$class.emitStream(StreamingInvokerAction.scala:28)
    at slick.driver.JdbcActionComponent$QueryActionExtensionMethodsImpl$$anon$1.emitStream(JdbcActionComponent.scala:218)
    at slick.driver.JdbcActionComponent$QueryActionExtensionMethodsImpl$$anon$1.emitStream(JdbcActionComponent.scala:218)
    at slick.backend.DatabaseComponent$DatabaseDef$$anon$3.run(DatabaseComponent.scala:285)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

I'm using Slick 3.1.1, HikariCP 2.4.7, akka 2.4.9, h2 1.4.192.

Any advice? I don't think I'm doing anything unusual, and I don't touch slick or HikariCP directly.

Thanks!

Unable to differentiate between persistence failures and serialization issues

In our application we use a Protobuf serializer for serializing domain events. What we have observed is that the plugin/Akka Persistence invokes onPersistFailure() instead of onPersistRejected() for serialization issues. I believe this is because of the following code in SerializationProxy.serializeAtomicWrite():
if (xs.exists(_.isFailure)) xs.filter(_.isFailure).head.asInstanceOf[Try[Iterable[SerializationResult]]] // SI-8566

Is this something that can be fixed, or is there an alternative way to differentiate the two failures?
We are using version 2.2.19 of this plugin and version 2.4.4 of Akka Persistence.

Efficient querying of many persistence IDs

I'm trying to optimize the read side of a system where I'm expecting to have on the order of, say, 1000 journals active at any given time. I'd like to have <1s latency between the write and read sides of my system. I've read a bit of the source of the Persistence Query implementation here, and correct me if I'm wrong, but if I set the polling interval to 1s and I have 1000 eventsByPersistenceId Sources open, I'm going to be generating 1000 DB queries per second. This doesn't seem scalable.

Do I have this correct? If so, is there a better way to accomplish what I'm going for here?

SQL Error using sharding region and timeout

I have a bunch of actors under a sharding region. They do persistence and snapshots. When an actor reaches an inactivity timeout it gets terminated. However, when the sharding region wakes it up again, I get an error like this:

[error] 3157 [example-akka.actor.default-dispatcher-18] ERROR scalikejdbc.StatementExecutor$$anon$1 - SQL execution failed (Reason: Duplicate entry 'SnapshotActor-4-11' for key 'PRIMARY'):
[error] 
[error]    INSERT INTO snapshot (persistence_id, sequence_nr, created, snapshot) VALUES ('SnapshotActor-4', 11, 1430927955209, 'BAAAAAEAAACs7QAFc3IAL3NhbXBsZS5wZXJzaXN0ZW5jZS5TbmFwc2hvdEV4YW1wbGUkRXhhbXBsZVN0YXRlhG/BFNOFpT4CAAFM... (492)')

[error]

Since it's quite difficult to reproduce, I prepared an artificial example at https://github.com/giampaolotrapasso/akka-sharding-jdbc-persistence-problem. The example is not something I'd actually code, but it's useful to reproduce the problem I see in my project (which I cannot share).

setting default schema

Hi,

We're using Oracle and all the tables need to be in a specific schema 'x'.

select * from journal -- does not work
select * from x.journal -- works

When using H2 (configured in the same Oracle-like way using Liquibase) the following H2 connection URL worked:

jdbc:h2:ncstest;MODE=ORACLE;schema=x

I do not know of a similar setting for an Oracle connection URL.

Deleting journal messages won't work well over time

A client's system was getting an increasing number of timed-out circuit breaker exceptions when it came to deleting past messages in one of the actors (once a snapshot had been created), even without any load on the system. It turned out that there were about 600,000 soft-deleted messages in the database.

The query to delete those messages does not filter out messages already marked as deleted, which means that there'll be an update on all messages (which turns out to take some time).

So I think it would make sense to add this filter on the deleted field to avoid this kind of situation.
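
A minimal sketch of the proposed filter in Slick terms, assuming a journal TableQuery (journalTable, hypothetical here) with a deleted Boolean column as described above; only rows not yet soft-deleted would be updated:

  // mark messages as deleted, skipping rows that are already soft-deleted
  def markAsDeleted(persistenceId: String, toSequenceNr: Long) =
    journalTable
      .filter(_.persistenceId === persistenceId)
      .filter(_.sequenceNumber <= toSequenceNr)
      .filter(_.deleted === false) // the proposed extra filter
      .map(_.deleted)
      .update(true)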

I haven't seen any way to not use soft deletion - I suppose a custom DAO would be the right answer to that?

Thanks!

ClassCastException while replaying events

It's me again. :)

This time I'm having an issue after storing some events in the DB and restarting the application, thus causing the actor to update its state with replayed events.

The startup seems to go well:

[DEBUG] [02/23/2016 19:03:59.467] [run-main-0] [EventStream(akka://default)] Default Loggers started
[DEBUG] [02/23/2016 19:03:59.543] [default-akka.actor.default-dispatcher-3] [PersistenceQuery(akka://default)] Create plugin: jdbc-read-journal akka.persistence.jdbc.query.journal.JdbcReadJournalProvider
[DEBUG] [02/23/2016 19:03:59.566] [default-akka.actor.default-dispatcher-4] [akka.persistence.Persistence(akka://default)] Create plugin: jdbc-journal akka.persistence.jdbc.journal.JdbcAsyncWriteJournal
[DEBUG] [02/23/2016 19:03:59.657] [default-akka.actor.default-dispatcher-4] [akka.persistence.Persistence(akka://default)] Create plugin: jdbc-snapshot-store akka.persistence.jdbc.snapshot.JdbcSnapshotStore
[DEBUG] [02/23/2016 19:03:59.690] [default-akka.actor.default-dispatcher-3] [AkkaPersistenceConfigImpl(akka://default)] 
 ====================================
 Akka Persistence JDBC Configuration:
 ====================================
 SlickConfiguration(slick.driver.PostgresDriver,None)
 ====================================
 PersistenceQueryConfiguration(,)
 ====================================
 JournalTableConfiguration(journal,None,JournalTableColumnNames(persistence_id,sequence_number,created,tags,message))
 ====================================
 DeletedToTableConfiguration(deleted_to,None,DeletedToTableColumnNames(persistence_id,deleted_to))
 ====================================
 SnapshotTableConfiguration(snapshot,None,SnapshotTableColumnNames(persistence_id,sequence_number,created,snapshot))
 ====================================

[DEBUG] [02/23/2016 19:03:59.734] [run-main-0] [AkkaSSLConfig(akka://default)] Initializing AkkaSSLConfig extension...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[DEBUG] [02/23/2016 19:03:59.980] [run-main-0] [AkkaSSLConfig(akka://default)] buildHostnameVerifier: created hostname verifier: com.typesafe.sslconfig.ssl.DefaultHostnameVerifier@5c271025
[DEBUG] [02/23/2016 19:04:01.976] [default-akka.actor.default-dispatcher-4] [akka://default/system/IO-TCP/selectors/$a/0] Successfully bound to /0:0:0:0:0:0:0:0:9000
[DEBUG] [02/23/2016 19:04:02.417] [default-akka.actor.default-dispatcher-4] [akka.serialization.Serialization(akka://default)] Using serializer[akka.persistence.serialization.MessageSerializer] for message [akka.persistence.PersistentRepr]

Then the problem appears:

[ERROR] [02/23/2016 19:04:02.492] [default-akka.actor.default-dispatcher-3] [akka://default/user/$a] Persistence failure when replaying events for persistenceId [Counter]. Last known sequence number [19]
java.lang.ClassCastException: scala.runtime.BoxedUnit cannot be cast to akka.Done
    at akka.persistence.jdbc.journal.SlickAsyncWriteJournal$$anonfun$asyncReplayMessages$2.apply(SlickAsyncWriteJournal.scala:88)
    at scala.util.Success$$anonfun$map$1.apply(Try.scala:236)
    at scala.util.Try$.apply(Try.scala:191)
    at scala.util.Success.map(Try.scala:236)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
    at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
    at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

The write actor:

object CounterActor {
  case class Increment(value: Int) extends Command
  case class Incremented(value: Int) extends Event

  case class IncrementState(value: Int = 0) {
    def updated(event: Incremented): IncrementState = copy(event.value + value)
  }
}


class CounterActor extends PersistentActor {
  private var state = IncrementState()

  private def updateState(event: Incremented): Unit = {
    state = state.updated(event)
  }

  override def receiveRecover: Receive = {
    case event:Incremented => updateState(event)
    case SnapshotOffer(_, snapshot: IncrementState) => state = snapshot
  }

  override def receiveCommand: Receive = {
    case Increment(value) =>
      println("incrementing")
      persist(Incremented(value)) {event =>
        println("incremented")
        state = state.updated(event)
      }

    case "snap" => saveSnapshot(state)
    case "print" => println
  }


  override def persistenceId: String = "Counter"
}

The read actor:

object CounterView {
  case object Get
}

class CounterView(val materializer: ActorMaterializer) extends Actor {
  val readJournal = PersistenceQuery(context.system).readJournalFor[JdbcReadJournal](JdbcReadJournal.Identifier)

  val source: Source[EventEnvelope, NotUsed] = readJournal.currentEventsByPersistenceId("Counter", 0, Long.MaxValue)

  implicit val ec = context.dispatcher

  override def receive: Receive = {
    case CounterView.Get =>
      getAllIncrements.pipeTo(sender())
  }

  def getAllIncrements: Future[List[Incremented]] = {
    source.runFold(List.empty[Incremented])((result: List[Incremented], envelope: EventEnvelope) => {
      envelope match {
        case EventEnvelope(offset, persistenceId, sequenceNr, value: Incremented) =>
          println(offset, persistenceId, sequenceNr, value)
          value :: result
      }
    })(materializer)
  }
}

And application.conf:

akka {
  loglevel = DEBUG

  persistence {
    journal {
      plugin = "jdbc-journal"
    }

    snapshot-store {
      plugin = "jdbc-snapshot-store"
    }
  }
}


akka-persistence-jdbc {
  slick {
    driver = "slick.driver.PostgresDriver"
    db {
      host = "localhost"
      host = ${?POSTGRES_HOST}
      port = "5432"
      port = ${?POSTGRES_PORT}
      name = "tinki"

      url = "jdbc:postgresql://"${akka-persistence-jdbc.slick.db.host}":"${akka-persistence-jdbc.slick.db.port}"/"${akka-persistence-jdbc.slick.db.name}
      user = "postgres"
      password = "apass"
      driver = "org.postgresql.Driver"
      keepAliveConnection = off
      numThreads = 2
      queueSize = 100
      connectionTestQuery="select 1"
    }
  }

  tables {
    journal {
      tableName = "journal"
      schemaName = ""
      columnNames {
        persistenceId = "persistence_id"
        sequenceNumber = "sequence_number"
        created = "created"
        tags = "tags"
        message = "message"
      }
    }

    deletedTo {
      tableName = "deleted_to"
      schemaName = ""
      columnNames = {
        persistenceId = "persistence_id"
        deletedTo = "deleted_to"
      }
    }

    snapshot {
      tableName = "snapshot"
      schemaName = ""
      columnNames {
        persistenceId = "persistence_id"
        sequenceNumber = "sequence_number"
        created = "created"
        snapshot = "snapshot"
      }
    }
  }

  query {
    separator = ","
  }
}

Am I still doing something wrong here?

Option to auto create tables

It would be very convenient for users in development and testing (and perhaps even in production) if akka-persistence-jdbc offered the option to auto-create the tables if they don't exist. akka-persistence-cassandra offers this, for example:

https://github.com/akka/akka-persistence-cassandra/blob/master/src/main/resources/reference.conf#L52

Slick can automagically generate and execute database-specific DDL, so this shouldn't be too hard. I'd imagine it would be sufficient to simply check if each table exists first, and if it doesn't, execute the create. To deal with cluster scenarios where two nodes create the tables at once: if the creation fails, you could then check whether the table exists after the failure, and if it does, assume the failure was because another node created it at the same time and ignore it; otherwise rethrow.
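
A minimal sketch of how the DDL step could look with Slick (journalTable, snapshotTable and db are hypothetical stand-ins for the plugin's table definitions and database; the handling of the "another node created it first" race is only sketched):

  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.util.{Failure, Success}
  import slick.driver.PostgresDriver.api._

  // Slick can emit the CREATE statements for the mapped tables
  val ddl = journalTable.schema ++ snapshotTable.schema

  db.run(ddl.create).onComplete {
    case Success(_) => println("tables created")
    case Failure(t) =>
      // possibly a race with another node creating the tables at the same time;
      // a real implementation would re-check that the tables exist before rethrowing
      println(s"table creation failed: ${t.getMessage}")
  }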

Adding a `test-query` configuration parameter?

Hi!

I'm using akka-persistence-jdbc with postgres-9.1.901.jdbc4 (PostgreSQL 9.3.9, Ubuntu 14.04).

The problem I'm getting while trying to use Persistence Query with the read journal is java.sql.SQLException: JDBC4 Connection.isValid() method not supported, connection test query must be configured.

From http://engineering.covata.com/programming/2014/07/11/akka-persistence-sql/ and https://github.com/brettwooldridge/HikariCP#initialization I gather that I should set up an equivalent of akka.persistence.sql.common.db-connection.connectionTestQuery = "SELECT 1" for it to initialize properly.
Is there a way to set that here somewhere?

My application.conf:

akka {
  loglevel = DEBUG

  persistence {
    journal {
      plugin = "jdbc-journal"
    }

    snapshot-store {
      plugin = "jdbc-snapshot-store"
    }
  }
}


akka-persistence-jdbc {
  slick {
    driver = "slick.driver.PostgresDriver"
    db {
      host = "localhost"
      host = ${?POSTGRES_HOST}
      port = "5432"
      port = ${?POSTGRES_PORT}
      name = "postgres"

      url = "jdbc:postgresql://"${akka-persistence-jdbc.slick.db.host}":"${akka-persistence-jdbc.slick.db.port}"/"${akka-persistence-jdbc.slick.db.name}
      user = "postgres"
      password = "apass"
      driver = "org.postgresql.Driver"
      keepAliveConnection = on
      numThreads = 2
      queueSize = 100
    }
  }

  tables {
    journal {
      tableName = "journal"
      schemaName = ""
      columnNames {
        persistenceId = "persistence_id"
        sequenceNumber = "sequence_number"
        created = "created"
        tags = "tags"
        message = "message"
      }
    }

    deletedTo {
      tableName = "deleted_to"
      schemaName = ""
      columnNames = {
        persistenceId = "persistence_id"
        deletedTo = "deleted_to"
      }
    }

    snapshot {
      tableName = "snapshot"
      schemaName = ""
      columnNames {
        persistenceId = "persistence_id"
        sequenceNumber = "sequence_number"
        created = "created"
        snapshot = "snapshot"
      }
    }
  }

  query {
    separator = ","
  }
}

I'm aware that there is a 90% probability that I am doing something wrong, and I would appreciate having it pointed out what that might be.
