petabridge / akka.persistence.azure

Azure-powered Akka.Persistence for Akka.NET actors

License: Apache License 2.0

Batchfile 0.06% F# 3.84% PowerShell 5.02% Shell 1.14% C# 89.94%
akkadotnet akka-persistence azure azure-table-storage azure-blob-storage


akka.persistence.azure's Issues

Getting Stack Traces in Logs from Azure Persistence Failures

Something new that has come up: I am now seeing Azure persistence plugin stack traces in the console when an element fails to load from blob storage. I don't know whether this is a misconfiguration on my end or normal Azure log noise:

[ERROR][10/24/2023 17:33:17.452Z][Thread 0146][akka.tcp://[email protected]:9223/system/sharding/sessions/10/mysession-dffffd5a-2917-4286-8650-e6d2e028dbd8] Persistence failure when replaying events for persistenceId [mysession-dffffd5a-2917-4286-8650-e6d2e028dbd8]. Last known sequence number [0]
Cause: Azure.RequestFailedException: The specified blob does not exist.
RequestId:<guid>
Time:2023-10-24T17:33:17.4521243Z
Status: 404 (The specified blob does not exist.)
ErrorCode: BlobNotFound
Headers:
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: <guid>
x-ms-client-request-id: <guid>
x-ms-version: 2021-12-02
x-ms-error-code: BlobNotFound
Date: Tue, 24 Oct 2023 17:33:17 GMT
Content-Length: 215
Content-Type: application/xml
at Azure.Storage.Blobs.BlobRestClient.DownloadAsync(String snapshot, String versionId, Nullable`1 timeout, String range, String leaseId, Nullable`1 rangeGetContentMD5, Nullable`1 rangeGetContentCRC64, String encryptionKey, String encryptionKeySha256, Nullable`1 encryptionAlgorithm, Nullable`1 ifModifiedSince, Nullable`1 ifUnmodifiedSince, String ifMatch, String ifNoneMatch, String ifTags, CancellationToken cancellationToken)
at Azure.Storage.Blobs.Specialized.BlobBaseClient.StartDownloadAsync(HttpRange range, BlobRequestConditions conditions, DownloadTransferValidationOptions validationOptions, Int64 startOffset, Boolean async, CancellationToken cancellationToken)
at Azure.Storage.Blobs.Specialized.BlobBaseClient.DownloadStreamingInternal(HttpRange range, BlobRequestConditions conditions, DownloadTransferValidationOptions transferValidationOverride, IProgress`1 progressHandler, String operationName, Boolean async, CancellationToken cancellationToken)
at Azure.Storage.Blobs.Specialized.BlobBaseClient.DownloadStreamingDirect(HttpRange range, BlobRequestConditions conditions, DownloadTransferValidationOptions transferValidationOverride, IProgress`1 progressHandler, String operationName, Boolean async, CancellationToken cancellationToken)
at Azure.Storage.Blobs.Specialized.BlobBaseClient.DownloadInternal(HttpRange range, BlobRequestConditions conditions, Boolean rangeGetContentHash, Boolean async, CancellationToken cancellationToken)
at Azure.Storage.Blobs.Specialized.BlobBaseClient.DownloadAsync(HttpRange range, BlobRequestConditions conditions, Boolean rangeGetContentHash, CancellationToken cancellationToken)
at Azure.Storage.Blobs.Specialized.BlobBaseClient.DownloadAsync(CancellationToken cancellationToken)
at Akka.Persistence.Azure.Snapshot.AzureBlobSnapshotStore.LoadAsync(String persistenceId, SnapshotSelectionCriteria criteria)
at Akka.Util.Internal.AtomicState.CallThrough[T](Func`1 task)
at Akka.Util.Internal.AtomicState.CallThrough[T](Func`1 task)

Using development connection fails when attempting to check for Azure storage

Getting this error when the connection string is set to the development connection and nothing is set up to run on Azure:

akka.persistence.snapshot-store.tester#1153123657]: Akka.Actor.ActorInitializationException: Exception during creation
---> System.AggregateException: One or more errors occurred. (Object reference not set to an instance of an object.)
---> System.NullReferenceException: Object reference not set to an instance of an object
at Akka.Persistence.Azure.Snapshot.AzureBlobSnapshotStore.InitCloudStorage(Int32 remainingTries)

It might be that this line needs to use the null-conditional operator in case nothing is set up:

            if (credentialSetup?.HasValue)
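For illustration, a minimal sketch of that guard, assuming credentialSetup is an Akka.Util Option that may itself be null; the class and method here are hypothetical, not the plugin's actual code:

using Akka.Util;

public static class CredentialGuard
{
    // `?.HasValue` yields a bool?, so compare against true: this covers both
    // a null reference and an empty Option in a single check.
    public static bool IsConfigured<T>(Option<T> credentialSetup) =>
        credentialSetup?.HasValue == true;
}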

Add default table names

Need to add default table and blob container names for table and snapshot storage, respectively.

Upgrade to .NET 8 is breaking recovery

Today we upgraded our service, which uses Hyperion serialization, to .NET 8. We are using Akka.Persistence.Azure, and suddenly our state is broken.

We use Akka.NET 1.5.13, but I also tried 1.5.14 and 1.5.15. The errors we get (caused solely by upgrading to .NET 8 with Hyperion serialization) are:

Failed to deserialize instance of type . Failed to deserialize object of type [MyCompany.SystemStateService.Core.Actors.StaticStateActor+NewStateMessage] from the stream.
Cause: Failed to deserialize object of type [System.Text.Json.JsonElement] from the stream.
Cause: Failed to deserialize object of type [System.Text.Json.JsonDocument] from the stream.
Cause: Unable to cast object of type 'System.Boolean' to type 'MetadataDb'.
Failed to deserialize object of type [MyCompany.SystemStateService.Core.Actors.StaticStateActor+NewStateMessage] from the stream.
Cause: Failed to deserialize object of type [System.Text.Json.JsonElement] from the stream.
Cause: Failed to deserialize object of type [System.Text.Json.JsonDocument] from the stream.
Cause: Unable to cast object of type 'System.Boolean' to type 'MetadataDb'.
Failed to deserialize object of type [System.Text.Json.JsonElement] from the stream.
Cause: Failed to deserialize object of type [System.Text.Json.JsonDocument] from the stream.
Cause: Unable to cast object of type 'System.Boolean' to type 'MetadataDb'.
Failed to deserialize object of type [System.Text.Json.JsonDocument] from the stream.
Cause: Unable to cast object of type 'System.Boolean' to type 'MetadataDb'.
Unable to cast object of type 'System.Boolean' to type 'MetadataDb'.


System.Runtime.Serialization.SerializationException:
   at Akka.Serialization.HyperionSerializer.FromBinary (Akka.Serialization.Hyperion, Version=1.5.14.0, Culture=neutral, PublicKeyToken=null)
   at Akka.Serialization.Serialization.Deserialize (Akka, Version=1.5.14.0, Culture=neutral, PublicKeyToken=null)
   at Akka.Persistence.Serialization.PersistenceMessageSerializer.GetPayload (Akka.Persistence, Version=1.5.14.0, Culture=neutral, PublicKeyToken=null)
   at Akka.Persistence.Serialization.PersistenceMessageSerializer.GetPersistentRepresentation (Akka.Persistence, Version=1.5.14.0, Culture=neutral, PublicKeyToken=null)
   at Akka.Persistence.Serialization.PersistenceMessageSerializer.FromBinary (Akka.Persistence, Version=1.5.14.0, Culture=neutral, PublicKeyToken=null)
   at Akka.Serialization.Serializer.FromBinary (Akka, Version=1.5.14.0, Culture=neutral, PublicKeyToken=null)
   at Akka.Persistence.Azure.SerializationHelper.PersistentFromBytes (Akka.Persistence.Azure, Version=1.5.13.0, Culture=neutral, PublicKeyToken=null)
   at Akka.Persistence.Azure.Journal.AzureTableStorageJournal+<ReplayMessagesAsync>d__21.MoveNext (Akka.Persistence.Azure, Version=1.5.13.0, Culture=neutral, PublicKeyToken=null)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Akka.Persistence.Journal.AsyncWriteJournal+<>c__DisplayClass18_0+<<HandleReplayMessages>g__ExecuteHighestSequenceNr|0>d.MoveNext (Akka.Persistence, Version=1.5.14.0, Culture=neutral, PublicKeyToken=null)
Inner exception System.Runtime.Serialization.SerializationException handled at Akka.Serialization.HyperionSerializer.FromBinary:
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue (Hyperion, Version=0.12.2.0, Culture=neutral, PublicKeyToken=null)
   at Hyperion.Serializer.Deserialize (Hyperion, Version=0.12.2.0, Culture=neutral, PublicKeyToken=null)
   at Akka.Serialization.HyperionSerializer.FromBinary (Akka.Serialization.Hyperion, Version=1.5.14.0, Culture=neutral, PublicKeyToken=null)
Inner exception System.Runtime.Serialization.SerializationException handled at Hyperion.ValueSerializers.ObjectSerializer.ReadValue:
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue (Hyperion, Version=0.12.2.0, Culture=neutral, PublicKeyToken=null)
   at lambda_method180 (Anonymously Hosted DynamicMethods Assembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null)
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue (Hyperion, Version=0.12.2.0, Culture=neutral, PublicKeyToken=null)
Inner exception System.Runtime.Serialization.SerializationException handled at Hyperion.ValueSerializers.ObjectSerializer.ReadValue:
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue (Hyperion, Version=0.12.2.0, Culture=neutral, PublicKeyToken=null)
   at lambda_method188 (Anonymously Hosted DynamicMethods Assembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null)
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue (Hyperion, Version=0.12.2.0, Culture=neutral, PublicKeyToken=null)
Inner exception System.InvalidCastException handled at Hyperion.ValueSerializers.ObjectSerializer.ReadValue:
   at lambda_method190 (Anonymously Hosted DynamicMethods Assembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null)
   at Hyperion.ValueSerializers.ObjectSerializer.ReadValue (Hyperion, Version=0.12.2.0, Culture=neutral, PublicKeyToken=null)

This is what the StaticStateActor+NewStateMessage record looks like (it was not changed during the upgrade): [screenshot of the record definition]

Is this a known issue? Are we forgetting something during the upgrade?

I'm also curious why the exception mentions System.Text.Json, since we are not using that as our serializer, and where the type MetadataDb comes from; it's mentioned nowhere in my code :-)

`ExecuteBatchAsLimitedBatches` breaks write atomicity

As of today, the ExecuteBatchAsLimitedBatches extension method wraps the SubmitTransactionAsync method call, automatically chunking all batch operations into chunks of 100 items. This is intentional, because Azure Table Storage does not support batch transactions with more than 100 items.

Remove this in the future, if and when Azure Table Storage supports a working transaction scheme.
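For reference, a minimal sketch of what such a chunking wrapper looks like against the Azure.Data.Tables TableClient; this is illustrative, not the plugin's actual implementation:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Azure.Data.Tables;

public static class TableClientExtensions
{
    private const int MaxBatchSize = 100; // Azure Table Storage transaction limit

    // Submits a large logical batch as several <= 100-item transactions. Each
    // chunk commits independently, which is exactly why this breaks atomicity
    // across the batch as a whole.
    public static async Task ExecuteBatchAsLimitedBatches(
        this TableClient table,
        List<TableTransactionAction> actions,
        CancellationToken token = default)
    {
        for (var i = 0; i < actions.Count; i += MaxBatchSize)
        {
            var chunk = actions.GetRange(i, Math.Min(MaxBatchSize, actions.Count - i));
            await table.SubmitTransactionAsync(chunk, token);
        }
    }
}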

Persistence Recovery Failure Loop

Ran into a bug with persistence that prevents an actor from starting successfully, due to repeated persistence recovery failures, even after purging the persistence store (Azure Table/Blob Storage) via the API calls provided by the persistent actor.

To reproduce:

  1. Induce a persistence recovery failure by changing the properties of a persisted, journaled message and attempting to play back the bad message. (I did this by extending a persisted object with new properties; I did not change any pre-existing property names or values.) The deploy of the new code to our cluster timed out and resulted in a corrupted persistence store for actors that replayed the changed object.

This error appears over and over in the logs once the wire format is corrupted:

(2024-04-22T16:58:34.5152956Z, 2024-04-22T17:32:34.1147660Z] akka_address: akka.tcp://[email protected]:9223 cluster_role: rtf exceptiontype: Azure.Data.Tables.TableTransactionFailedException level: warning

  2. In the event handler for OnRecoveryFailure, invoke the commands to purge the persistence store of all data:
    DeleteMessages(long.MaxValue); DeleteSnapshots(new SnapshotSelectionCriteria(long.MaxValue));

  3. When the actor restarts, it still gets a persistence recovery failure, except this time it's looking for the 0th message:
    [ERROR][04/22/2024 19:59:17.542Z][Thread 0033][akka.tcp://[email protected]:9223/system/sharding/users/2/sduser-7877102] Persistence failure when replaying events for persistenceId [sduser-7877102]. Last known sequence number [0]

From that point on, it's stuck in an endless loop of failing to recover, attempting to purge all of its messages, and then failing on the 0th message, which should no longer exist.
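For context, a minimal sketch of the purge-on-failure pattern from step 2, assuming a ReceivePersistentActor; the class name and persistence ID are illustrative:

using System;
using Akka.Persistence;

public class UserActor : ReceivePersistentActor
{
    public override string PersistenceId => "sduser-7877102"; // from the log above

    protected override void OnRecoveryFailure(Exception reason, object message = null)
    {
        // Purge all journaled events and snapshots before the actor restarts;
        // per this issue, recovery still fails afterwards at sequence number 0.
        DeleteMessages(long.MaxValue);
        DeleteSnapshots(new SnapshotSelectionCriteria(long.MaxValue));
        base.OnRecoveryFailure(reason, message);
    }
}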

Possible Akka.Persistence.Azure and Akka.Cluster.Sharding configuration / compatibility issues

Version: 0.6.1

We just fixed #98, but I'm wondering if this HOCON is valid using this plugin along with Akka.Cluster.Sharding:

akka.actor {
	provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
}
akka.remote {
	dot-netty.tcp {
		hostname = "127.0.0.1"
		port = 5054
	}
}
akka.cluster {
	seed-nodes = ["akka.tcp://[email protected]:4053"]
	roles = ["sharded-node"]
	sharding {
        journal-plugin-id = "akka.persistence.journal.sharding"
        snapshot-plugin-id = "akka.persistence.snapshot-store.sharding"
    }
}

akka.persistence {
  journal {
	azure-table {
	  # qualified type name of the Azure Table Storage persistence journal actor
	  class = "Akka.Persistence.Azure.Journal.AzureTableStorageJournal, Akka.Persistence.Azure"

	  # connection string, as described here: https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string
	  connection-string = "[redacted]"

	  # the name of the Windows Azure Table used to persist journal events
	  table-name = "journaldev"

	  #  Initial timeout to use when connecting to Azure Storage for the first time.
	  connect-timeout = 3s

	  # Timeouts for individual read, write, and delete requests to Azure Storage.
	  request-timeout = 3s

	  # Toggle for verbose logging. Logs individual requests to Azure.
	  # Intended only for debugging purposes. akka.loglevel must be set to DEBUG
	  # in order for this setting to be effective.
	  verbose-logging = off

	  # dispatcher used to drive journal actor
	  plugin-dispatcher = "akka.actor.default-dispatcher"
	}
	sharding {
	  class = "Akka.Persistence.Azure.Journal.AzureTableStorageJournal, Akka.Persistence.Azure"
	  plugin-dispatcher = "akka.actor.default-dispatcher"
	  connection-string = "[redacted]"
	  connection-timeout = 3s
	  table-name = shardingdev
	  auto-initialize = on
	  metadata-table-name = shardingmetadata
	}
  }  

  query {
	journal {
	  azure-table {
		# Implementation class of the Azure ReadJournalProvider
		class = "Akka.Persistence.Azure.Query.AzureTableStorageReadJournalProvider, Akka.Persistence.Azure"
  
		# Absolute path to the write journal plugin configuration entry that this 
		# query journal will connect to. 
		# If undefined (or "") it will connect to the default journal as specified by the
		# akka.persistence.journal.plugin property.
		write-plugin = ""
  
		# How many events to fetch in one query (replay) and keep buffered until they
		# are delivered downstreams.
		max-buffer-size = 100

		# The Azure Table write journal is notifying the query side as soon as things
		# are persisted, but for efficiency reasons the query side retrieves the events 
		# in batches that sometimes can be delayed up to the configured `refresh-interval`.
		refresh-interval = 3s
	  }
	}
  }

  snapshot-store {
	azure-blob-store {
	  # qualified type name of the Azure Blob Storage persistence snapshot storage actor
	  class = "Akka.Persistence.Azure.Snapshot.AzureBlobSnapshotStore, Akka.Persistence.Azure"

	  # connection string, as described here: https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string
	  connection-string = "[redacted]" 

	  # the name of the Windows Azure Blob Store container used to persist snapshots
	  container-name = "snapshotdev"

	  #  Initial timeout to use when connecting to Azure Storage for the first time.
	  connect-timeout = 3s

	  # Timeouts for individual read, write, and delete requests to Azure Storage.
	  request-timeout = 3s

	  # Toggle for verbose logging. Logs individual requests to Azure.
	  # Intended only for debugging purposes. akka.loglevel must be set to DEBUG
	  # in order for this setting to be effective.
	  verbose-logging = off

	  # dispatcher used to drive snapshot storage actor
	  plugin-dispatcher = "akka.actor.default-dispatcher"
	}
	sharding {
	  class = "Akka.Persistence.Azure.Snapshot.AzureBlobSnapshotStore, Akka.Persistence.Azure"
	  plugin-dispatcher = "akka.actor.default-dispatcher"
	  connection-string = "[redacted]"
	  connection-timeout = 3s
	  table-name = "shardingsnapshotdev"
	  auto-initialize = on
	}
  }
}

So the issues are as follows:

  1. Expected Akka.Cluster.Sharding, using persistence as its storage mode, to use the akka.persistence.journal.sharding configuration; instead it appears to write data to a different table (I believe the normal akka.persistence.journal.azure-table one).
  2. Expected Akka.Cluster.Sharding to use the akka.persistence.snapshot-store.sharding snapshot store; instead it appears to be saving local snapshots.

Questions are:

  1. These configurations do appear to work for Akka.Persistence.SqlServer, so what's the issue here?
  2. Is this HOCON itself buggy?

PersistentIds with "/" and ShardCoordinator

Hello,

Azure Table Storage does not support "/" (slash) in a partition key.
Activating the journal within a sample sharding cluster results in errors caused by this restriction.

For example, the built-in Akka.NET shard coordinator uses its actor path as its persistence ID:

Property      Value
PartitionKey  "/system/sharding/areaCoordinator/singleton/coordinator"
RowKey        "0000000000000000001"
SeqNo         1

Resulting in:

Cause: Microsoft.WindowsAzure.Storage.StorageException: Element 0 in the batch returned an unexpected response code.
   at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.ExecuteAsyncInternal[T](RESTCommand`1 cmd, IRetryPolicy policy, OperationContext operationContext, CancellationToken token) in C:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\WindowsRuntime\Core\Executor\Executor.cs:line 316
   at Akka.Persistence.Azure.Journal.AzureTableStorageJournal.WriteMessagesAsync(IEnumerable`1 atomicWrites)
Request Information
RequestID:f7cdc5bd-b002-0003-2d3a-47f6f2000000
RequestDate:Sat, 20 Jun 2020 19:40:07 GMT
StatusMessage:0:Bad Request - Error in query syntax.
ErrorCode:
ErrorMessage:0:Bad Request - Error in query syntax.

You should find a way to escape these characters so that any kind of persistence ID is supported.
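For illustration, one possible escaping scheme, sketched as a standalone helper; a matching unescape would be needed on the read side, and all names here are hypothetical:

using System;
using System.Text;

public static class PartitionKeyEscaper
{
    // Characters Azure Table Storage disallows in partition/row keys, plus ':'
    // itself so the encoding stays reversible.
    private static readonly char[] Escaped = { '/', '\\', '#', '?', ':' };

    public static string Escape(string persistenceId)
    {
        var sb = new StringBuilder(persistenceId.Length);
        foreach (var c in persistenceId)
        {
            if (Array.IndexOf(Escaped, c) >= 0)
                sb.Append(':').Append(((int)c).ToString("x4")); // marker + hex code point
            else
                sb.Append(c);
        }
        return sb.ToString();
    }
}

With this scheme, "/system/sharding/areaCoordinator/singleton/coordinator" would be stored as ":002fsystem:002fsharding:002fareaCoordinator:002fsingleton:002fcoordinator".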

Thanks in advance and best regards
Thomas

CurrentEventsByTag returns a maximum of 1,000 events

I know that there are 19,000 events in the journal for the given tag, but the query only returns the first 1,000.

Configuration:

.WithAzurePersistence(
    azureStorageConnectionString,
    true,
    containerName,
    tableName,
    builder =>
    {
        builder.AddWriteEventAdapter<TaggingEventAdapter>(
            "projection-tagger",
            new[] { typeof(IHasEntraTenantId) }
        );
    }
)

Query:

private void QueryEvents()
{
    var materializer = Context.System.Materializer();
    var projectionActorRef = Self;

    currentEventsByTagQuery
        .CurrentEventsByTag(tag, Offset.NoOffset())
        .Select(env => env.Event)
        .RunWith(
            Sink.ActorRef<object>(
                projectionActorRef,
                new ProjectionRebuildNoMoreEvents { EntraTenantId = entraTenantId },
                (ex) =>
                    new FailedEndOfStream { EntraTenantId = entraTenantId, Reason = ex.Message }
            ),
            materializer
        );
}

When I swap to the Akka.Persistence.Sql plugin I get all 19,000 events.

Thanks,

Kyle Reed

Blob persistence needs support for DefaultAzureCredential()

A connection string isn't enough for Azure Blob Storage: for security, the plugin needs to be able to run with DefaultAzureCredential() in order to delegate access to a managed identity.

Maybe just add an extra boolean param to the connection section of app.conf, which in turn would tell the constructor to pass DefaultAzureCredential() as the second parameter of the connection function.
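For reference, a minimal sketch of the kind of construction this would enable, using the Azure.Identity and Azure.Storage.Blobs packages; the account and container URI is a placeholder:

using System;
using Azure.Identity;
using Azure.Storage.Blobs;

// DefaultAzureCredential resolves a managed identity in Azure (or environment
// variables / developer sign-in locally), so no connection string is needed.
var credential = new DefaultAzureCredential();
var container = new BlobContainerClient(
    new Uri("https://myaccount.blob.core.windows.net/snapshots"), // placeholder
    credential);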

Remove tag subscriber updates from inner write loop

Should we get rid of the tag subscriber logic inside the write loop? It seems out of place, given that we still have to poll the table for changes in order to detect writes that happen exclusively on other nodes in the network. This is something that got copied and pasted into a lot of journals, but it was only ever really appropriate for SQLite / in-memory.

Originally posted by @Aaronontheweb in #207 (comment)

Allow private access on snapshot container

At the moment, access to the snapshot container is hard-coded to public (https://github.com/petabridge/Akka.Persistence.Azure/blob/dev/src/Akka.Persistence.Azure/Snapshot/AzureBlobSnapshotStore.cs#L73). We would like to set the access to private in our case, so it would be nice to have a setting for this.
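A minimal sketch of what such settings could map to in the Azure.Storage.Blobs API; the autoInitialize and accessType setting names are hypothetical:

using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class ContainerInit
{
    // Only create the container when the (hypothetical) setting allows it, and
    // pass the configured access level; PublicAccessType.None keeps it private.
    public static async Task InitAsync(
        BlobContainerClient container, bool autoInitialize, PublicAccessType accessType)
    {
        if (autoInitialize)
            await container.CreateIfNotExistsAsync(accessType);
    }
}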

We're also creating the container and table from outside of our application, so it would also be nice to have a setting that disables container creation (something like this, maybe: https://github.com/akkadotnet/akka.net/blob/dev/src/contrib/persistence/Akka.Persistence.Sqlite/sqlite.conf#L29).

I'd happily make the changes required and open a PR. I'd just like to get the maintainers thoughts first.

Added regular expressions to validate table and container names

Help prevent users from running into runtime issues with Azure Table or Blob Storage by validating the names of containers and tables before we fire up the plugin:

https://blogs.msdn.microsoft.com/jmstall/2014/06/12/azure-storage-naming-rules/

Tables

Table names must conform to these rules:

Table names must be unique within an account.
Table names may contain only alphanumeric characters.
Table names cannot begin with a numeric character.
Table names are case-insensitive.
Table names must be from 3 to 63 characters long.
Some table names are reserved, including "tables". Attempting to create a table with a reserved table name returns error code 404 (Bad Request).
These rules are also described by the regular expression "^[A-Za-z][A-Za-z0-9]{2,62}$".

Table names preserve the case with which they were created, but are case-insensitive when used.

Blobs

A container name must be a valid DNS name, conforming to the following naming rules:

Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
All letters in a container name must be lowercase.
Container names must be from 3 through 63 characters long.
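A sketch of validators built from the rules above; the table regex is the one quoted, while the container regex is an illustrative encoding of the blob naming rules:

using System.Text.RegularExpressions;

public static class AzureNamingValidator
{
    // Quoted table rule: 3-63 chars, letter first, alphanumeric only.
    private static readonly Regex TableName =
        new Regex("^[A-Za-z][A-Za-z0-9]{2,62}$", RegexOptions.Compiled);

    // Container rules: 3-63 chars, lowercase letters/digits/dashes, starts and
    // ends alphanumeric, no consecutive dashes (enforced by the lookahead).
    private static readonly Regex ContainerName =
        new Regex("^(?!.*--)[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$", RegexOptions.Compiled);

    public static bool IsValidTableName(string name) => TableName.IsMatch(name);
    public static bool IsValidContainerName(string name) => ContainerName.IsMatch(name);
}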

Malformed log message

Cause: System.FormatException: Received a malformed formatted message. Log level: [DebugLevel], Template: [Azure table storage wrote entity with status code [{1}]], args: [204]
 ---> System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list.
   at System.Text.StringBuilder.AppendFormatHelper(IFormatProvider provider, String format, ParamsArray args)
   at System.String.FormatHelper(IFormatProvider provider, String format, ParamsArray args)
   at System.String.Format(String format, Object[] args)
   at Akka.Event.DefaultLogMessageFormatter.Format(String format, Object[] args)
   at Akka.Event.LogMessage.ToString()
   at System.Text.StringBuilder.AppendFormatHelper(IFormatProvider provider, String format, ParamsArray args)
   at System.String.FormatHelper(IFormatProvider provider, String format, ParamsArray args)
   at System.String.Format(String format, Object[] args)
   at Akka.Event.LogEvent.ToString()
   at Akka.TestKit.Xunit2.Internals.TestOutputLogger.HandleLogEvent(LogEvent e)
   --- End of inner exception stack trace ---
   at Akka.TestKit.Xunit2.Internals.TestOutputLogger.HandleLogEvent(LogEvent e)
   at lambda_method(Closure , Object , Action`1 , Action`1 , Action`1 , Action`1 , Action`1 )
   at Akka.Tools.MatchHandler.PartialHandlerArgumentsCapture`6.Handle(T value)
   at Akka.Actor.ReceiveActor.ExecutePartialMessageHandler(Object message, PartialAction`1 partialAction)
   at Akka.Actor.ReceiveActor.OnReceive(Object message)
   at Akka.Actor.UntypedActor.Receive(Object message)
   at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
   at Akka.Actor.ActorCell.ReceiveMessage(Object message)
   at Akka.Actor.ActorCell.Invoke(Envelope envelope)
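The root cause is visible in the template: it references placeholder {1}, but only one argument (204) is supplied, so the formatter looks for a second argument and throws. Placeholder indices are zero-based; a sketch of the corrected call, with _log and statusCode as illustrative names:

using Akka.Actor;
using Akka.Event;

public class JournalLoggingExample : ReceiveActor
{
    private readonly ILoggingAdapter _log = Context.GetLogger();

    public JournalLoggingExample()
    {
        Receive<int>(statusCode =>
            // With a single argument the placeholder must be {0}, not {1}.
            _log.Debug("Azure table storage wrote entity with status code [{0}]", statusCode));
    }
}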

Azure credentials throw TaskCanceledException when actor is already stopped

We see the following exception occurring during deployment of our Akka.NET service: [screenshot of the exception]

It is expected that running tasks get cancelled when an actor has (already) stopped, but we get a couple of these on every release, so our alerts and monitors now raise false exception alerts.

Any suggestions?

Add ReadJournal support for PersistenceQuery

I am in the process of creating the AzureTableStorageReadJournal. I have modeled most of it after the MongoDB journal. I can see that the AzureTableStorageJournal does not support Tags, and I would like to provide that. Since Table Storage does not have an array property type, I was wondering what the best approach would be, given that the tags would also need to be queried.
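One common workaround, sketched here with the newer Azure.Data.Tables entity model for concreteness; the entity shape and names are illustrative:

using Azure.Data.Tables;

public static class TaggedEventEntity
{
    // Table Storage has no array type, so one option is to flatten the tags
    // into a single delimited string column on the event row.
    public static TableEntity Create(string persistenceId, long sequenceNr, string[] tags) =>
        new TableEntity(persistenceId, sequenceNr.ToString("D19"))
        {
            ["Tags"] = string.Join(";", tags) // e.g. "tagA;tagB"
        };
}

The catch is that Table Storage cannot filter on substring matches, so querying by tag from such a column means either scanning the table or writing separate index rows (one per tag/event pair) alongside the event itself.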

Support secret rotation

Support for secret rotation is required for production systems.
Usually, secrets for Azure storage are kept in Key Vault, but there is no need to take a hard dependency on Key Vault; any source would do.
Instead of constructors taking a connection string, they could take a Func<Task<string>> that returns the connection string. When an Azure storage API call returns 401 Unauthorized, this function is called to refresh the connection string. It might be a good idea to have a worker which the journal's supervisor policy recreates with a new connection string when it fails with a 401.
Alternatively, an IActorRef could be passed which would respond to appropriate messages with connection strings.
Or the settings object could be the source of the update, in a similar fashion to the above.
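A minimal sketch of the Func<Task<string>> idea; the class is illustrative, with the caller forcing a refresh after a 401:

using System;
using System.Threading.Tasks;

public sealed class RotatingConnectionStringProvider
{
    private readonly Func<Task<string>> _fetchLatest; // e.g. reads the secret from Key Vault
    private string _cached;

    public RotatingConnectionStringProvider(Func<Task<string>> fetchLatest)
        => _fetchLatest = fetchLatest;

    // Return the cached connection string; after a 401 the caller passes
    // forceRefresh: true so the next client is built from the rotated secret.
    public async Task<string> GetAsync(bool forceRefresh = false)
    {
        if (forceRefresh || _cached is null)
            _cached = await _fetchLatest();
        return _cached;
    }
}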

Under high load 'The batch request operation exceeds the maximum 100 changes per change set.' exceptions are thrown when using this plugin in Akka.Cluster.Sharding

When our system is under high load, the following exceptions are thrown if we use Akka.Persistence.Azure as the persistence layer for Akka.Cluster.Sharding.

Is it possible to batch this in chunks of at most 100 items? Then we could work around this limitation of Azure Tables.

Azure.Data.Tables.TableTransactionFailedException: 99:The batch request operation exceeds the maximum 100 changes per change set.
RequestId:b4783f09-5002-002e-4c1c-cf0034000000
Time:2022-09-23T07:19:30.2557589Z
The index of the entity that caused the error can be found in FailedTransactionActionIndex.
Status: 400 (Bad Request)
ErrorCode: InvalidInput

Additional Information:
FailedEntity: 99

Content:
{"odata.error":{"code":"InvalidInput","message":{"lang":"en-US","value":"99:The batch request operation exceeds the maximum 100 changes per change set.\nRequestId:b4783f09-5002-002e-4c1c-cf0034000000\nTime:2022-09-23T07:19:30.2557589Z"}}}

Headers:
X-Content-Type-Options: REDACTED
Cache-Control: no-cache
DataServiceVersion: 3.0;
Content-Type: application/json;odata=minimalmetadata;streaming=true;charset=utf-8

at Azure.Data.Tables.TableRestClient.SendBatchRequestAsync(HttpMessage message, CancellationToken cancellationToken)
at Azure.Data.Tables.TableClient.SubmitTransactionInternalAsync(IEnumerable`1 transactionalBatch, Guid batchId, Guid changesetId, Boolean async, CancellationToken cancellationToken)
at Azure.Data.Tables.TableClient.SubmitTransactionAsync(IEnumerable`1 transactionActions, CancellationToken cancellationToken)
at Akka.Persistence.Azure.Journal.AzureTableStorageJournal.DeleteMessagesToAsync(String persistenceId, Int64 toSequenceNr)
at Akka.Util.Internal.AtomicState.CallThrough(Func`1 task)
at Akka.Util.Internal.AtomicState.CallThrough(Func`1 task)
at Akka.Persistence.Journal.AsyncWriteJournal.<>c__DisplayClass18_0.<g__ProcessDelete|0>d.MoveNext()

The actor path where this is happening is: akka://<my-actorsystem>/system/sharding/<my-shard-region>/5.
These are the region stats from PBM during this time: [screenshot of region stats]

The formatted message in the logging is:

[22/09/23-07:19:30.4253][akka.tcp://<my-actorsystem>@10.250.0.128:12552/system/sharding/heartbeat-shard/5][akka://<my-actorsystem>/system/sharding/<my-shard-region>/5][0022]: PersistentShard messages to [5000] deletion failure: [99:The batch request operation exceeds the maximum 100 changes per change set. RequestId:b4783f09-5002-002e-4c1c-cf0034000000 Time:2022-09-23T07:19:30.2557589Z The index of the entity that caused the error can be found in FailedTransactionActionIndex. Status: 400 (Bad Request) ErrorCode: InvalidInput Additional Information: FailedEntity: 99 Content: {"odata.error":{"code":"InvalidInput","message":{"lang":"en-US","value":"99:The batch request operation exceeds the maximum 100 changes per change set.\nRequestId:b4783f09-5002-002e-4c1c-cf0034000000\nTime:2022-09-23T07:19:30.2557589Z"}}} Headers: X-Content-Type-Options: REDACTED Cache-Control: no-cache DataServiceVersion: 3.0; Content-Type: application/json;odata=minimalmetadata;streaming=true;charset=utf-8 ]
