
:bus: Microsoft SQL Server transport and persistence for Rebus

Home Page: https://mookid.dk/category/rebus

License: Other

C# 96.59% Batchfile 0.50% HTML 2.17% CSS 0.64% JavaScript 0.11%
rebus sql-server transport persistence message-queue

rebus.sqlserver's Introduction

Rebus.SqlServer

install from nuget

Provides Microsoft SQL Server implementations for Rebus for

  • transport
  • sagas
  • subscriptions
  • timeouts
  • saga snapshots


Which versions of SQL Server are supported?

Rebus' SQL package requires at least Microsoft SQL Server 2008.

A word of caution regarding the SQL transport

Microsoft SQL Server is a relational database and not a queueing system.

While it does provide the necessary mechanisms to implement queues, it's not optimized for the type of operations required to implement high-performance queues.

Therefore, please only use the SQL transport if your requirements are fairly modest (and what that means in practice probably depends a lot on the hardware available to you).

Configuration examples

The Rebus configuration spell goes either

services.AddRebus(configure => configure.(...));

or

Configure.With(...)
	.(...)
	.Start();

depending on whether you're using Microsoft DI or some other IoC container.

The following configuration examples will use the Microsoft DI-style of configuration, but the use of Rebus' configuration extensions is the same regardless of which type of configuration you are using, so it should be fairly easy to convert to the style you need.

Transport

Rebus only really requires one part of its configuration: a configuration of the "transport" (i.e. which queueing system you're going to use).

The SQL transport is not recommended for heavier workloads, but it can be fine in cases where you do not require a super-high throughput. Here's how to configure it (in this case using the name queue-name as the name of the instance's input queue):

services.AddRebus(
	configure => configure
		.Transport(t => t.UseSqlServer(connectionString, "queue-name"))
);

Sagas

To configure Rebus to store sagas in SQL Server, you can do it like this (using the table 'Sagas' for the saga data, and 'SagaIndex' for the corresponding correlation properties):

services.AddRebus(
	configure => configure
		.(...)
		.Sagas(t => t.StoreInSqlServer(connectionString, "Sagas", "SagaIndex"))
);

Subscriptions

To configure Rebus to store subscriptions in SQL Server, you can do it like this (using the table 'Subscriptions'):

services.AddRebus(
	configure => configure
		.(...)
		.Subscriptions(t => t.StoreInSqlServer(connectionString, "Subscriptions", isCentralized: true))
);

Please note the use of isCentralized: true – it indicates that the subscription storage is "centralized", meaning that both subscribers and publishers use the same storage.

If you use the isCentralized: false option, then subscribers need to know the queue names of the publishers of the events they want to subscribe to, and then they will subscribe by sending a message to the publisher.

Using isCentralized: true makes the most sense in most scenarios, as it's easier to work with.
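For completeness, a non-centralized setup could be sketched like this (assumptions: the publisher's input queue is called "publisher-queue" and SomeEvent is a made-up event type). The publisher owns the subscription storage:

services.AddRebus(
	configure => configure
		.Transport(t => t.UseSqlServer(connectionString, "publisher-queue"))
		.Subscriptions(t => t.StoreInSqlServer(connectionString, "Subscriptions", isCentralized: false))
);

and each subscriber maps the event type to the publisher's queue, so that bus.Subscribe&lt;SomeEvent&gt;() knows where to send the subscription request:

services.AddRebus(
	configure => configure
		.Transport(t => t.UseSqlServer(connectionString, "subscriber-queue"))
		.Routing(r => r.TypeBased().Map<SomeEvent>("publisher-queue"))
);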

Timeouts

If you're using a transport that does not natively support "timeouts" (also known as "deferred messages", or "messages sent into the future" 🙂), you can configure one of your Rebus instances to function as a "timeout manager". The timeout manager must have some kind of timeout storage configured, and you can use SQL Server to do that.

You configure it like this (here using RabbitMQ as the transport):

services.AddRebus(
	configure => configure
		.Transport(t => t.UseRabbitMq(connectionString, "timeout_manager"))
		.Timeouts(t => t.StoreInSqlServer(connectionString, "Timeouts"))
);

In most cases, it can be super nice to simply configure one single timeout manager with a globally known queue name (e.g. "timeout_manager") and then make use of it from all other Rebus instances by configuring them to use the timeout manager for deferred messages:

services.AddRebus(
	configure => configure
	.Transport(t => t.UseRabbitMq(connectionString, "another-queue"))
		.Timeouts(t => t.UseExternalTimeoutManager("timeout_manager"))
);

This will cause someMessage to be sent to the timeout manager when you await bus.Defer(TimeSpan.FromMinutes(5), someMessage); the timeout manager will store it in its timeouts database for 5 minutes before sending it to whoever was configured as the recipient of someMessage.
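To make that concrete, deferring a message from one of the other endpoints could look like this (a sketch; SomeMessage is a made-up type):

// the deferred message is sent to the timeout manager's input queue,
// stored in its Timeouts table, and delivered to the original recipient when due
await bus.Defer(TimeSpan.FromMinutes(5), new SomeMessage());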

rebus.sqlserver's People

Contributors

clegendre, cleytonb, kendallb, larsw, magnus-tretton37, mathiasnohall, mookid8000, mrmdavidson, nativenolde, rsivanov, seankearon, thomasdcses, tompazourek, zlepper


rebus.sqlserver's Issues

Make it easy to deploy SQL Schema at deploy time, not run time (to use dedicate login, separate from application login)

Typically in a web application, the SQL login used by the application will have restricted permissions, obviously for security reasons. Creating the SQL schema is usually required to happen upfront, when the application is being deployed (and is often performed in scripts, e.g. PowerShell).

I have written some methods that make this easy to achieve for the different SQL persistence types, something like...

    namespace Rebus.SqlServer.Transport
    {
        public static class SchemaInstaller
        {
            public static void EnsureInstalled(string connectionString, string tableName)
            {
                EnsureInstalled(connectionString, tableName, LogLevel.Debug);
            }
    
            public static void EnsureInstalled(string connectionString, string tableName, LogLevel logLevel)
            {
                var loggerFactory = new ConsoleLoggerFactory(true);
                loggerFactory.MinLevel = logLevel;
    
                EnsureInstalled(connectionString, tableName, loggerFactory);
            }
    
            public static void EnsureInstalled(string connectionString, string tableName, IRebusLoggerFactory rebusLoggerFactory)
            {
                var connectionProvider = new DbConnectionProvider(connectionString, rebusLoggerFactory);
                var taskFactory = new TplAsyncTaskFactory(rebusLoggerFactory);
    
                using (var transport = new SqlServerTransport(connectionProvider, tableName, null, rebusLoggerFactory, taskFactory))
                {
                    transport.EnsureTableIsCreated();
                }
            }
        }
    }

If you think this is of value I can submit a PR.

Cheers.

Should `SqlServerTransport` use `ConfigureAwait(false)` ?

Just looking through some of the code in Rebus.SqlServer. I know that Rebus core's ThreadPoolWorker uses ConfigureAwait(false) for all its async operations. However, I notice that in SqlServerTransport (and the lease transport, too, I guess) this isn't the case.

Eg.

var connection = await GetConnection(context);

Is there a reason why this isn't using ConfigureAwait(false)?
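For reference, the suggested change would amount to something like this (a sketch of the pattern only, not the actual transport code):

// avoid capturing the synchronization context, as Rebus core's ThreadPoolWorker does
var connection = await GetConnection(context).ConfigureAwait(false);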

Service broker-based transport

@jeffreyabecker made a SQL Server Service Broker-based transport for Rebus. We should see if it is fit enough to be added to the Rebus.SqlServer package as an additional transport option.

  • Create separate branch
  • Pull in code from here
  • See if it works
  • Create configuration extensions for it

Azure Microsoft.Data.SqlConnection and AccessToken API

Hi,

I am trying to get the 7.0.0-a1 prerelease to play with Azure DBs and AccessToken authentication.

Is there an API for generating and setting the AccessToken on the internally generated Microsoft.Data.SqlClient.SqlConnection of the Rebus.SqlServer library? I wasn't able to find anything.

The workaround that I came up with is creating my own implementation of your IDbConnectionProvider interface that provisions for AccessToken generation, but this is a lot of extra boilerplate code that replicates 99% of your own DbConnectionProvider class. Maybe you could make DbConnectionProvider accept a callback delegate/DI interface for injecting an AccessToken?
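To illustrate the workaround, a minimal custom provider might look roughly like this (a sketch only: the exact shapes of IDbConnectionProvider and DbConnectionWrapper vary between versions, and getAccessTokenAsync stands in for whatever token acquisition you use, e.g. via Azure.Identity):

class AccessTokenDbConnectionProvider : IDbConnectionProvider
{
	readonly string _connectionString;
	readonly Func<Task<string>> _getAccessTokenAsync; // hypothetical token callback

	public AccessTokenDbConnectionProvider(string connectionString, Func<Task<string>> getAccessTokenAsync)
	{
		_connectionString = connectionString;
		_getAccessTokenAsync = getAccessTokenAsync;
	}

	public async Task<IDbConnection> GetConnection()
	{
		// set the token on the connection instead of putting credentials in the connection string
		var connection = new SqlConnection(_connectionString)
		{
			AccessToken = await _getAccessTokenAsync()
		};

		await connection.OpenAsync();

		// wrap connection + transaction the same way the built-in DbConnectionProvider does
		return new DbConnectionWrapper(connection, connection.BeginTransaction(IsolationLevel.ReadCommitted), managedExternally: false);
	}
}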

Thanks!

It should be possible to configure the SQL Server transaction isolation level

The Rebus.SqlServer implementation has an IsolationLevel.ReadCommitted hardcoded in DbConnectionFactoryProvider that cannot be changed from the outside. In our application we need it to be IsolationLevel.Snapshot to work with the rest of our code base. Please make it possible to change it.

Question: Handling Schema Updates

I just opened #49 and was thinking it'd be good to open up a discussion about this as it's something that's affected us before and I'm sure we're not the only one.

We're running Rebus in a cloud scenario with auto-scaling. At this very moment we have 12 web servers running Rebus as a one-way queue to communicate with 5 back-end workers. (Those numbers change throughout the day/week/month.) This is all using the SQL transport and running in a tightly controlled environment where the application runs with the least amount of privileges possible. As such we're faced with two issues:

  1. The application cannot make schema modifications at run time - it's simply going to throw because the permissions aren't there. This is a hard requirement.
  2. When we deploy the changes are progressively rolled out. So when we click the "Deploy!" button it might be that N web servers and Y backend servers are taken out, upgraded, and put back in. This means we then have a mix of some old code and some new code running in production. This continues until eventually all code is new code. But we have a measurable moment in time (tens of minutes) where code is running side by side. Again, this is a hard requirement.

As a result of these we need to carefully plan and roll out database changes so that they are backwards compatible with the "old" version of the code (e.g. we'd not mark a column as not-null without it also having a default, or else old code would fail to insert) as well as revertible should we need to roll back the application.

The move to table-per-queue, for instance, has been a headache for us. I wonder if there's something we can come up with that makes these schema modifications easier to manage. Maybe it's something as simple as changelog documentation for releases of Rebus.SqlServer that indicates any schema modifications required. Or an embedded resource whose scripts progressively build the "now" version of the table, e.g.:

IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = '$TableName$' AND schema_id = SCHEMA_ID('$SchemaName$'))
BEGIN
  CREATE TABLE $SchemaName$.$TableName$ ( ... )
END

-- In V2 we added the Foo column
IF NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = 'Foo' AND object_id = OBJECT_ID('$SchemaName$.$TableName$'))
BEGIN
  ALTER TABLE $SchemaName$.$TableName$ ADD Foo VARCHAR(MAX) NULL
END

-- In V3 we did X

There could then be a static method IList<string> SqlServerTransport.GetMigrationScripts(string schema, string table) that returns all of these snippets so they can be executed. The existing EnsureTableIsCreated() method would just be a wrapper around this, but people who need to manage these migrations externally could pull in the list of migrations and execute them themselves. There are libraries that manage this for you (e.g. FluentMigrator), but bringing in a whole new library for it feels a bit excessive.
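Purely as a sketch of what that hypothetical API could look like (none of this exists today; the names are made up):

public static class SqlServerTransportMigrations
{
	// returns the schema snippets in order, so they can be run by an external deployment step;
	// EnsureTableIsCreated() could then simply execute whatever is missing
	public static IList<string> GetMigrationScripts(string schemaName, string tableName)
	{
		return new List<string>
		{
			// V1: create the table if it does not exist
			$"IF OBJECT_ID('{schemaName}.{tableName}') IS NULL CREATE TABLE [{schemaName}].[{tableName}] ( ... )",

			// V2: add the Foo column if it is missing
			$"IF COL_LENGTH('{schemaName}.{tableName}', 'Foo') IS NULL ALTER TABLE [{schemaName}].[{tableName}] ADD Foo VARCHAR(MAX) NULL",
		};
	}
}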

Thoughts?

DbConnectionProvider does not open connection if enlistInAmbientTransaction = true

I'm getting the following error, when configuring transport with enlistInAmbientTransaction = true:

Rebus.Injection.ResolutionException : Could not resolve Rebus.Transport.ITransport with decorator depth 0 - registrations: Rebus.Injection.Injectionist+Handler
----> System.InvalidOperationException : ExecuteReader requires an open and available Connection. The connection's current state is closed
at Rebus.Injection.Injectionist.ResolutionContext.GetTService
at Rebus.Config.RebusConfigurer.<>c.b__12_12(IResolutionContext c)
at Rebus.Injection.Injectionist.ResolutionContext.GetTService
at Rebus.Config.RebusConfigurer.<>c__DisplayClass12_0.b__25(IResolutionContext c)
at Rebus.Injection.Injectionist.ResolutionContext.GetTService
at Rebus.Config.RebusConfigurer.<>c__DisplayClass12_0.b__26(IResolutionContext c)
at Rebus.Injection.Injectionist.ResolutionContext.GetTService
at Rebus.Injection.Injectionist.GetTService
at Rebus.Config.RebusConfigurer.Start()

Looking at the source, I think the problem comes from DbConnectionProvider, where the connection is opened only if enlistInAmbientTransaction = false:

    private SqlConnection CreateSqlConnectionSuppressingAPossibleAmbientTransaction()
    {
      SqlConnection sqlConnection;
      using (new TransactionScope(TransactionScopeOption.Suppress))
      {
        sqlConnection = new SqlConnection(this._connectionString);
        sqlConnection.Open();  // Connection is opened here, if enlistInAmbientTransaction = false
      }
      return sqlConnection;
    }

    private SqlConnection CreateSqlConnectionInAPossiblyAmbientTransaction()
    {
      SqlConnection sqlConnection = new SqlConnection(this._connectionString);
      // Should also open connection here?
      Transaction current = Transaction.Current;
      if (current != (Transaction) null)
        sqlConnection.EnlistTransaction(current);
      return sqlConnection;
    }
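Presumably the fix is simply to open the connection before (possibly) enlisting it, along these lines (a sketch of the suggested change, not the actual fix):

    private SqlConnection CreateSqlConnectionInAPossiblyAmbientTransaction()
    {
      var sqlConnection = new SqlConnection(this._connectionString);
      sqlConnection.Open(); // open the connection regardless of whether we enlist in an ambient transaction
      var current = Transaction.Current;
      if (current != null)
        sqlConnection.EnlistTransaction(current);
      return sqlConnection;
    }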

Input queue and destination addresses should be able to contain multiple '.' without requiring brackets

Currently the assumption is that if an input queue or destination address contains a single '.', it is prefixed with a SQL schema name, like "dbo.tablename".
When there are 2 or more '.'s, the assumption/suggestion/workaround/exception in the code is to enforce that all names are enclosed in brackets, like "[some.nice.endpoint]".
This workaround implies that when changing transports, all existing client code needs to be changed, or you end up with if (transport == sql) checks all over the place if multiple transports are used (unit testing, integration testing, production).

My suggestion is that the SQL Server transport (and it should probably apply to all transports...) allows any character in the provided input queue/destination address/topic names, and internally the transport/subscription storage just formats/maps them to a safe name if really needed. Next to this there can be configuration options to give flexibility where it's needed, e.g. for backwards compatibility and for providing schema names.

So practically for the SQL Server transport I'm thinking about the following (a rough sketch follows the list):

  • Add a "UseLegacyNaming()" configuration option that internally uses a LegacyTableNameFormatter which has the current code in TableName.Parse/TableNameFromParts.
  • Add a DefaultNameFormatter that just prefixes with "dbo.", Note that it's valid to have brackets in a SQL tablename.
  • Provide a defaultSchema parameter to the DefaultNameFormatter to use instead of "dbo".
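A rough sketch of what the formatter abstraction could look like (hypothetical; the interface and its members are made up for illustration, and the TableName constructor shown is assumed):

public interface ITableNameFormatter
{
	TableName Format(string queueOrTopicName);
}

public class DefaultNameFormatter : ITableNameFormatter
{
	readonly string _defaultSchema;

	public DefaultNameFormatter(string defaultSchema = "dbo") => _defaultSchema = defaultSchema;

	public TableName Format(string queueOrTopicName)
	{
		// accept any characters in the logical name and use it verbatim as the table name
		// under the configured schema - no special handling of '.' or brackets
		return new TableName(_defaultSchema, queueOrTopicName);
	}
}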

Untrusted Initialization Issue with ConnectionStrings

We have been running some security tests on our project and the 3rd party package we consume.

A security issue has been flagged on Rebus.SqlServer. The issue boils down to the extension methods that take the parameter connectionStringOrConnectionStringName: any connection string can be passed into the method (untrusted data), rather than just a connection string name, which would restrict it to connection strings defined within app.config.

CWE ID 15
This call to system_data_sqlclient_dll.System.Data.SqlClient.SqlConnection.!newinit_0_1() allows external control of system settings.
The argument to the function is constructed using untrusted input, which can disrupt service or cause an application to behave in unexpected ways.
The first argument to !newinit_0_1() contains tainted data.
The tainted data originated from earlier calls to rebus_sqlserver_dll.rebus.config.sqlservertransportconfigurationextensions.usesqlserver,
rebus_sqlserver_dll.rebus.config.sqlservertransportconfigurationextensions.usesqlserverasonewayclient,
rebus_sqlserver_dll.rebus.config.sqlservertimeoutsconfigurationextensions.storeinsqlserver,
rebus_sqlserver_dll.rebus.config.sqlserversubscriptionsconfigurationextensions.storeinsqlserver,
rebus_sqlserver_dll.rebus.config.sqlserversagasnapshotsconfigurationextensions.storeinsqlserver,
rebus_sqlserver_dll.rebus.config.sqlserversagaconfigurationextensions.storeinsqlserver,
and rebus_sqlserver_dll.rebus.config.sqlserverdatabusconfigurationextensions.storeinsqlserver.

Ability to customize column sizes for subscription storage

Because column sizes are tricky when they are NVARCHARs and they are used in an index, it should be possible to configure the sizes.

It could look like this:

Configure.With(...)
	.Transport(t => (...))
	.Subscriptions(s => {
		s.StoreInSqlServer(..., "Subscriptions")
			.SetColumnSizes(topicColumnSize: 350, addressColumnSize: 50);
	})
	.Start();

which would then create the schema accordingly.

Moreover, a warning could be logged or an error could be thrown if the schema reflection in Initialize detects that the schema does not match the sizes specified in the Customize... thing.

Add CreationTime timestamp

because it is not possible to see the age of attachments that have not yet been read (and might never be), which makes it hard to come up with a sensible way of removing these attachments.

Make saga serializer configurable

If I use the configuration builder like so:

.Serialization(s => s.UseNewtonsoftJson(JsonInteroperabilityMode.PureJson))

the object serializer still uses JSON with full type information, e.g. when persisting saga data to the SQL database in SqlServerSagaStorage.cs.

The NewtonsoftJsonConfigurationExtensions should register an instance of ObjectSerializer which honors the PureJson interoperability setting.

Performance & Plan Cache Issues

I'm getting very high CPU usage using Rebus pub/sub and sagas with an Azure SQL database.

I've noticed a high number of SQL query plan compilations, which I think is due to Rebus not setting the parameter length when inserting into varbinary(max) columns using ADO.NET.

Would adding -1 allow sql server to compile and reuse a single plan?

command.Parameters.Add("headers", SqlDbType.VarBinary, -1).Value = serializedHeaders;
command.Parameters.Add("body", SqlDbType.VarBinary, -1).Value = message.Body;

Also noticed that this query generates a new plan every time.

SET NOCOUNT ON

  ;WITH TopCTE AS (
    SELECT  TOP 1
        [id],
        [headers],
        [body]
    FROM  [Schema].[Table] M WITH (ROWLOCK, READPAST)
    WHERE  
                M.[visible] < getdate()
    AND    M.[expiration] > getdate()
    ORDER
    BY    [priority] DESC,
        [id] ASC
  )
  DELETE  FROM TopCTE
  OUTPUT  deleted.[id] as [id],
      deleted.[headers] as [headers],
      deleted.[body] as [body]

This article discusses the impact that not setting parameter lengths can have:

https://blogs.msdn.microsoft.com/psssql/2010/10/05/query-performance-and-plan-cache-issues-when-parameter-length-not-specified-correctly/

Can't seem to use Rebus.SqlServer (4.0.0-b03) with Rebus 4.0.0-b06

This was an oddity, not really something that needs to be fixed, but I just thought I would note it here in case someone else comes across it.

For some reason, in VS 2017, after I loaded the .NET Core version of Rebus and Rebus.SqlServer, Visual Studio did not see Rebus.SqlServer, and I couldn't hook up the transport.

After restarting VS, it all seemed to work as it should.

Different databases for transport and databus.

Hello! I like Rebus, thanks for your work.
I am developing a large system using Rebus with SQL Server for transport, data bus, persistence and subscriptions. There are some limitations preventing the use of other technologies like RabbitMQ or MongoDB, so I am limited to Windows-only and .NET-only solutions. There will be about 200 microservices, so I must care about scaling the transport and the data bus. I would like to have more than one DB to store queues and BLOBs, maybe even one DB per microservice, but I cannot find any way to send requests between databases. There is a connection factory available, but I have no message context in the factory method to choose between different connections. Is there any possibility to choose the connection string based on message type, for example? Or maybe some queue naming convention can do the trick?

The "recipient" is missing in 5.0.5

Hey there.

I have just updated my Rebus.SqlServer from version 4.0.0 to 5.0.5.
It appears that the UseSqlServer function is missing the parameter that created the recipient column in the corresponding table.
Old code UseSqlServer("connection string", "table name", "recipient name") is now invalid.
New code UseSqlServer("connection string", "table name") creates the table without the recipient column.

Has this "recipient feature" been removed, or has it been moved somewhere else in the configuration?

Stale subscription management

Hi,

We're using the SQL server subscription storage. When our subscriber instances failover, the old subscriptions are not cleaned up. New subscriptions are created on the newly created instances with different subscriber names. I'm interested in recommendations or existing approaches for managing subscribers in a "bad actor" scenario, when the original subscriber does not get the opportunity to unsubscribe. Is there anything already available in this space?

Bigint data type for id in the table for due messages

Hi

SqlServerTimeoutManager creates a table with id defined as int.

[id] [int] IDENTITY(1,1) NOT NULL

It seems that bigint would be a better choice for the id, as the table might be used in a production system with many inserts into the due messages table over many years. I had to make a fork of SqlServerTimeoutManager just to change the data type of the id.

Configure saga storage on MS SQL Server with Always Encrypted

If I try to start an Azure WebJob that uses Rebus to communicate with a message queue using SqlServer storage for the sagas it throws the following exception (this is the innermost exception):

Keyword not supported: 'column encryption setting'.

   at System.Data.Common.DbConnectionOptions.ParseInternal(Dictionary`2 parsetable, String connectionString, Boolean buildChain, Dictionary`2 synonyms, Boolean firstKey)
   at System.Data.Common.DbConnectionOptions..ctor(String connectionString, Dictionary`2 synonyms)
   at System.Data.SqlClient.SqlConnectionString..ctor(String connectionString)
   at System.Data.SqlClient.SqlConnectionFactory.CreateConnectionOptions(String connectionString, DbConnectionOptions previous)
   at System.Data.ProviderBase.DbConnectionFactory.GetConnectionPoolGroup(DbConnectionPoolKey key, DbConnectionPoolGroupOptions poolOptions, DbConnectionOptions& userConnectionOptions)
   at System.Data.SqlClient.SqlConnection.ConnectionString_Set(DbConnectionPoolKey key)
   at System.Data.SqlClient.SqlConnection.set_ConnectionString(String value)
   at System.Data.SqlClient.SqlConnection..ctor(String connectionString)
   at Rebus.SqlServer.DbConnectionProvider.CreateSqlConnectionSuppressingAPossibleAmbientTransaction()
   at Rebus.SqlServer.DbConnectionProvider.<GetConnection>d__5.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
   at Rebus.SqlServer.Sagas.SqlServerSagaStorage.<EnsureTablesAreCreatedAsync>d__14.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
   at Rebus.SqlServer.AsyncHelpers.CustomSynchronizationContext.<<Run>b__7_0>d.MoveNext()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Rebus.SqlServer.AsyncHelpers.CustomSynchronizationContext.Run()
   at Rebus.SqlServer.AsyncHelpers.RunSync(Func`1 task)
   at Rebus.SqlServer.Sagas.SqlServerSagaStorage.EnsureTablesAreCreated()
   at Rebus.Config.SqlServerSagaConfigurationExtensions.<>c__DisplayClass0_0.<StoreInSqlServer>b__0(IResolutionContext c)
   at Rebus.Injection.Injectionist.Resolver`1.InvokeResolver(IResolutionContext context)
   at Rebus.Injection.Injectionist.ResolutionContext.Get[TService]()

Digging a little bit into the source code, I see it uses System.Data.SqlClient, so I understand the source of the exception. But is it possible to inject a connection factory and use Microsoft.Data.SqlClient?

Make ExpiredMessagesCleanupTask interval configurable

Hello,

I'm experiencing timeouts in ExpiredMessagesCleanupTask, so I dug into the code and noticed that ExpiredMessagesCleanupInterval is not used for ExpiredMessagesCleanupTask (which is hardcoded to 60 seconds).
Could you bind this configuration to the task, and make it more accessible via SqlServerTransportOption?

Thank you!

The transaction operation cannot be performed because there are pending requests working on this transaction

In a scenario where a service has been shut down for a while, we get the following error when it starts going through the ton of messages that are waiting for it on start-up:

WARN  2017-03-31 11:03:43,604 334386ms  [15] RebusLogger            Warn               - Unhandled exception 1 while handling message with ID 9fff59ae-b132-479a-9813-da386d35b68f: System.Data.SqlClient.SqlException (0x80131904): The transaction operation cannot be performed because there are pending requests working on this transaction.
   at System.Data.SqlClient.SqlConnection.onerror (SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.SqlInternalConnection.onerror (SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParser.TdsExecuteTransactionManagerRequest(Byte[] buffer, TransactionManagerRequestType request, String transactionName, TransactionManagerIsolationLevel isoLevel, Int32 timeout, SqlInternalTransaction transaction, TdsParserStateObject stateObj, Boolean isDelegateControlRequest)
   at System.Data.SqlClient.SqlInternalConnectionTds.ExecuteTransactionYukon(TransactionRequest transactionRequest, String transactionName, IsolationLevel iso, SqlInternalTransaction internalTransaction, Boolean isDelegateControlRequest)
   at System.Data.SqlClient.SqlInternalConnectionTds.ExecuteTransaction(TransactionRequest transactionRequest, String name, IsolationLevel iso, SqlInternalTransaction internalTransaction, Boolean isDelegateControlRequest)
   at System.Data.SqlClient.SqlInternalTransaction.Commit()
   at System.Data.SqlClient.SqlTransaction.Commit()
   at Rebus.SqlServer.DbConnectionWrapper.<Complete>d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Rebus.SqlServer.Transport.SqlServerTransport.<>c__DisplayClass32_1.<<GetConnection>b__1>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Rebus.Transport.TransactionContext.<Invoke>d__24.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Rebus.Transport.TransactionContext.<Commit>d__17.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Rebus.Retry.Simple.SimpleRetryStrategyStep.<DispatchWithTrackerIdentifier>d__7.MoveNext()

It does not happen for all of the messages - only very few.

A quick search for "The transaction operation cannot be performed because there are pending requests working on this transaction" reveals that it could be a "reader" somewhere that has not been closed (see for instance http://stackoverflow.com/questions/36552285/the-transaction-operation-cannot-be-performed-because-there-are-pending-requests).

Rebus.SqlServer may behave incorrectly on databases with READ_COMMITTED_SNAPSHOT option set to ON

We are currently using Rebus with the Rebus.SqlServer transport on a database where the READ_COMMITTED_SNAPSHOT option is set to ON. Everything works fine as long as we use only a single process for consuming events, but things break as soon as we try to add additional processes. We think the problem is related to the ROWLOCK and READPAST hints that Rebus.SqlServer relies on for dequeuing messages, which unfortunately do not work correctly with the SNAPSHOT isolation level that is now used instead of Rebus.SqlServer's default READ COMMITTED isolation level (because of the READ_COMMITTED_SNAPSHOT ON database option).
One possible solution to this problem might be to add an additional table hint, READCOMMITTEDLOCK, to the Rebus.SqlServer dequeue SQL command. You can read more about the READCOMMITTEDLOCK table hint in Microsoft's own documentation:

Specifies that read operations comply with the rules for the READ COMMITTED isolation level by using locking. The Database Engine acquires shared locks as data is read and releases those locks when the read operation is completed, regardless of the setting of the READ_COMMITTED_SNAPSHOT database option. For more information about isolation levels, see SET TRANSACTION ISOLATION LEVEL (Transact-SQL).

DataBus Attachments removal strategy ?

Hello, first of all thanks for the good job on the Rebus project.
I am using the SQL data bus and discovered that my table dbo.tDataBus has grown to 50 GB.
After some research I cannot find how attachments should be cleaned up/deleted.
What am I missing?

The server failed to resume the transaction

I have come across this issue 2 days in a row. The root cause seems to be a poorly managed network (we have other network issues with this customer) however I would like to be able to recover from the issue more gracefully.

It appears the SQL connection in use has been closed unexpectedly but left in an invalid state, and Rebus is attempting to reuse it continually (the final exception below repeats indefinitely until I restart the process).

Do you have any advice on where to start? I can't see much in the way of connection cleanup in the source, but I may have missed it.

My Rebus config is as follows, using the default connection provider and the 4.0.0 .NET 4.5 package:

            container.ConfigureRebus(c => c
                .Logging(l => l.Serilog(log))
                .Transport(t => t.UseSqlServer(Configuration.KalibreMessageQueueConnection, Configuration.KalibreMessageQueueTable, Configuration.KalibreMessageQueueInputQueue))
                .Routing(r => r.TypeBased()
                    ...
                )
                .Subscriptions(s => s.StoreInSqlServer(Configuration.KalibreMessageQueueConnection, Configuration.KalibreMessageQueueSubscriptionTable))
                .Options(o =>
                {
                    o.EnableDataBus().StoreInSqlServer(Configuration.KalibreMessageQueueConnection, Configuration.KalibreMessageQueueDataTable);
                    o.SetNumberOfWorkers(int.Parse(System.Configuration.ConfigurationManager.AppSettings["BusWorkers"]));
                    o.SetMaxParallelism(int.Parse(System.Configuration.ConfigurationManager.AppSettings["BusWorkers"]));
                    o.SetBackoffTimes(TimeSpan.FromSeconds(1));
                })
                .Start());
2018-09-21 04:28:25.754 +09:30 [Error] System.AggregateException: One or more errors occurred. ---> System.IndexOutOfRangeException: address
   at System.Data.ProviderBase.FieldNameLookup.GetOrdinal(String fieldName)
   at System.Data.SqlClient.SqlDataReader.GetOrdinal(String name)
   at System.Data.SqlClient.SqlDataReader.get_Item(String name)
   at Rebus.SqlServer.Subscriptions.SqlServerSubscriptionStorage.<GetSubscriberAddresses>d__9.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Rebus.Bus.RebusBus.<InnerPublish>d__32.MoveNext()
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at Kalibre.Dashboard.Shared.Handlers.EventsUpdatedHandler.<Handle>d__2.MoveNext()
---> (Inner Exception #0) System.IndexOutOfRangeException: address
   at System.Data.ProviderBase.FieldNameLookup.GetOrdinal(String fieldName)
   at System.Data.SqlClient.SqlDataReader.GetOrdinal(String name)
   at System.Data.SqlClient.SqlDataReader.get_Item(String name)
   at Rebus.SqlServer.Subscriptions.SqlServerSubscriptionStorage.<GetSubscriberAddresses>d__9.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Rebus.Bus.RebusBus.<InnerPublish>d__32.MoveNext()<---

2018-09-21 04:28:26.098 +09:30 [Warning] An error occurred when attempting to receive the next message: "System.IndexOutOfRangeException: headers
   at System.Data.ProviderBase.FieldNameLookup.GetOrdinal(String fieldName)
   at System.Data.SqlClient.SqlDataReader.GetOrdinal(String name)
   at System.Data.SqlClient.SqlDataReader.get_Item(String name)
   at Rebus.SqlServer.Transport.SqlServerTransport.<Receive>d__27.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Rebus.Workers.ThreadPoolBased.ThreadPoolWorker.<ReceiveTransportMessage>d__17.MoveNext()"


2018-09-21 04:28:34.313 +09:30 [Warning] An error occurred when attempting to receive the next message: "System.Data.SqlClient.SqlException (0x80131904): The server failed to resume the transaction. Desc:48000008ac.
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlDataReader.TryConsumeMetaData()
   at System.Data.SqlClient.SqlDataReader.get_MetaData()
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.CompleteAsyncExecuteReader()
   at System.Data.SqlClient.SqlCommand.InternalEndExecuteReader(IAsyncResult asyncResult, String endMethod)
   at System.Data.SqlClient.SqlCommand.EndExecuteReaderInternal(IAsyncResult asyncResult)
   at System.Data.SqlClient.SqlCommand.EndExecuteReaderAsync(IAsyncResult asyncResult)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Rebus.SqlServer.Transport.SqlServerTransport.<Receive>d__27.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Rebus.Workers.ThreadPoolBased.ThreadPoolWorker.<ReceiveTransportMessage>d__17.MoveNext()
ClientConnectionId:7c4a02b7-95ed-4523-9028-3918bf7be1c6
Error Number:3971,State:1,Class:16"

Retrieve connection and transaction from IDbConnection

Hi!

I was looking at the SqlAllTheWay sample since I wanted to make sure that Rebus and my data persistence would share the same DB connection/transaction. Looking at IDbConnection I saw that there is a CreateCommand method. Is there a reason for only exposing CreateCommand and not the SqlConnection and SqlTransaction? I guess I can get the connection and transaction from the SqlCommand, but that seems a bit strange to do.

When querying for work an implicit conversion is used and the index could be better

A lot of time is used on sorting. I can see that the DESC/ASC on priority was changed at some point, but I suspect the index was never updated. The query to receive messages therefore has a sort operation accounting for most of its time, with these IO stats:

Table 'applicationqueue_invoiceindexer'. Scan count 1, logical reads 3725, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

I tinkered a bit and introduced a receive index like this:
CREATE INDEX IX_Receive ON TABLENAME (priority DESC, visible ASC, id ASC, expiration)

This gives a much better plan, with this IO:
Table 'applicationqueue_invoiceindexer'. Scan count 1, logical reads 35, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

I'm sure you can come up with an even better plan, but this was enough to get us flowing a bit better for now.

Another thing that might not have a big impact, but will show up as a warning in some DBA tools when analyzing the plan cache: an implicit conversion is used. This could be avoided if GETDATE() is changed to SYSDATETIMEOFFSET(), which I assume will require some changes elsewhere.

I performed my tests on a queue with around 1,000,000 entries on SQL Server 2012.

Proposal: Lease Based Implementation

Just wanted to run an idea past you: how do you feel about a lease-based implementation in SqlServer? The current implementation of SqlServer is basically:

  1. Start transaction
  2. Delete first message and return result
  3. Run handlers
  4. Commit transaction
    If the handlers run for a long period of time you can run into issues with the transaction timing out resulting in the work being wasted. There's also the issue with transactions then potentially being promoted to distributed transactions and the overhead they cause.

I was considering an implementation where we introduce a new column to the RebusMessage table: LeasedUntilUtc. This implementation would then run as:

  1. Update the first row with a null LeasedUntilUtc to have a LeasedUntilUtc of some time in the future (say, 45s)
  2. Run handlers
  3. ITransactionContext's Commit() would then be changed to delete the row we were dealing with.
  4. Abort() can be used to release the lease (UPDATE RebusMessage SET LeasedUntilUtc = NULL WHERE... )

The above is a bit of an oversimplification. There are two issues that need to be addressed:

  1. What happens if a lease expires? Eg. If a worker is processing Message A and the machines loses power. This can be rectified by making the fetch step (simplified): SELECT * FROM RebusMessage WHERE LeasedUntilUtc IS NULL OR LeasedUntilUtc < GETUTCDATE() (Realistically you'd add some buffer in here)
  2. Now you're faced with the issue of what happens if a lease is for 45s but a worker takes 1m to process it. This can be fixed by having the implementation of ITransactionContext start a timer for, say, 1/2 the lease time. When the timer fires we extend the lease. Then if the worker machine loses power it'll stop renewing the lease and another worker can then process the message. As long as the handler is operating it'll periodically renew the lease.

This lease-based approach is how, for instance, Azure message queues work, and I can see you even implement the auto-renew-lock logic for those queues (e.g. https://github.com/rebus-org/Rebus/wiki/Azure-Service-Bus-transport).

I'm yet to do any work on this... but I'm just wondering how you feel about this proposal? I don't think such an implementation would replace the current transactional semantics, which are useful especially in a local scenario, but would rather be an extension/alternate implementation. Our specific use case is that we process some large files that can take a while to process. The nature of this processing means it's not possible to break the process down into smaller steps (e.g. process 1 MB of the file at a time) due to internal state used in third-party, closed-source libraries.
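To make the renewal idea in point 2 concrete, the receive side could keep a background renewal loop running while the handlers execute, roughly like this (a sketch under the assumptions above; leaseInterval, messageId and RenewLeaseAsync are made-up placeholders):

var renewalCts = new CancellationTokenSource();

var renewalTask = Task.Run(async () =>
{
	try
	{
		while (true)
		{
			// renew at half the lease interval so the lease never expires while the handler is still running
			await Task.Delay(TimeSpan.FromTicks(leaseInterval.Ticks / 2), renewalCts.Token);

			// e.g. UPDATE RebusMessage SET LeasedUntilUtc = DATEADD(second, 45, GETUTCDATE()) WHERE id = @messageId
			await RenewLeaseAsync(messageId, leaseInterval);
		}
	}
	catch (OperationCanceledException)
	{
		// handling finished (or aborted) - stop renewing
	}
});

// ... run the message handlers ...

renewalCts.Cancel();   // stop the renewal loop
await renewalTask;     // then Commit() deletes the row, or Abort() sets LeasedUntilUtc back to NULL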

Deadlocks on production servers

TL; DR: our DBAs added the index below to fix a performance issue on our servers. Is this something worth adding to the Rebus.SqlServer?

IF NOT EXISTS (
    SELECT NULL
    FROM [sys].[indexes] AS [i]
    WHERE [i].[object_id] = OBJECT_ID( '[dbo].[SagaIndexes]' )
          AND [i].[name] = 'IX_SagaIndexes_saga_id'
)
BEGIN
    CREATE NONCLUSTERED INDEX [IX_SagaIndexes_saga_id]
    ON [dbo].[SagaIndexes] ( [saga_id] ASC )
    ON [PRIMARY];
END;

The Longer Version

We're starting to use Rebus against some SQL databases in our data centers to run some fairly simple sagas. We started to notice deadlock issues, which I investigated.

The deadlocks were happening during deletion attempts on rows in the SagaIndexes table. Our DBAs helped and suggested we add an index against the id field of the 'SagaIndexes' table. Their feedback was:

Deletes on the SagaIndexes table are based on the saga_id column, which has a foreign key relationship with the saga table but no index. Therefore, to find the rows that needed to be deleted requires the table to be read, holding the necessary range locks. I would typically suggest always having an index on a column this is part of a foreign key relationship.

This makes sense, but the strange thing was that I could not reproduce the deadlocks on other databases. I tried the local SQL instance on my development laptop and a very small SQL Azure instance (General Purpose: Serverless, Gen5, 1 vCore) that costs less than a pint of beer in London to run for a month.

The small console apps I used to test the databases are here. We loaded the system with 50 messages first, then started the bus to process the messages. This quickly gave us deadlock errors on our servers.

When the index was added, these deadlocks went away and throughput of messages on the test bus went from 1 message every 3 seconds to 2 messages per second.

Exception with connection string in 5.x.

I upgraded to Rebus 5.0.1 and all dependencies (including Castle), with Rebus.SQLServer 5.0.2 and now get an exception during bus initialization. Rolling back Rebus.SQLServer to 4.0.0 fixes the issue.

I have the following in the Rebus initialization:

.Subscriptions(s => s.StoreInSqlServer("ATSSubscriptions", "subscriptions"))

Along with a connection string named ATSSubscriptions in the app.config. This all worked fine with 4.x, but now when it starts I get the following exception on the line above:

Rebus.Injection.ResolutionException
HResult=0x80131500
Message=Could not resolve Rebus.Subscriptions.ISubscriptionStorage with decorator depth 0 - registrations: Rebus.Injection.Injectionist+Handler
Source=Rebus
StackTrace:
at Rebus.Injection.Injectionist.ResolutionContext.GetTService
at Rebus.Config.RebusConfigurer.<>c.b__12_27(IResolutionContext c)
at Rebus.Injection.Injectionist.Resolver1.InvokeResolver(IResolutionContext context) at Rebus.Injection.Injectionist.ResolutionContext.Get[TService]() at Rebus.Config.RebusConfigurer.<>c.<Start>b__12_20(IResolutionContext c) at Rebus.Injection.Injectionist.Resolver1.InvokeResolver(IResolutionContext context)
at Rebus.Injection.Injectionist.ResolutionContext.GetTService
at Rebus.Config.RebusConfigurer.<>c.b__12_21(IResolutionContext c)
at Rebus.Injection.Injectionist.Resolver1.InvokeResolver(IResolutionContext context) at Rebus.Injection.Injectionist.ResolutionContext.Get[TService]() at Rebus.Config.RebusConfigurer.<>c.<Start>b__12_10(IResolutionContext c) at Rebus.Injection.Injectionist.Resolver1.InvokeResolver(IResolutionContext context)
at Rebus.Injection.Injectionist.ResolutionContext.GetTService
at Rebus.Config.RebusConfigurer.<>c.b__12_12(IResolutionContext c)
at Rebus.Injection.Injectionist.Resolver1.InvokeResolver(IResolutionContext context) at Rebus.Injection.Injectionist.ResolutionContext.Get[TService]() at Rebus.Config.RebusConfigurer.<>c__DisplayClass12_0.<Start>b__25(IResolutionContext c) at Rebus.Injection.Injectionist.Resolver1.InvokeResolver(IResolutionContext context)
at Rebus.Injection.Injectionist.ResolutionContext.GetTService
at Rebus.Config.RebusConfigurer.<>c__DisplayClass12_0.b__26(IResolutionContext c)
at Rebus.Injection.Injectionist.Resolver1.InvokeResolver(IResolutionContext context) at Rebus.Injection.Injectionist.ResolutionContext.Get[TService]() at Rebus.Injection.Injectionist.Get[TService]() at Rebus.Config.RebusConfigurer.Start() at ATSPublisherService.MessageProcessor.Start() in C:\Users\donig\MessageProcessor.cs:line 53 at ATSPublisherService.Program.<>c.<Main>b__0_3(MessageProcessor msg) in C:\Users\donig\ATSPublisherService\Program.cs:line 15 at Topshelf.ServiceConfiguratorExtensions.<>c__DisplayClass2_01.b__0(T service, HostControl control)
at Topshelf.Builders.DelegateServiceBuilder`1.DelegateServiceHandle.Start(HostControl hostControl)
at Topshelf.Hosts.ConsoleRunHost.Run()

Inner Exception 1:
ArgumentException: Keyword not supported: 'atssubscriptions'.

Queue Ordering

We noticed recently that the queries used by both the lease-based and the regular SQL transport filter out not-yet-visible messages (good) but then ignore the visibility value when ordering, and instead only use the insertion order (see: https://github.com/rebus-org/Rebus.SqlServer/blob/master/Rebus.SqlServer/SqlServer/Transport/SqlServerTransport.cs#L308). Imagine a snapshot of the queue that looks like the table below (note: SentAt isn't a real column, it's just for illustration purposes; imagine it's the time at which the message was inserted into the queue).

Id    Visible  Priority  SentAt
1000  10:00    0         8:00
1500  9:30     0         8:59
2000  9:00     0         9:00

If we imagine the system has been under heavy load and is now, at 10:05, only just getting to this point in the queue, the messages will be processed as 1000, 1500, and finally 2000. However, the Visible property implies that the messages should be processed as (2000, 1500, 1000). This is because the current ordering is (Id, Priority); however, I think it should actually be (Visible, Priority, Id).

Any thoughts here? We're not expecting, or relying on, strict queue ordering; however, we are seeing issues under heavy load where long-deferred messages jump ahead of messages with the same priority that, under low load, would have been processed long ago. E.g. if we were consuming messages faster than they were produced, this queue would have been processed as (2000, 1500, 1000).

Non-optimal index for receive

Searches for bus messages in SqlServerTransport are done with
WHERE M.[visible] < sysdatetimeoffset() AND M.[expiration] > sysdatetimeoffset() ORDER BY [priority] DESC, [visible] ASC, [id] ASC
and the index is created as
[priority] ASC, [visible] ASC, [expiration] ASC, [id] ASC

the following index gives about 70% higher throughput in TestSqlTransportReceivePerformance (10,000 messages) with my setup (all run on my local development machine):
[priority] DESC, [visible] ASC, [id] ASC, [expiration] ASC
(I have not concluded whether 'expiration' is needed.)

I would suggest a code change :)

SqlServerDataBusStorage share same DB connection with Transport

I'm trying out the Sql server data bus and started hitting SqlExceptions such as "The transaction operation cannot be performed because there are pending requests working on this transaction." from our message handlers.

This is because we configured the Data bus to use the same DB connection that the transport is using. I can see within SqlServerDataBusStorage that it completes the connection in several of the method calls which seems to indicate that this won't work the way we expect.

Main reasoning for this is so that we get atomic commit for messages and attachments alike.

Is this implemented in a different way? Does having two DB connections open, one for transport, one for data bus lead to potential data inconsistencies?

Support for schemas

Hi!

Just discovered Rebus and I'm loving it! In my company, we use schemas for every set of tables. For the queue table, we wanted to use something like "msg.QueueTable", but when I set this as the table name, the table generated is "dbo.msg.QueueTable".

Is there some way I can achieve this, or should it be implemented?

Best regards

ExpiredMessagesCleanup runs even for a OneWayClient

This is possibly more of a design decision, but we have some roles which run as a one-way client:

RebusConfigurer config;
config.Transport(x => {
  x.Register(res => new SqlServerLeaseTransport(...);
  OneWayClientBackdoor.ConfigureOnewayClient(x);
});

We've noticed in our logs that even in this scenario the transport still runs its internal ExpiredMessagesCleanup task. To me it seems that if the transport is one-way, it should only ever put messages into the queue, not do any other work.

Why does the SqlServer transport configuration set the TimeoutManager to 'disabled'?

I'm trying to understand this line:

configurer.OtherService<ITimeoutManager>().Register(c => new DisabledTimeoutManager());

The consequence is that when I configure a SqlServer transport and then configure SqlServer timeouts using the provided config extension method, I get an exception because there is already a primary timeout manager set.

Async deadlock in EnsureTablesAreCreated

In SqlServerSagaStorage.cs we have this little piece of code:

void EnsureTablesAreCreated()
{
  EnsureTablesAreCreatedAsync().Wait();
}

Now we have a small WPF application for initial configuring of our main application. This config app is started as the first app ever on the database and thus becomes the app that creates the database tables for Rebus indirectly through EnsureTablesAreCreated.

Unfortunately, WPF + synchronous Wait() results in a deadlock, which means our little config app now deadlocks without ever creating the SQL Server data tables (since it deadlocks before the changes are committed to the database).
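For what it's worth, a sketch of the kind of change that would avoid blocking on the captured WPF synchronization context (AsyncHelpers.RunSync already exists in Rebus.SqlServer and shows up in stack traces elsewhere on this page; it pumps the continuations on a dedicated context instead of calling Wait()):

void EnsureTablesAreCreated()
{
	// .Wait() deadlocks when the continuation needs the UI SynchronizationContext (WPF/WinForms);
	// running the async method to completion on a custom context avoids that
	AsyncHelpers.RunSync(() => EnsureTablesAreCreatedAsync());
}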

Faulty DbConnectionFactoryProvider?

I can't see that the documentation and implementation of https://github.com/rebus-org/Rebus.SqlServer/blob/master/Rebus.SqlServer/SqlServer/DbConnectionFactoryProvider.cs are in sync.

  • It is DbConnectionProvider that ensures MARS, not DbConnectionFactoryProvider.
  • The IsolationLevel property is not used, hence 'Will use System.Data.IsolationLevel.ReadCommitted by default on transactions, unless another isolation level is set with the "IsolationLevel" property' is not correct.

Is this class used under these assumptions in the code and should it be improved, or are these just leftovers from a copy of DbConnectionProvider that should be removed?

UpdateLease exceptions produce fail-fast exceptions

Hello,

At this line of code, the UpdateLease call can fail, especially when the bus is overloaded (database timeouts, for example: we run a lot of workers - 500 - that poll the same queue).
But when it fails, it produces a fail-fast exception...
Have you considered using Polly, for example, to manage retries around SQL calls?

await UpdateLease(_connectionProvider, _tableName, _messageId, _leaseInterval).ConfigureAwait(false);
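For illustration, a Polly-based retry around that call could look something like this (a sketch only; the retry counts and delays are arbitrary, and whether retrying here is the right approach is exactly the question above):

var retryPolicy = Policy
	.Handle<SqlException>()
	.WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))); // 2s, 4s, 8s

await retryPolicy.ExecuteAsync(() =>
	UpdateLease(_connectionProvider, _tableName, _messageId, _leaseInterval));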

Exceptions when Sending messages

Hi,

I have set up the following system:
var services = new ServiceCollection(); // Microsoft.Extensions.DependencyInjection

services.AddRebus(configure => configure
                    .Options(o =>
                        {
                            o.SimpleRetryStrategy(errorQueueAddress: RebusConfig.ErrorQueueAddress);
                            o.SetMaxParallelism(1);
                        }
                    )
                    .Transport(t => t.UseSqlServer(connectionString, RebusConfig.InputQueueNameWorker))
                    .Routing(r => r.TypeBased().Map<SendNotificationMessage>(RebusConfig.DestinationAddressWeb))
                    .Subscriptions(s => s.StoreInSqlServer(connectionString, RebusConfig.Subscription, true))
                );

After sending a few messages with IBus.Send, I get one of the following exceptions:

InnerException: System.InvalidOperationException: BeginExecuteNonQuery requires an open and available Connection. The connection's current state is open.
   at System.Data.SqlClient.SqlConnection.GetOpenTdsConnection(String method)
   at System.Data.SqlClient.SqlConnection.ValidateConnectionForExecute(String method, SqlCommand command)
   at System.Data.SqlClient.SqlCommand.ValidateCommand(Boolean async, String method)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite, String methodName)
   at System.Data.SqlClient.SqlCommand.BeginExecuteNonQuery(AsyncCallback callback, Object stateObject)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncImpl(Func`3 beginMethod, Func`2 endFunction, Action`1 endAction, Object state, TaskCreationOptions creationOptions)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQueryAsync(CancellationToken cancellationToken)
--- End of stack trace from previous location where exception was thrown ---
   at Rebus.SqlServer.Transport.SqlServerTransport.InnerSend(String destinationAddress, TransportMessage message, IDbConnection connection)
   at Rebus.SqlServer.Transport.SqlServerTransport.Send(String destinationAddress, TransportMessage message, ITransactionContext context)

OR

InnerException: System.InvalidOperationException: Invalid operation. The connection is closed.
   at System.Data.SqlClient.SqlConnection.GetOpenTdsConnection()
   at System.Data.SqlClient.SqlCommand.WaitForAsyncResults(IAsyncResult asyncResult)
   at System.Data.SqlClient.SqlCommand.EndExecuteNonQueryInternal(IAsyncResult asyncResult)
   at System.Data.SqlClient.SqlCommand.EndExecuteNonQuery(IAsyncResult asyncResult)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
   at Rebus.SqlServer.Transport.SqlServerTransport.InnerSend(String destinationAddress, TransportMessage message, IDbConnection connection)
   at Rebus.SqlServer.Transport.SqlServerTransport.Send(String destinationAddress, TransportMessage message, ITransactionContext context)

Update to v4.0.0 storage not working anymore

After upgrading from v3 to v4, SQL Server storage is not working anymore.

Configure
	.With(new AutofacContainerAdapter(container))
	.Subscriptions(s => s.StoreInSqlServer(settings.QueueRoutingConnectionString,settings.QueueRoutingTableName, true))
	.Serialization(s => s.UseJil(Jil.Options.IncludeInherited))
	.Logging(l => l.NLog())
	.Transport(t => t.UseMsmq(settings.EventsQueueName))
	.Routing(r => r.TypeBased().MapAssemblyOf<UserCreated>(settings.InputQueueName))
	.Options(o => o.SimpleRetryStrategy(settings.ErrorQueueName, 1, true))
	.Options(o => o.SetNumberOfWorkers(2));

While debugging it, the subscription storage is correctly attached to the configurer, but it seems it is not configuring the bus correctly.

Conflicting behaviour and hangs with Entity Framework Core SqlServer

I am seeing very strange behaviour when using Rebus.SqlServer in a very specific combination with Entity Framework Core. As soon as I add the Entity Framework Core SqlServer NuGet package, without even using it anywhere in my code, I see a continuous stream of thread-exited messages in the Visual Studio Output window when debugging:

The thread 0x3be6 has exited with code 0 (0x0).
The thread 15335 has exited with code 0 (0x0).
The thread 0x3be7 has exited with code 0 (0x0).
The thread 15337 has exited with code 0 (0x0).
etc.

After a while the application just hangs completely, probably related to the above.

A couple of observations I made so far:

  • This behaviour appears when I run the application in Docker (Linux containers), but does not appear when I run the application outside of Docker (have not yet tried Windows containers)
  • When the Microsoft.EntityFrameworkCore.SqlServer package is removed from the project, the problem goes away
  • The issue only appears when Rebus is configured to use SqlServer in 'two-way' mode. When it is used in 'one-way' mode the problem does not occur

I have created a minimal reproduction model and uploaded it here.

Just open the solution in Visual Studio and hit F5 to see the behaviour. Note that the solution by default uses Docker to run.

High polling frequency due to parallelism configuration

It seems that the SQL transport polling frequency is the max parallelism (SetMaxParallelism) per backoff interval, rather than the number of workers (SetNumberOfWorkers) per interval. The default parallelism of 5 and backoff interval of 250 ms result in 20 polls per second. Is there a way to reduce the polling frequency without sacrificing parallelism or latency?

[Question] Deleting audits past a threshold

Is there any way to automatically clean up audit messages?
I specify auditing with following method when configuring rebus:

options.EnableMessageAuditing("audit-queue-name");

My problem is that I run a process to browse some audit messages. I only care about the most recent ones; everything older than, say, 7-14 days is obsolete to me and I would like to have it removed. Of course I know that I can clean up audits myself, but I'm wondering about built-in ways :)
