
azure / go-shuttle


go-shuttle is a light wrapper around the Azure Service Bus SDK for Go, aimed at providing an API more in line with service implementation in a pub-sub context.

License: MIT License

Languages: Go 94.29%, Shell 4.13%, Makefile 1.43%, Dockerfile 0.14%
Topics: pubsub, servicebus, azure

go-shuttle's Introduction

go-shuttle


go-shuttle is a wrapper around the Azure Service Bus Go SDK that facilitates the implementation of a pub-sub pattern on Azure using Service Bus.

NOTE: This library is in early development and should be considered experimental. The API is still moving and can change. We do have breaking changes in v0.*. Use at your own risk.

Conventions & Assumptions

We assume that both the publisher and the listener use go-shuttle.

Processor

The processor handles the message pump and feeds your message handler. It allows concurrent message handling and provides a message-handler middleware pipeline to compose message handling behavior.

Processor Options

MaxConcurrency and ReceiveInterval configure the concurrent message handling for the processor.

StartMaxAttempt and StartRetryDelayStrategy configure the retry behavior for the processor.

// ProcessorOptions configures the processor
// MaxConcurrency defaults to 1. Leaving MaxConcurrency unset, or setting it to 0 or a negative value, falls back to the default.
// ReceiveInterval defaults to 2 seconds if not set.
// StartMaxAttempt defaults to 1 if not set (no retries). Leaving StartMaxAttempt unset, or setting it to a non-positive value, falls back to the default.
// StartRetryDelayStrategy defaults to a fixed 5-second delay if not set.
type ProcessorOptions struct {
    MaxConcurrency  int
    ReceiveInterval *time.Duration
    
    StartMaxAttempt         int
    StartRetryDelayStrategy RetryDelayStrategy
}
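
For orientation, here is a minimal sketch of wiring these options into a processor. It assumes the v2 shapes of NewProcessor, Start, and MessageSettler; check the linked examples below for the authoritative usage.

// Sketch: receiver from the azservicebus SDK, wrapped in a go-shuttle
// processor. Namespace/topic/subscription names are placeholders.
client, _ := azservicebus.NewClient("myns.servicebus.windows.net", credential, nil)
receiver, _ := client.NewReceiverForSubscription("my-topic", "my-subscription", nil)

var handler shuttle.HandlerFunc = func(ctx context.Context, settler shuttle.MessageSettler, message *azservicebus.ReceivedMessage) {
    // process the message, then settle it explicitly
    if err := settler.CompleteMessage(ctx, message, nil); err != nil {
        // settlement failed; the lock eventually expires and the message is redelivered
    }
}

interval := 2 * time.Second
p := shuttle.NewProcessor(receiver, handler, &shuttle.ProcessorOptions{
    MaxConcurrency:  10,
    ReceiveInterval: &interval,
    StartMaxAttempt: 5, // retry Start up to 5 times
})
err := p.Start(ctx) // assumed to block until ctx is canceled or the receiver fails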

MultiProcessor

NewMultiProcessor takes in a list of receivers and a message handler. It creates a processor for each receiver and starts them concurrently.

see Processor and MultiProcessor examples
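
The shape, roughly (the exact signature is an assumption; the linked examples are authoritative):

// Sketch only: one handler shared across several receivers; each receiver
// gets its own processor, and all are started concurrently.
mp := shuttle.NewMultiProcessor([]*azservicebus.Receiver{receiver1, receiver2}, handler, &shuttle.ProcessorOptions{
    MaxConcurrency: 10,
})
err := mp.Start(ctx)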

Middlewares

go-shuttle provides a few middlewares to simplify the implementation of the message handler in the application code.

SettlementHandler

Forces the application handler implementation to return a Settlement. This prevents 2 common mistakes:

  • Exiting the handler without settling the message.
  • Settling the message but not exiting the handler (forgetting to return after calling abandon, for example).

see SettlementHandler example
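
A short sketch of the resulting handler shape (the settlement type and constructor names are assumed from the v2 API; the linked example shows the exact usage):

// The wrapped handler must return a Settlement, so every code path both
// settles the message and exits the handler. process is a placeholder.
handler := shuttle.NewSettlementHandler(nil,
    func(ctx context.Context, message *azservicebus.ReceivedMessage) shuttle.Settlement {
        if err := process(message); err != nil {
            return &shuttle.Abandon{} // settle and exit in one statement
        }
        return &shuttle.Complete{}
    })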

ManagedSettlingHandler

Allows your handler implementation to just return an error. The ManagedSettlingHandler settles the message based on your handler's return value:

  • nil → the message is completed.
  • error → the message is abandoned or dead-lettered, depending on your configuration of the handler.

see ManagedSettlingHandler
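
Roughly (constructor and option names assumed; see the linked example):

// The handler only returns an error; the middleware settles the message:
// nil completes it, non-nil abandons or dead-letters it per the options.
handler := shuttle.NewManagedSettlingHandler(&shuttle.ManagedSettlingOptions{},
    func(ctx context.Context, message *azservicebus.ReceivedMessage) error {
        return process(message) // process is a placeholder
    })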

Enable automatic message lock renewal

This middleware renews the lock on each message at the configured interval (30 seconds below) until the message is Completed or Abandoned.

renewInterval := 30 * time.Second
shuttle.NewRenewLockHandler(&shuttle.LockRenewalOptions{Interval: &renewInterval}, handler)

see setup in Processor example

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

go-shuttle's People

Contributors

charliedmcb, coip, dependabot[bot], erinborders, imiller31, jaiveerk, karenychen, keikumata, microsoftopensource, minhng22, nacho692, paulgmiller, pdaru, serbrech, wenxuan0923, xhl873


go-shuttle's Issues

Provide a way to disable entities auto-creation

go-shuttle tries to simplify the setup of the client application by providing a minimal, discoverable, and extensible API to start publishing or receiving messages.

To do so, it auto-creates the topics and subscriptions when creating a Publisher/Listener.

I like this to be the default behavior, but it should be possible to disable entity operations if needed.

This would, for example, allow reducing the required rights of the user that runs the receiver or publisher.

Add per message context

Currently, the context is shared across messages.
We need to create a new context for each message being handled, to be able to cancel them independently on failures.
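
The shape of the fix, as an illustration (not the library's code; messages, handler, and settler stand in for the processor's internals):

// Derive a cancellable child context per message so one failing message
// can be canceled without tearing down the others.
for _, msg := range messages {
    msgCtx, cancel := context.WithCancel(ctx)
    go func(m *azservicebus.ReceivedMessage) {
        defer cancel()
        handler.Handle(msgCtx, settler, m)
    }(msg)
}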

Race condition in /v2/lockrenewer.go causes unnecessary increment of MessageLockRenewedFailure counter

In NewRenewLockHandler, when the nested handler exits, we call plr.stop. This does plr.alive.Store(false) and plr.cancelMessageCtx() to cancel the (child) message context.

func (plr *peekLockRenewer) stop(ctx context.Context) {
    plr.alive.Store(false)
    // don't send the stop signal to the loop if there is already one in the channel
    if len(plr.stopped) == 0 {
        plr.stopped <- struct{}{}
    }
    if plr.cancelMessageCtxOnStop {
        log(ctx, "canceling message context")
        plr.cancelMessageCtx()
    }
    log(ctx, "stopped periodic renewal")
}

In startPeriodicRenewal, we normally exit if plr.alive is false; otherwise we call RenewMessageLock. But it is possible to pass the plr.alive check and then fail RenewMessageLock because the context has already been canceled.

if !plr.alive.Load() {
    return
}
log(ctx, "renewing lock")
count++
err := plr.lockRenewer.RenewMessageLock(ctx, message, nil)
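
One possible mitigation, sketched (not the current library code): treat cancellation of the message context as a normal stop rather than a renewal failure.

err := plr.lockRenewer.RenewMessageLock(ctx, message, nil)
if err != nil {
    // The handler may have just exited and canceled the message context;
    // don't count that as a real renewal failure.
    if errors.Is(err, context.Canceled) || !plr.alive.Load() {
        return
    }
    // ...only here increment the MessageLockRenewedFailure counter...
}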

UT Race Failing with fakeSBRenewLockSettler

โฏ go test -race ./...
==================
WARNING: DATA RACE
Write at 0x00c000158e18 by goroutine 70:
  github.com/Azure/go-shuttle/v2_test.(*fakeSBRenewLockSettler).RenewMessageLock()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:34 +0x12c
  github.com/Azure/go-shuttle/v2.(*peekLockRenewer).startPeriodicRenewal()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer.go:128 +0x2e8
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.NewRenewLockHandler.func4.1()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer.go:61 +0x58

Previous read at 0x00c000158e18 by goroutine 67:
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:96 +0x54
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:398 +0x1f4
  github.com/onsi/gomega/internal.(*AsyncAssertion).Should()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:145 +0xa8
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:101 +0x7fc
  testing.tRunner()
      /usr/local/opt/go/libexec/src/testing/testing.go:1595 +0x1b0
  testing.(*T).Run.func1()
      /usr/local/opt/go/libexec/src/testing/testing.go:1648 +0x40

Goroutine 70 (running) created at:
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.NewRenewLockHandler.func4()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer.go:61 +0x370
  github.com/Azure/go-shuttle/v2.HandlerFunc.Handle()
      /Users/karenchen/go/src/go-shuttle/v2/processor.go:49 +0x58
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func5()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:90 +0x70

Goroutine 67 (running) created at:
  testing.(*T).Run()
      /usr/local/opt/go/libexec/src/testing/testing.go:1648 +0x5e8
  testing.runTests.func1()
      /usr/local/opt/go/libexec/src/testing/testing.go:2054 +0x80
  testing.tRunner()
      /usr/local/opt/go/libexec/src/testing/testing.go:1595 +0x1b0
  testing.runTests()
      /usr/local/opt/go/libexec/src/testing/testing.go:2052 +0x6e4
  testing.(*M).Run()
      /usr/local/opt/go/libexec/src/testing/testing.go:1925 +0x9ec
  main.main()
      _testmain.go:143 +0x294
==================
==================
WARNING: DATA RACE
Read at 0x00c00022a540 by goroutine 67:
  runtime.evacuate_fast32()
      /usr/local/opt/go/libexec/src/runtime/map_fast32.go:374 +0x38c
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:96 +0x74
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:398 +0x1f4
  github.com/onsi/gomega/internal.(*AsyncAssertion).Should()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:145 +0xa8
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:101 +0x7fc
  testing.tRunner()
      /usr/local/opt/go/libexec/src/testing/testing.go:1595 +0x1b0
  testing.(*T).Run.func1()
      /usr/local/opt/go/libexec/src/testing/testing.go:1648 +0x40

Previous write at 0x00c00022a540 by goroutine 70:
  runtime.mapaccess2_fast64()
      /usr/local/opt/go/libexec/src/runtime/map_fast64.go:53 +0x1cc
  github.com/Azure/go-shuttle/v2_test.(*fakeSBRenewLockSettler).RenewMessageLock()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:34 +0xe4
  github.com/Azure/go-shuttle/v2.(*peekLockRenewer).startPeriodicRenewal()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer.go:128 +0x2e8
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.NewRenewLockHandler.func4.1()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer.go:61 +0x58

Goroutine 67 (running) created at:
  testing.(*T).Run()
      /usr/local/opt/go/libexec/src/testing/testing.go:1648 +0x5e8
  testing.runTests.func1()
      /usr/local/opt/go/libexec/src/testing/testing.go:2054 +0x80
  testing.tRunner()
      /usr/local/opt/go/libexec/src/testing/testing.go:1595 +0x1b0
  testing.runTests()
      /usr/local/opt/go/libexec/src/testing/testing.go:2052 +0x6e4
  testing.(*M).Run()
      /usr/local/opt/go/libexec/src/testing/testing.go:1925 +0x9ec
  main.main()
      _testmain.go:143 +0x294

Goroutine 70 (running) created at:
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.NewRenewLockHandler.func4()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer.go:61 +0x370
  github.com/Azure/go-shuttle/v2.HandlerFunc.Handle()
      /Users/karenchen/go/src/go-shuttle/v2/processor.go:49 +0x58
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func5()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:90 +0x70
==================
==================
WARNING: DATA RACE
Read at 0x00c0001b20d8 by goroutine 67:
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x158
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:540 +0xadc
  github.com/onsi/gomega/internal.(*Assertion).ToNot()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/assertion.go:68 +0xe4
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func2()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:97 +0x1fc
  runtime.call16()
      /usr/local/opt/go/libexec/src/runtime/asm_arm64.s:478 +0x74
  reflect.Value.Call()
      /usr/local/opt/go/libexec/src/reflect/value.go:380 +0x90
  github.com/onsi/gomega/internal.(*AsyncAssertion).buildActualPoller.func3()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:325 +0x178
  github.com/onsi/gomega/internal.(*AsyncAssertion).match()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:398 +0x1f4
  github.com/onsi/gomega/internal.(*AsyncAssertion).Should()
      /Users/karenchen/go/pkg/mod/github.com/onsi/[email protected]/internal/async_assertion.go:145 +0xa8
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:101 +0x7fc
  testing.tRunner()
      /usr/local/opt/go/libexec/src/testing/testing.go:1595 +0x1b0
  testing.(*T).Run.func1()
      /usr/local/opt/go/libexec/src/testing/testing.go:1648 +0x40

Previous write at 0x00c0001b20d8 by goroutine 70:
  github.com/Azure/go-shuttle/v2_test.(*fakeSBRenewLockSettler).RenewMessageLock()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:34 +0xf0
  github.com/Azure/go-shuttle/v2.(*peekLockRenewer).startPeriodicRenewal()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer.go:128 +0x2e8
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.NewRenewLockHandler.func4.1()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer.go:61 +0x58

Goroutine 67 (running) created at:
  testing.(*T).Run()
      /usr/local/opt/go/libexec/src/testing/testing.go:1648 +0x5e8
  testing.runTests.func1()
      /usr/local/opt/go/libexec/src/testing/testing.go:2054 +0x80
  testing.tRunner()
      /usr/local/opt/go/libexec/src/testing/testing.go:1595 +0x1b0
  testing.runTests()
      /usr/local/opt/go/libexec/src/testing/testing.go:2052 +0x6e4
  testing.(*M).Run()
      /usr/local/opt/go/libexec/src/testing/testing.go:1925 +0x9ec
  main.main()
      _testmain.go:143 +0x294

Goroutine 70 (running) created at:
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.NewRenewLockHandler.func4()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer.go:61 +0x370
  github.com/Azure/go-shuttle/v2.HandlerFunc.Handle()
      /Users/karenchen/go/src/go-shuttle/v2/processor.go:49 +0x58
  github.com/Azure/go-shuttle/v2_test.Test_RenewalHandlerStayIndependentPerMessage.func5()
      /Users/karenchen/go/src/go-shuttle/v2/lockrenewer_test.go:90 +0x70
==================
--- FAIL: Test_RenewalHandlerStayIndependentPerMessage (0.12s)
    testing.go:1465: race detected during execution of test
FAIL
FAIL    github.com/Azure/go-shuttle/v2  4.673s
ok      github.com/Azure/go-shuttle/v2/metrics  (cached)
ok      github.com/Azure/go-shuttle/v2/metrics/processor        (cached)
ok      github.com/Azure/go-shuttle/v2/metrics/sender   (cached)
ok      github.com/Azure/go-shuttle/v2/otel     (cached)
FAIL

Expose ScheduleAt publisher option

In addition to "SetMessageDelay", can we expose a "ScheduleAt" option in publisheropts (https://github.com/Azure/go-shuttle/blob/main/common/options/publisheropts/options.go)?

We want to schedule a message at a future timestamp. With this existing "SetMessageDelay" option, we need to calculate the the time offset first, and then inside the "SetMessageDelay", it adds this offset back to time.Now() to get the future timestamp. If we can expose the ScheduleAt option, we don't need this double conversion.
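
For illustration, the proposed option could mirror SetMessageDelay but take an absolute time (this sketch assumes the option sets SystemProperties.ScheduledEnqueueTime, as SetMessageDelay does today):

// Hypothetical publisher option, not part of the current API.
func ScheduleAt(t time.Time) func(msg *servicebus.Message) error {
    return func(msg *servicebus.Message) error {
        if msg.SystemProperties == nil {
            msg.SystemProperties = &servicebus.SystemProperties{}
        }
        msg.SystemProperties.ScheduledEnqueueTime = &t
        return nil
    }
}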

Handle errors on listener/publisher creation

In real-world applications, there is more than one replica of the service listening for or publishing messages.

Currently, when concurrent PUT requests are sent to Azure to create the same topic, the first request succeeds, but the subsequent ones return a 409 Conflict HTTP response until the topic is successfully created.

We also hit this problem when starting all integration tests in parallel, which forced us to select which ones can run concurrently.

The library should reasonably retry these requests to allow multiple replicas of a service to start simultaneously.

Suggestion: add simple retries on management API requests
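
As a sketch of the suggestion (ensureTopic and isConflict are hypothetical helpers, not part of the library):

// Retry entity creation while another replica races us on the same PUT.
var lastErr error
for attempt := 1; attempt <= 5; attempt++ {
    if _, lastErr = ensureTopic(ctx, ns, topicName); lastErr == nil {
        return nil
    }
    if !isConflict(lastErr) { // only retry the 409 concurrent-creation case
        return lastErr
    }
    time.Sleep(time.Duration(attempt) * time.Second) // simple linear backoff
}
return lastErr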

Handle Message Expiration (TTL)

Messages that have been on the queue longer than the defined TTL reach expiration. They cannot be completed anymore.
When using RetryLater, we could keep them in memory without noticing that the message has expired.
This has to be handled in the listener, so that we stop delaying the in-memory message handling once the message expiration has been reached.

i.e., periodically check that time.Now() < msg.SystemProperties.EnqueuedTime + message.TTL and abandon the message once that no longer holds.
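
That check could look like this (field names such as SystemProperties.EnqueuedTime and TTL are from the v1 azure-service-bus-go Message; treat as illustrative):

// Abandon instead of continuing to delay a message whose TTL has passed.
expiresAt := msg.SystemProperties.EnqueuedTime.Add(*msg.TTL)
if !time.Now().Before(expiresAt) {
    return msg.Abandon(ctx)
}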

handler ctor should panic if next handler is nil

The servicebus receiver closes the connection when the handler returns an error.
The concurrent handler breaks that idiom and ignores downstream errors, assuming handlers don't return an error on message handling.

We do return an error when the handler is not well configured (next == nil).

This error occurs only when the handler is invoked.
The API to construct a handler middleware pipeline would be unusable if the ctor were to return an error.

If you misconfigure a handler by providing nil to the ctor, there is no way to recover; we might as well panic.
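
The proposed guard, sketched on a hypothetical middleware constructor:

// Panic at construction time instead of erroring at message-handling time.
func NewMyHandler(next Handler) HandlerFunc {
    if next == nil {
        panic("go-shuttle: NewMyHandler requires a non-nil next handler")
    }
    return func(ctx context.Context, settler MessageSettler, message *azservicebus.ReceivedMessage) {
        next.Handle(ctx, settler, message)
    }
}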

Support for Queues (alongside topics)

Unsure if topics would be plug-n-play for the use cases I have in mind, but I currently depend on the queue servicebus primitive and would like to move to a pkg like this instead of the vanilla azure-service-bus-go lib.

Topics might complicate it, might not. I'm uninitiated beyond the perspective of topics being a means of fanning out to N subscribers (the services I have in mind are currently point-to-point, only consuming queues in the azure-service-bus-go lib).

Appreciate any thoughts, TIA

Ability to ignore certain errors from being counted in IncSendMessageFailureCount()

i.e., any context error that is not related to the servicebus client/server.

go-shuttle/v2/sender.go (lines 81 to 92 in d03264e):

select {
case <-ctx.Done():
    sender.Metric.IncSendMessageFailureCount()
    return fmt.Errorf("failed to send message: %w", ctx.Err())
case err := <-errChan:
    if err == nil {
        sender.Metric.IncSendMessageSuccessCount()
    } else {
        sender.Metric.IncSendMessageFailureCount()
    }
    return err
}

My initial thought is to add an ErrorFilter func(error) bool to type SenderOptions struct; we can run any error through that filter, and only call IncSendMessageFailureCount() if the filter returns false.

go-shuttle/v2/sender.go (lines 35 to 45 in d03264e):

type SenderOptions struct {
    // Marshaller will be used to marshall the messageBody to the azservicebus.Message Body property
    // defaults to DefaultJSONMarshaller
    Marshaller Marshaller
    // EnableTracingPropagation automatically applies WithTracePropagation option on all message sent through this sender
    EnableTracingPropagation bool
    // SendTimeout is the timeout value used on the context that sends messages
    // Defaults to 30 seconds if not set or 0
    // Disabled when set to a negative value
    SendTimeout time.Duration
}
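
Sketched against the struct above (ErrorFilter is the proposed, hypothetical field, and sender.options stands in for however the sender stores its options):

// On SenderOptions:
//     // ErrorFilter returns true for errors that should not be counted
//     // as send failures (e.g. a canceled caller context).
//     ErrorFilter func(error) bool
//
// And in the send path shown earlier:
if err != nil {
    if sender.options.ErrorFilter == nil || !sender.options.ErrorFilter(err) {
        sender.Metric.IncSendMessageFailureCount()
    }
}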

Setup CI

  • Build
  • Lint
  • Run unit tests
  • Code metrics
  • Run integration tests
  • ReleaseNotes generation on labels

Parallel message handling does not work

Which version of the SDK was used?

github.com/Azure/go-shuttle v0.5.5
https://github.com/Azure/go-shuttle/tree/8da14c485b239d55fa6b75dd41704e7d3bd0e55f

Which platform are you using? (ex: Windows, Linux, Debian)

Linux

What problem was encountered?

As things are currently set up in go-shuttle, I am only able to process one message from the service bus at a time, and I don't get to move on to the next message until the first is completed. I have been unable to find any exposed feature that would allow me to handle concurrent messages.

From digging through the code base, I found a few locations in the dependency azure-service-bus-go that I believe are responsible for this behavior. With modifications to go-shuttle, I expect that exposing a setting, a new AsyncListener, or something similar could allow concurrent handling of messages:

  1. Where handleMessage is called, we get a blocking call until the handler completes the message:
    https://github.com/Azure/azure-service-bus-go/blob/705d23958eb9c000582c03ba30413dc7e35eb25d/receiver.go#L235
  2. PeekLock mode is the default and is not exposed to the user. From my understanding this means that the message will not be removed from the queue until it is completed, so we would not even be able to create additional Listeners to receive more messages, as we would still be blocked until the locked message was completed and removed (I expect this should not be changed, and most likely does not want to be exposed either):
    https://github.com/Azure/azure-service-bus-go/blob/705d23958eb9c000582c03ba30413dc7e35eb25d/receiver.go#L108
  3. The prefetchCount/linkCredit is set to 1 by default, and there is no setting exposed from go-shuttle to set it:
    https://github.com/Azure/azure-service-bus-go/blob/705d23958eb9c000582c03ba30413dc7e35eb25d/receiver.go#L109

An additional note: since RetryLater simply pauses the process in memory but doesn't remove the message from the queue, we are unable to progress to the next message until we actually complete it. I expect this is intended behavior, but wanted to make note.

How can we reproduce the problem in the simplest way?

  1. Create an Azure Service Bus namespace and a topic
  2. Set up a go-shuttle Listener for the topic
  3. Enqueue multiple messages
  4. Observe that only one message is handled at a time, with no parallelism

Have you found a mitigation/solution?

While working with a few others we have not found a solution yet, but have run some informative tests and have some ideas:

  1. I modified code location 1 to go r.handleMessage(ctx, msg, handler), and code location 2 to mode: ReceiveAndDeleteMode,. When running a repro test with this I was successfully able to handle the messages in parallel. I expect modifying the first code location is probably not OK, as I believe it possibly changes the current behavior, but it might be an alright change; it would need more knowledgeable voices in the mix and isn't even for this repo. However, I highly expect that switching to ReceiveAndDeleteMode is not an acceptable solution, as it changes the default behavior. I believe even exposing the option to use it in go-shuttle is not an acceptable final solution, since, in my understanding, if the consuming service crashes the message would be lost.
  2. Working with a few other people, Xiahe found the following PR that gives an example of how to implement concurrency using the azure-service-bus-go lib: https://github.com/Azure/azure-service-bus-go/pull/117/files. Based off this PR, Paul Miller created a strawman PR in go-shuttle to emulate what the example PR showed: #9. I ran a repro test using this PR and was able to confirm that it worked correctly, although it was limited to handling a number of messages equal to Paul's concurrency value, which is used to set the prefetchCount/linkCredit.

I believe the most likely solution will involve something similar to Paul's PR, exposed either as a new AsyncListener with an option to adjust the concurrency value, or as a modification of the current Listener adding an exposed option to adjust the concurrency, with the default staying at 1.
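
For reference, the rough shape of the worker-pool approach those PRs take (illustrative only, using the v1 azure-service-bus-go types):

// Dispatch each received message to one of `concurrency` workers instead
// of handling it inline in the receive loop.
msgs := make(chan *servicebus.Message)
for i := 0; i < concurrency; i++ { // concurrency also sizes prefetch/link credit in the PR
    go func() {
        for msg := range msgs {
            _ = handler.Handle(ctx, msg) // v1 handlers settle the message themselves
        }
    }()
}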

Messages are not removed from topic after calling Complete()

Which version of the SDK was used?

github.com/Azure/go-shuttle v0.5.5
https://github.com/Azure/go-shuttle/tree/8da14c485b239d55fa6b75dd41704e7d3bd0e55f

Which platform are you using? (ex: Windows, Linux, Debian)

Linux

What problem was encountered?

I found the following strange behavior:
When more than one message is enqueued and the first one is received, if I call RetryLater one or more times and then call Complete on the message, then the message will not be removed from the topic in Azure.

Similar cases that work:
If I enqueue only one message, run RetryLater multiple times, and then Complete, the message is removed correctly.

If I enqueue multiple messages but call Complete on them the first time they are received, then they are removed correctly.

Additional note: I tried running this when receiving more than one message concurrently with prefetch, as shown in this PR (the link goes to the changes I tested; there have been more since):
https://github.com/Azure/go-shuttle/pull/9/files/4c15768c4d48dc9835ecf1dc0a82e04fa0da1ec0
I ran the same style of test, running RetryLater multiple times before calling Complete, but with 6 messages on the topic (one more than the prefetch/concurrency value). In this test the messages were not removed from the topic after calling Complete.

When running the concurrent version there was one instance where the messages were removed from the topic, which doesn't fit the pattern of RetryLater-then-Complete leaving messages behind. However, it required calling Complete 3-5 times depending on the message, which is also a rather big issue. The 6 messages, which you would expect to produce 6 Complete calls, got 21 in total.

I have also noticed other messages getting Completed multiple times as well, which makes me suspect that they are not being removed from the topic in Azure. However, they do still eventually get removed; the Complete call is just made multiple times. I expect it is a related issue, but it may or may not be solved in the same way.

How can we reproduce the problem in the simplest way?

I am not sure exactly why this error is occurring, so I can't say for sure.

To repro the RetryLater/Complete blocking bug, I do the following:

  1. Create an Azure Service Bus topic
  2. Enqueue 2+ messages (at least one more than you will be handling concurrently)
  3. Receive one or more of the messages with a Listener (depending on whether you are running concurrently)
  4. RetryLater for 5+ minutes (one or more times for each message)
  5. Complete the message(s)
  6. Check in the Azure portal whether the message(s) have been removed from the topic (in all but one of my tests this resulted in the messages never being removed; the one time they were removed, Complete ended up being called multiple times)

Sadly I do not have a repro for messages being unexpectedly Completed multiple times.

Have you found a mitigation/solution?

While working with a few others we have not found a solution yet, but have some ideas:

Xiahe found this:
https://github.com/Azure/azure-service-bus-go/blob/c6cb351fc4abe639e1fc100b0d82040062a49fd0/subscription_manager.go#L110
It should be the lock duration PeekLock mode is using. She suspects that RetryLater might cause the lock to expire, so a Complete on the previous delivery may not work. Xiahe was able to test with the lock duration at 5 minutes, and only the messages with a RetryLater longer than 5 minutes had duplicate Completes.

Paul found this:
https://github.com/Azure/azure-service-bus-go/blob/c6cb351fc4abe639e1fc100b0d82040062a49fd0/lockrenewal_test.go
which contains code that renews the lock:
https://github.com/Azure/azure-service-bus-go/blob/c6cb351fc4abe639e1fc100b0d82040062a49fd0/lockrenewal_test.go#L78
It looks promising. It seems like RenewLocks should be called inside of RetryLater. However, we need access to a receivingEntity at that point; the sub created in Listener is:

sub, err := topic.NewSubscription(l.subscriptionEntity.Name)

Xiahe and Paul are currently working on changes that would implement RenewLocks for the messages. I expect this will solve the multiple-Complete issue; however, I am unsure whether it will solve the RetryLater/Complete blocking bug (it depends on whether the source of the bug is the same).
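
For illustration, the renewal loop those changes point toward might look like this (built on the v1-era RenewLocks call on the subscription; a sketch, not the eventual implementation):

// Keep renewing the message lock until the message context ends.
ticker := time.NewTicker(30 * time.Second)
defer ticker.Stop()
for {
    select {
    case <-ctx.Done():
        return
    case <-ticker.C:
        if err := sub.RenewLocks(ctx, msg); err != nil {
            return // lock lost or receiver closed; stop renewing
        }
    }
}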

NewPanicHandler is quite broken

v2/NewPanicHandler has several issues:

go-shuttle/v2/processor.go (lines 137 to 146 in 39a2722):

func NewPanicHandler(handler Handler) HandlerFunc {
    defer func() {
        if err := recover(); err != nil {
            panic(fmt.Sprintf("failed to recover panic: %s", err))
        }
    }()
    return func(ctx context.Context, settler MessageSettler, message *azservicebus.ReceivedMessage) {
        handler.Handle(ctx, settler, message)
    }
}

  1. The defer statement is in the wrong place: it needs to be inside the returned func to have any effect. At the moment it's a no-op.
  2. No unit test coverage.
  3. A panic handler which recovers a panic and then panics again doesn't seem very useful to me.

I'd send a fix for (1/2), but I'm not at all sure what this code is intended to do, so am leaving this issue for the maintainers to decide.
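
A possible fix for (1), as a sketch; what to do after recovering, point (3), is a policy question for the maintainers (here it just logs and swallows the panic):

// Move the deferred recover inside the returned handler so it actually
// guards Handle; log instead of re-panicking (a policy choice, see (3)).
func NewPanicHandler(handler Handler) HandlerFunc {
    return func(ctx context.Context, settler MessageSettler, message *azservicebus.ReceivedMessage) {
        defer func() {
            if r := recover(); r != nil {
                log(ctx, fmt.Sprintf("recovered panic in handler: %v", r))
            }
        }()
        handler.Handle(ctx, settler, message)
    }
}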

feat: Add DefaultSendTimeout on SenderOptions

When sending a message to servicebus, it is good practice to ensure a timeout is set on the context.
We've observed the Send call hang indefinitely due to the SDK not recovering a connection correctly.

Allowing a default timeout to be set on creation of the sender removes this concern from all code using the sender.
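
For contrast, a sketch of what every caller has to do today without a sender-level default (the SendMessage name is assumed from the v2 sender):

// Without a default, each call site wraps Send in its own timeout.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if err := sender.SendMessage(ctx, body); err != nil {
    // handle the send error
}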

