@lebaptiste there is no requirement that Pub/Sub implementations are limited to the PubSub interface, so we can add some extra methods :) I'm going on holiday now and will be back on Monday. If you have some time, you can try experimenting with adding extra methods to the existing Kafka Pub/Sub for getting these offsets.
from watermill.
@lebaptiste I added a proof of concept in the kafka-offsets-info branch (#71).
There are:
- kafka.MessagePartitionFromCtx to get the message's partition from its context
- kafka.MessagePartitionOffsetFromCtx to get the message's offset from its context
- kafka.Subscriber.PartitionOffsets to get partition information for the topic
What do you think? :)
You can try it with go get -u github.com/ThreeDotsLabs/watermill@kafka-offsets-info
(of course, it is not prod-ready!).
from watermill.
Some questions:
- Is this check of offsets relatively rare? I mean, it won't be done like 10x/s?
- Can you have multiple partitions and consumer groups? A consumer group per service? Or no consumer groups?
Possible solutions:
- We can add the message offset to the message context, plus a method to get the topic's last offset (per partition). You can get the offsets at the beginning and wait until you reach them. Does that sound good?
- If you are using a consumer group per service, in theory we can try to get information about the consumer group and wait until its offset lag is 0. But that sounds a bit tricky to me ;)
In my scenario
- The check of the offset is to know whether the service is still catching up, so one call at subscription time would suffice
- The global service has its own consumer group; however, the service could be split into several containers, each consuming a subset of the topic's partitions (e.g. 1 topic, 12 partitions, 3 consumers consuming 4 partitions each)
I would say solution 1 is preferable and makes sense to me. However, I don't know how far Watermill should go to stay consistent across all Pub/Sub implementations, as this change would have to be Kafka-specific.
@lebaptiste: I don't know if the current Kafka Go drivers allow you to query how many messages are still unconsumed: IBM/sarama#489
I would suggest keeping the offset in your consumer app if you are not using a consumer group, and using it to start consuming from that specific offset (instead of from the beginning).
As commented by the Sarama author in Shopify/sarama#489, it depends on how you store offsets. IIRC, Sarama does not do this out of the box.
I store the offsets in Kafka's __consumer_offsets manually, then calculate a "lag" by periodically comparing the offset per consumer_group:topic:partition against the OffsetNewest. I run Sarama with its auto-offset commit disabled, manually curating my offsets.
I have an API service which needs to consume a Kafka compacted topic before being considered ready to handle any request traffic. How can I determine its readiness?
@lebaptiste I have a service which loads a compacted topic from the beginning on startup. The service blocks while loading all events from Kafka. I determine the completion of this loading process by comparing the HighWaterMarkOffset given for a partition against the offset state in my app for the same partition. When the offsets in all partitions align, the service has caught up.
In the background I run a goroutine which periodically does basically the same, to keep the app in sync with any new events from the compacted topic. I control internal access to this mechanism with an RWLock.
I did a proof of concept already: #71
If someone is interested you can improve it (and move to https://github.com/ThreeDotsLabs/watermill-kafka repository).
Any news on this?
We have exactly the same scenario. What I tried was adding offset values to the message context in our custom Marshaler, then retrieving them in a middleware responsible for determining the readiness of the service. The problem is that, after unmarshaling the Kafka message, you are replacing the message context with the subscriber context, so it didn't work. I can also use metadata, but that involves multiple type conversions from int to string and back.
Any advice? Can we work on the context to make sure any values set on it in the unmarshaling phase are passed around? Are you going to provide built-in functions to retrieve Kafka offset information from the context? I'd be happy to work on it if we can agree on something concrete.
Hello @eafzali, I already started to implement it here:
https://github.com/ThreeDotsLabs/watermill/pull/71/files
but I don't know when I will have time to finish that :)
But if you want, it would not require a lot of work to finish. Basically it requires moving it to the https://github.com/ThreeDotsLabs/watermill-kafka repository and adding some tests (as I remember, it was working more or less).
OK, nice, I think I can do that :)
I would also need to add MessageTopicFromCtx, or is there any other way to get the message topic in a middleware?
@eafzali probably the best way to make it universal would be adding it somewhere in the router :) A PR is welcome for that too! :)