morganstanley / modern-cpp-kafka

A C++ API for Kafka clients (i.e. KafkaProducer, KafkaConsumer, AdminClient)

License: Apache License 2.0

CMake 3.19% C++ 94.66% Python 1.76% Starlark 0.40%
kafka kafkaconsumer kafkaproducer librdkafka

modern-cpp-kafka's People

Contributors

andrewchellis, bingenito, jm4r, kbaur, kenneth-jia, kennethjia, kiillthemall, kitattyor, mgxnzhu, niyue, rockwood-openai, stertingen


modern-cpp-kafka's Issues

"/usr/bin/ld: cannot find -lmodern-cpp-kafka-api" error message while building examples

Hi. Thanks for your awesome repository.

I tried to build your examples by adding the related directories to CMakeLists.txt:

include_directories(../include)
link_directories(../include)

After running cmake CMakeLists.txt and make, it gives me this error:


[ 10%] Building CXX object CMakeFiles/kafka_async_producer_copy_payload.dir/kafka_async_producer_copy_payload.o
[ 20%] Linking CXX executable kafka_async_producer_copy_payload
/usr/bin/ld: cannot find -lmodern-cpp-kafka-api
collect2: error: ld returned 1 exit status
CMakeFiles/kafka_async_producer_copy_payload.dir/build.make:94: recipe for target 'kafka_async_producer_copy_payload' failed
make[2]: *** [kafka_async_producer_copy_payload] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/kafka_async_producer_copy_payload.dir/all' failed
make[1]: *** [CMakeFiles/kafka_async_producer_copy_payload.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

especially this line:
/usr/bin/ld: cannot find -lmodern-cpp-kafka-api

How can I make the linker find the modern-cpp-kafka API library?
Thanks in advance
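
Note: modern-cpp-kafka is header-only, so in a standalone build there is no prebuilt modern-cpp-kafka-api library to link against; linking librdkafka (and pthread) is enough, and modern-cpp-kafka-api appears to be a CMake target defined by the project's own build. A minimal sketch of standalone usage, under those assumptions:

// Minimal header-only usage: only librdkafka (and pthread) need linking,
// with the modern-cpp-kafka include/ directory on the include path.
#include <kafka/KafkaProducer.h>

int main()
{
    const kafka::Properties props({{"bootstrap.servers", "localhost:9092"}});
    kafka::clients::KafkaProducer producer(props); // namespace varies across releases
    return 0;
}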

Building error when trying to include modern-cpp-kafka in my project

I am trying to use modern-cpp-kafka in my project, but it fails to compile.
I am following the install steps in the README:
Setting C++17 with set(CMAKE_CXX_STANDARD 17) or add_definitions(-std=c++17) in CMakeLists.txt

Include the include/kafka directory in your project
by adding
include_directories(deps/modern-cpp-kafka/modern-cpp-kafka/include/kafka)
in CMakeLists.txt

I include this file in main.cpp:

#include "ProducerCommon.h"

But when compiling, I get this:
In file included from /path/to/my/project/src/main.cpp:28:0: /path/to/my/project/deps/modern-cpp-kafka/modern-cpp-kafka/include/kafka/ProducerCommon.h:3:27: fatal error: kafka/Project.h: No such file or directory compilation terminated.

I checked ProducerCommon.h:3:27 and it includes "kafka/Project.h",
so I added
include_directories(deps/modern-cpp-kafka/modern-cpp-kafka/include)
before
include_directories(deps/modern-cpp-kafka/modern-cpp-kafka/include/kafka)

Then I changed the includes in main.cpp:

#include "kafka/KafkaProducer.h"
#include "kafka/ProducerCommon.h"
#include "kafka/KafkaConsumer.h"

then I got lots of errors, such as
"not a member of" or "is not declared",
"error: ‘struct kafka::kafka::kafka::kafka::kafka::kafka::kafka::kafka::kafka::kafka::logger_impl::streambuf_entry’ has no member named ‘streambuffer’"
(yes, that many "kafka::"; logger_impl::streambuf_entry is my class, but kafka is your namespace, so why did I get this error?)
and lots of "error: expected ‘}’ before end of line".
I can provide the whole compile error log if you need it; it may be thousands of lines.
I think it may be some incorrect path in my settings or a wrong define in your project?

What am I missing?

modern-cpp-kafka version: release v2021.12.08
librdkafka-dev version: Installed: 0.8.6-1.1
OS version: ubuntu 16.04.2
CMake version: 3.5.1
GCC version: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609

Add to Conan-center

First of all, thank you for this modern C++ wrapper!
Any chance you plan to add it to Conan Center?

[librdkafka issue] rd_kafka_consumer_close() would keep retrying the "coordinator query" if there's no response from brokers -- thus the consumer would be stuck there forever

  • A placeholder
    librdkafka issue: confluentinc/librdkafka#3740

  • Description
    We found that PR confluentinc/librdkafka#3416 changed the retry behavior for "offset commit".
    If a consumer tries to commit offsets with rd_kafka_commit_queue(...) before closing (with rd_kafka_consumer_close(...)) but there's no response from the brokers, it keeps retrying the "coordinator query" forever, and thus the consumer is stuck there (and the user's application can never quit)!

  • How to reproduce

    • Robustness test: TEST(KafkaConsumer, DISABLED_AlwaysFinishClosing_ManuallyPollEvents)
    • Robustness test: TEST(KafkaConsumer, DISABLED_CommitOffsetWhileBrokersStop)

cooperative rebalance support

Hi there,

Thank you for your great work.

When I use a consumer with the property {"partition.assignment.strategy", "cooperative-sticky"}, it fails with this message:

WARNING KafkaConsumer[c3f918a8-7561f2c7] ASSIGN | [thrd:main]: Group "test": application *assign() call failed: Changes to the current assignment must be made using incremental_assign() or incremental_unassign() when rebalance protocol type is COOPERATIVE

However, cooperative seems set to become the default assignor in the future: https://issues.apache.org/jira/browse/KAFKA-12473. I think it's necessary for the client to support the cooperative assignment strategy.

I found an example in librdkafka for cooperative rebalance callback:

switch (err) {
    case RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS:
        // application may load offsets from arbitrary external
        // storage here and update \p partitions
        if (!strcmp(rd_kafka_rebalance_protocol(rk), "COOPERATIVE"))
            rd_kafka_incremental_assign(rk, partitions);
        else // EAGER
            rd_kafka_assign(rk, partitions);
        break;

    case RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS:
        if (manual_commits) // Optional explicit manual commit
            rd_kafka_commit(rk, partitions, 0); // sync commit

        if (!strcmp(rd_kafka_rebalance_protocol(rk), "COOPERATIVE"))
            rd_kafka_incremental_unassign(rk, partitions);
        else // EAGER
            rd_kafka_assign(rk, NULL);
        break;

    default:
        handle_unlikely_error(err);
        rd_kafka_assign(rk, NULL); // sync state
        break;
}

It seems KafkaConsumer::_assign could be changed to support incremental assignment.
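
A minimal protocol-aware helper along those lines, sketched against the librdkafka C API quoted above (assignPartitions is a hypothetical name; rk and partitions are assumed to come from the rebalance callback):

#include <cstring>
#include <librdkafka/rdkafka.h>

// Choose incremental vs. eager (re)assignment based on the rebalance
// protocol reported by librdkafka.
void assignPartitions(rd_kafka_t* rk, rd_kafka_topic_partition_list_t* partitions)
{
    if (std::strcmp(rd_kafka_rebalance_protocol(rk), "COOPERATIVE") == 0)
    {
        // The incremental APIs return an rd_kafka_error_t* which must be
        // checked and destroyed by the caller.
        if (rd_kafka_error_t* error = rd_kafka_incremental_assign(rk, partitions))
        {
            // Handle the failure as appropriate, then release the error object.
            rd_kafka_error_destroy(error);
        }
    }
    else // EAGER protocol
    {
        rd_kafka_assign(rk, partitions);
    }
}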

Multiple Consumers in same Consumer Group but different Topics

I have currently started 2 consumers, each in its own thread, subscribed to different topics. If I use the same group id for both of them, I get the following message when the second consumer subscribes to its topic:

[2022-06-22 12:25:16.197626]NOTICE KafkaConsumer[31a94d84-bbbb1476] initialized with properties[auto.commit.interval.ms=0|auto.offset.reset=earliest|bootstrap.servers=kafka_node_1:9093,kafka_node_2:9093,kafka_node_3:9093|client.id=31a94d84-bbbb1476|debug=all|enable.auto.commit=true|enable.auto.offset.store=true|group.id=dummy_test_kafka_consumer|log_level=5|max.poll.records=5000|queued.min.messages=1000000|security.protocol=ssl|ssl.ca.location=D:\cert\ca.cert.pem|ssl.certificate.location=D:\cert\cert.pem|ssl.key.location=D:\cert\key.pem|ssl.key.password=pass]
[2022-06-22 12:25:51.316662]NOTICE KafkaConsumer[31a94d84-bbbb1476] REQTMOUT | [thrd:GroupCoordinator]: GroupCoordinator/17: Timed out LeaveGroupRequest in flight (after 5012ms, timeout #0): possibly held back by preceeding blocking JoinGroupRequest with timeout in 271572ms
[2022-06-22 12:25:51.317861]WARNING KafkaConsumer[31a94d84-bbbb1476] REQTMOUT | [thrd:GroupCoordinator]: GroupCoordinator/17: Timed out 1 in-flight, 0 retry-queued, 0 out-queue, 0 partially-sent requests
[2022-06-22 12:25:51.318404]NOTICE KafkaConsumer[31a94d84-bbbb1476] closed

If I use a different group id per consumer, they both run fine.

Only searches for static library

The CMakeLists.txt file only searches for the statically compiled librdkafka library.
It should be possible to use either the static or the shared library.

confusion about using the header-only files

Hi dear authors, thanks for your work!
I get some errors when I use the header-only files in include/kafka.
I have a CMake project and copied an example file as my main.cpp, then I added the lines below

add_executable(kafka main.cpp)

target_include_directories(kafka PUBLIC ../../../third_party/modern-cpp-kafka/include)

target_link_libraries(kafka INTERFACE rdkafka pthread)

to my CMakeLists.txt.
But when I build it, I get errors like undefined reference to 'rd_kafka_conf_destroy'.
I think it's a library-not-found problem, so I debugged CMake, but it shows that the rdkafka and pthread libs are present in my environment.
I'm confused by this and would like to ask for your help.
Could you tell me how to use it correctly?

`doLog` causes a "pointer being freed was not allocated" crash

I hit a "pointer being freed was not allocated" issue in the following case:

  1. Start a consumer
  2. Bring down Kafka
  3. Consumer tries to log some message like KafkaConsumer[redacted] ERROR | [thrd:localhost:9092/bootstrap]: 3/3 brokers are down
  4. Several messages are logged
  5. Bring Kafka back online; the program crashes. The crash log looks like this:
Thread 21 Crashed:: rdk:broker1
0   libsystem_kernel.dylib        	0x00007fff205b892e __pthread_kill + 10
1   libsystem_pthread.dylib       	0x00007fff205e75bd pthread_kill + 263
2   libsystem_c.dylib             	0x00007fff2053c411 abort + 120
3   libsystem_malloc.dylib        	0x00007fff2041c1f5 malloc_vreport + 548
4   libsystem_malloc.dylib        	0x00007fff2041f34a malloc_report + 151
5   libc++.1.dylib                	0x00007fff205804ae std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::__grow_by(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long) + 216
6   libc++.1.dylib                	0x00007fff20580e54 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::push_back(char) + 92
7   kafka-tests                 	0x000000010b92c8e1 std::__1::basic_stringbuf<char, std::__1::char_traits<char>, std::__1::allocator<char> >::overflow(int) + 113
8   libc++.1.dylib                	0x00007fff205531fb std::__1::basic_streambuf<char, std::__1::char_traits<char> >::xsputn(char const*, long) + 109
9   kafka-tests                 	0x000000010769b6a9 std::__1::basic_streambuf<char, std::__1::char_traits<char> >::sputn(char const*, long) + 41 (streambuf:229)
10  kafka-tests                 	0x000000010769b4f0 std::__1::ostreambuf_iterator<char, std::__1::char_traits<char> > std::__1::__pad_and_output<char, std::__1::char_traits<char> >(std::__1::ostreambuf_iterator<char, std::__1::char_traits<char> >, char const*, char const*, char const*, std::__1::ios_base&, char) + 448 (locale:1411)
11  kafka-tests                 	0x000000010769b216 std::__1::basic_ostream<char, std::__1::char_traits<char> >& std::__1::__put_character_sequence<char, std::__1::char_traits<char> >(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, char const*, unsigned long) + 262 (ostream:730)
12  kafka-tests                 	0x000000010769b0c1 std::__1::basic_ostream<char, std::__1::char_traits<char> >& std::__1::operator<<<char, std::__1::char_traits<char>, std::__1::allocator<char> >(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 65 (ostream:1052)
13  kafka-tests                 	0x000000010785d34d kafka::DefaultLogger(int, char const*, int, char const*) + 141 (Log.h:78)
14  kafka-tests                 	0x000000010785f4f7 decltype(std::__1::forward<void (*&)(int, char const*, int, char const*)>(fp)(std::__1::forward<int>(fp0), std::__1::forward<char const*>(fp0), std::__1::forward<int>(fp0), std::__1::forward<char const*>(fp0))) std::__1::__invoke<void (*&)(int, char const*, int, char const*), int, char const*, int, char const*>(void (*&)(int, char const*, int, char const*), int&&, char const*&&, int&&, char const*&&) + 119 (type_traits:3747)
15  kafka-tests                 	0x000000010785f451 void std::__1::__invoke_void_return_wrapper<void>::__call<void (*&)(int, char const*, int, char const*), int, char const*, int, char const*>(void (*&)(int, char const*, int, char const*), int&&, char const*&&, int&&, char const*&&) + 113 (__functional_base:348)
16  kafka-tests                 	0x000000010785f3c1 std::__1::__function::__alloc_func<void (*)(int, char const*, int, char const*), std::__1::allocator<void (*)(int, char const*, int, char const*)>, void (int, char const*, int, char const*)>::operator()(int&&, char const*&&, int&&, char const*&&) + 113 (functional:1553)
17  kafka-tests                 	0x000000010785e0d2 std::__1::__function::__func<void (*)(int, char const*, int, char const*), std::__1::allocator<void (*)(int, char const*, int, char const*)>, void (int, char const*, int, char const*)>::operator()(int&&, char const*&&, int&&, char const*&&) + 114 (functional:1727)
18  kafka-tests                 	0x000000010785ff47 std::__1::__function::__value_func<void (int, char const*, int, char const*)>::operator()(int&&, char const*&&, int&&, char const*&&) const + 151 (functional:1880)
19  kafka-tests                 	0x000000010785fddd std::__1::function<void (int, char const*, int, char const*)>::operator()(int, char const*, int, char const*) const + 109 (functional:2555)
20  kafka-tests                 	0x0000000107860182 void kafka::KafkaClient::doLog<char const*, char const*>(int, char const*, int, char const*, char const*, char const*) const + 450 (KafkaClient.h:120)
21  kafka-tests                 	0x000000010785ffba kafka::KafkaClient::onLog(int, char const*, char const*) const + 74 (KafkaClient.h:440)
22  kafka-tests                 	0x000000010785c833 kafka::KafkaClient::logCallback(rd_kafka_s const*, int, char const*, char const*) + 51 (KafkaClient.h:446)
23  kafka-tests                 	0x000000010bb576c6 rd_kafka_log_buf + 278 (rdkafka.c:253)
24  kafka-tests                 	0x000000010bb5757a rd_kafka_log0 + 938 (rdkafka.c:293)
25  kafka-tests                 	0x000000010bb748a4 rd_kafka_broker_set_state + 660 (rdkafka_broker.c:316)
26  kafka-tests                 	0x000000010bb750a9 rd_kafka_broker_fail + 825 (rdkafka_broker.c:569)
27  kafka-tests                 	0x000000010bb7619b rd_kafka_broker_conn_closed + 363 (rdkafka_broker.c:679)
28  kafka-tests                 	0x000000010bb78883 rd_kafka_recv + 3347 (rdkafka_broker.c:1932)
29  kafka-tests                 	0x000000010bc59473 rd_kafka_transport_io_event + 691 (rdkafka_transport.c:759)
30  kafka-tests                 	0x000000010bc58fe5 rd_kafka_transport_io_serve + 261 (rdkafka_transport.c:818)
31  kafka-tests                 	0x000000010bb867c5 rd_kafka_broker_ops_io_serve + 229 (rdkafka_broker.c:3425)
32  kafka-tests                 	0x000000010bb86068 rd_kafka_broker_producer_serve + 456 (rdkafka_broker.c:4032)
33  kafka-tests                 	0x000000010bb84e2e rd_kafka_broker_serve + 318 (rdkafka_broker.c:5164)
34  kafka-tests                 	0x000000010bb7eb00 rd_kafka_broker_thread_main + 1568 (rdkafka_broker.c:5328)
35  kafka-tests                 	0x000000010bce72d1 _thrd_wrapper_function + 65 (tinycthread.c:576)
36  libsystem_pthread.dylib       	0x00007fff205e78fc _pthread_start + 224
37  libsystem_pthread.dylib       	0x00007fff205e3443 thread_start + 15

NOTES:

  1. I ran into this issue on macOS.
  2. I remember this also happens if I turn on logging by setting the "debug" property (i.e. I am not able to turn on debug logging on macOS, though it did work on Linux in my testing). So I guess this is probably an issue related to logging.

Since I run into this issue whenever I turn on debug logging, I assume it is not a problem in my program, but I am not sure whether it is a problem in modern-cpp-kafka or librdkafka yet. I tried to work around it by setting a customized logger like this:

_consumer->setLogger(KafkaSource::_do_log);
...
void KafkaSource::_do_log(int level, const char *filename, int lineno, const char *msg) {
    if (filename) {
        _logger->info(msg, "filename", filename, "lineno", lineno);
    } else {
        _logger->info(msg);
    }
}

The _logger here is a spdlog logger (https://github.com/gabime/spdlog), and it crashed as well, reporting a similar error to the one above (in the end: abort() called, malloc: tiny_free_list_remove_ptr: Internal invariant broken (next ptr of prev): ptr=0x7fb2f79279d0, prev_next=0x0)

Thread 25 Crashed:: rdk:broker-1
0   libsystem_kernel.dylib        	0x00007fff205b7ae2 __semwait_signal_nocancel + 10
1   libsystem_c.dylib             	0x00007fff204c371b nanosleep$NOCANCEL + 185
2   libsystem_c.dylib             	0x00007fff204e9459 usleep$NOCANCEL + 53
3   libsystem_c.dylib             	0x00007fff2053c41b abort + 130
4   libsystem_malloc.dylib        	0x00007fff2041c1f5 malloc_vreport + 548
5   libsystem_malloc.dylib        	0x00007fff20430809 malloc_zone_error + 183
6   libsystem_malloc.dylib        	0x00007fff204143ce tiny_free_list_remove_ptr + 735
7   libsystem_malloc.dylib        	0x00007fff204136c1 tiny_free_no_lock + 1014
8   libsystem_malloc.dylib        	0x00007fff20413189 free_tiny + 442
9   kafka-tests                 	0x000000010b563e35 std::__1::_DeallocateCaller::__do_call(void*) + 21 (new:334)
10  kafka-tests                 	0x000000010b563de9 std::__1::_DeallocateCaller::__do_deallocate_handle_size(void*, unsigned long) + 25 (new:292)
11  kafka-tests                 	0x000000010b5741b5 std::__1::_DeallocateCaller::__do_deallocate_handle_size_align(void*, unsigned long, unsigned long) + 85 (new:268)
12  kafka-tests                 	0x000000010b574155 std::__1::__libcpp_deallocate(void*, unsigned long, unsigned long) + 37 (new:340)
13  kafka-tests                 	0x000000010b58ee66 std::__1::allocator<char>::deallocate(char*, unsigned long) + 38 (memory:1673)
14  kafka-tests                 	0x000000010b5b199d fmt::v7::basic_memory_buffer<char, 250ul, std::__1::allocator<char> >::deallocate() + 109 (format.h:665)
15  kafka-tests                 	0x000000010b5b1916 fmt::v7::basic_memory_buffer<char, 250ul, std::__1::allocator<char> >::~basic_memory_buffer() + 38 (format.h:679)
16  kafka-tests                 	0x000000010b58eaa5 fmt::v7::basic_memory_buffer<char, 250ul, std::__1::allocator<char> >::~basic_memory_buffer() + 21 (format.h:679)
17  kafka-tests                 	0x000000010c315173 spdlog::sinks::stdout_sink_base<spdlog::details::console_mutex>::log(spdlog::details::log_msg const&) + 275 (stdout_sinks-inl.h:74)
18  kafka-tests                 	0x000000010c2e1048 spdlog::logger::sink_it_(spdlog::details::log_msg const&) + 168 (logger-inl.h:186)
19  kafka-tests                 	0x000000010c2e0f6f spdlog::logger::log_it_(spdlog::details::log_msg const&, bool, bool) + 63 (logger-inl.h:170)
20  kafka-tests                 	0x000000010b5bbd55 void spdlog::logger::log_<fmt::v7::basic_string_view<char>, char const*&>(spdlog::source_loc, spdlog::level::level_enum, fmt::v7::basic_string_view<char> const&, char const*&) + 549 (logger.h:342)
21  kafka-tests                 	0x000000010b5bbb20 void spdlog::logger::log<char const*&>(spdlog::source_loc, spdlog::level::level_enum, fmt::v7::basic_string_view<char>, char const*&) + 128 (logger.h:87)
22  kafka-tests                 	0x000000010b5bba8a void spdlog::logger::log<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, char const*&>(spdlog::level::level_enum, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, char const*&) + 154 (logger.h:93)
23  kafka-tests                 	0x000000010b5bb9e5 void spdlog::logger::debug<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, char const*&>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, char const*&) + 69 (logger.h:105)
24  kafka-tests                 	0x000000010b5bb1cc redacted::debug(char const*) + 44 (msg_logger.h:19)
25  kafka-tests                 	0x000000010bc19981 redacted:KafkaSource::_do_log(int, char const*, int, char const*) + 209 (kafka_source.cpp:44)
26  kafka-tests                 	0x000000010b72d4f7 decltype(std::__1::forward<void (*&)(int, char const*, int, char const*)>(fp)(std::__1::forward<int>(fp0), std::__1::forward<char const*>(fp0), std::__1::forward<int>(fp0), std::__1::forward<char const*>(fp0))) std::__1::__invoke<void (*&)(int, char const*, int, char const*), int, char const*, int, char const*>(void (*&)(int, char const*, int, char const*), int&&, char const*&&, int&&, char const*&&) + 119 (type_traits:3747)
27  kafka-tests                 	0x000000010b72d451 void std::__1::__invoke_void_return_wrapper<void>::__call<void (*&)(int, char const*, int, char const*), int, char const*, int, char const*>(void (*&)(int, char const*, int, char const*), int&&, char const*&&, int&&, char const*&&) + 113 (__functional_base:348)
28  kafka-tests                 	0x000000010b72d3c1 std::__1::__function::__alloc_func<void (*)(int, char const*, int, char const*), std::__1::allocator<void (*)(int, char const*, int, char const*)>, void (int, char const*, int, char const*)>::operator()(int&&, char const*&&, int&&, char const*&&) + 113 (functional:1553)
29  kafka-tests                 	0x000000010b72c0d2 std::__1::__function::__func<void (*)(int, char const*, int, char const*), std::__1::allocator<void (*)(int, char const*, int, char const*)>, void (int, char const*, int, char const*)>::operator()(int&&, char const*&&, int&&, char const*&&) + 114 (functional:1727)
30  kafka-tests                 	0x000000010b72df47 std::__1::__function::__value_func<void (int, char const*, int, char const*)>::operator()(int&&, char const*&&, int&&, char const*&&) const + 151 (functional:1880)
31  kafka-tests                 	0x000000010b72dddd std::__1::function<void (int, char const*, int, char const*)>::operator()(int, char const*, int, char const*) const + 109 (functional:2555)
32  kafka-tests                 	0x000000010b72e182 void kafka::KafkaClient::doLog<char const*, char const*>(int, char const*, int, char const*, char const*, char const*) const + 450 (KafkaClient.h:120)
33  kafka-tests                 	0x000000010b72dfba kafka::KafkaClient::onLog(int, char const*, char const*) const + 74 (KafkaClient.h:440)
34  kafka-tests                 	0x000000010b72a833 kafka::KafkaClient::logCallback(rd_kafka_s const*, int, char const*, char const*) + 51 (KafkaClient.h:446)
35  kafka-tests                 	0x000000010fa256c6 rd_kafka_log_buf + 278 (rdkafka.c:253)
36  kafka-tests                 	0x000000010fa2557a rd_kafka_log0 + 938 (rdkafka.c:293)
37  kafka-tests                 	0x000000010fa43cff rd_kafka_broker_set_error + 2143 (rdkafka_broker.c:511)
38  kafka-tests                 	0x000000010fa4303f rd_kafka_broker_fail + 719 (rdkafka_broker.c:556)
39  kafka-tests                 	0x000000010fa4836b rd_kafka_broker_connect_done + 91 (rdkafka_broker.c:2469)
40  kafka-tests                 	0x000000010fb25b91 rd_kafka_transport_connect_done + 65 (rdkafka_transport.c:356)
41  kafka-tests                 	0x000000010fb2736d rd_kafka_transport_io_event + 429 (rdkafka_transport.c:697)
42  kafka-tests                 	0x000000010fb26fe5 rd_kafka_transport_io_serve + 261 (rdkafka_transport.c:818)
43  kafka-tests                 	0x000000010fa547c5 rd_kafka_broker_ops_io_serve + 229 (rdkafka_broker.c:3425)
44  kafka-tests                 	0x000000010fa544ca rd_kafka_broker_consumer_serve + 1050 (rdkafka_broker.c:5023)
45  kafka-tests                 	0x000000010fa52e58 rd_kafka_broker_serve + 360 (rdkafka_broker.c:5166)
46  kafka-tests                 	0x000000010fa4ca7a rd_kafka_broker_thread_main + 1434 (rdkafka_broker.c:5313)
47  kafka-tests                 	0x000000010fbb52d1 _thrd_wrapper_function + 65 (tinycthread.c:576)
48  libsystem_pthread.dylib       	0x00007fff205e78fc _pthread_start + 224
49  libsystem_pthread.dylib       	0x00007fff205e3443 thread_start + 15

During my testing, the filename pointer is always nullptr, and I guess there is some lifetime issue with the msg pointer. Could anyone shed some light on this? Any help is appreciated. Thanks.

std::min - issue when using Windows.h

I am developing a cross-platform microservice for Windows/Linux. In my code I use the sleep timer from Windows.h (for a delay when the db connection fails). Windows.h collides with std::min, which is used in include/kafka/Log.h.
The problem is probably caused by '#define min' in Windows.h.
To have Modern C++ Kafka work properly in this case, an #undef clause needs to be added in include/kafka/Log.h:

#pragma once

#include <kafka/Project.h>

#include <kafka/Utility.h>

#include <algorithm>
#include <array>
#include <cassert>
#include <functional>
#include <iostream>

#undef min

I have suggested a pull request for this issue. Please review it.
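
An alternative workaround on the application side, assuming you control the Windows.h include, is to stop the macros from being defined at all:

// Prevent Windows.h from defining the min/max macros in the first place.
#define NOMINMAX
#include <Windows.h>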

Windows Support

Could this wrapper be used on Windows?
In the CMakeLists it links the library against the pthread lib, which is not available on Windows.

Producing a message from std::string or std::ostringstream does not give the same result

Hey there, I'm new to Kafka and I have been playing around with the
examples of librdkafka (https://github.com/edenhill/librdkafka/tree/master/examples)
and modern-cpp-kafka (https://github.com/morganstanley/modern-cpp-kafka/tree/main/examples)

One observation doesn't make sense to me. Maybe you can help to untangle my thoughts:

Looking at this example: https://github.com/morganstanley/modern-cpp-kafka/blob/main/examples/kafka_async_producer_copy_payload.cc
I have replaced the CLI input with this:

[...]
        // Create a producer instance.
        KafkaProducer producer(props);

        std::ostringstream oss;
        oss << "I am a very long text and I will see if it can be transferred.";
		
        std::cout << "'" << oss.str() << "'" << std::endl;

        auto record = producer::ProducerRecord(topic,
                                               kafka::NullKey,
                                               kafka::Value(oss.str().c_str(), oss.str().size()));
[...]

The producer prints

[...]
'I am a very long text and I will see if it can be transferred.'
[...]

The outcome on the consumer side is

[...]
Value [[0x00][0x00][0x00][0x00][0x00][0x00][0x00][0x00]ery long text and I will see if it can be transferred.]

Changing the producer to

[...]
        // Create a producer instance.
        KafkaProducer producer(props);

        std::ostringstream oss;
        oss << "I am a very long text and I will see if it can be transferred.";

        std::cout << "'" << oss.str() << "'" << std::endl;
        std::string testString = oss.str();
        std::cout << "'" << testString << "'" << std::endl;

        auto record = producer::ProducerRecord(topic,
                                               kafka::NullKey,
                                               kafka::Value(testString.c_str(), testString.size()));
[...]

gives me

[...]
'I am a very long text and I will see if it can be transferred.'
'I am a very long text and I will see if it can be transferred.'
[...]

on the producer side and

[...]
Value [I am a very long text and I will see if it can be transferred.]

on the consumer side.

How does that make sense? Why is the outcome different?
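
One plausible explanation, assuming kafka::Value is non-owning (it stores only a pointer and a size): each call to oss.str() returns a distinct temporary std::string that is destroyed at the end of the full expression, so the payload pointer dangles by the time the record is sent, which matches the garbage bytes seen on the consumer side. Binding the result to a named std::string, as in the second snippet, keeps the buffer alive. A sketch of the safe pattern:

// oss.str() returns a temporary; taking .c_str() of it leaves the Value
// pointing at freed memory. A named string keeps the payload alive until
// the record has been handed to the producer.
const std::string payload = oss.str();

auto record = producer::ProducerRecord(topic,
                                       kafka::NullKey,
                                       kafka::Value(payload.c_str(), payload.size()));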

Windows support

Hi,

thank you for your work. I was wondering whether the C++ wrapper has been tested on Windows. I tried to build a project using just the headers and ran into several compilation issues (missing syslog.h...).

Thank you, Martin.

Visual Studio 2019, toolset v142, C++17

C++14 compilation is broken with latest release

Unable to compile with a C++14 compiler and Boost; I get the errors below:

include/kafka/Error.h:90:13: error: no template named 'optional' in namespace 'std'; did you mean 'Optional'?
virtual std::optional<bool> isFatal() const { return rd_kafka_error_is_fatal(_rkError.get()); }
^~~~~~~~~~~~~
Optional

/include/kafka/Error.h:112:5: error: no template named 'optional' in namespace 'std'; did you mean 'Optional'?
std::optional<bool> isFatal() const override { return {}; }
^~~~~~~~~~~~~
Optional

/include/kafka/Types.h:25:1: note: 'Optional' declared here
using Optional = boost::optional<T>;
^

/include/kafka/Error.h:113:5: error: no template named 'optional' in namespace 'std'; did you mean 'Optional'?
std::optional<bool> isRetriable() const override { return {}; }
^~~~~~~~~~~~~
Optional

Nested namespace not allowed in C++14

I'm trying to use the library with C++14 and boost::optional. However, the usage of nested namespaces prevents me from compiling, because it's a C++17 feature. I'm using Visual Studio 2022 v17.1.0. The compiler error is this one.

The fix is quite easy: every occurrence of

namespace first::second {
...
}

should be replaced with

namespace first {
  namespace second {
  ...
  }
}

Compilation - sign conversion errors in Log.h

./kafka/Log.h:34:63: error: conversion to ‘std::vector<std::__cxx11::basic_string<char> >::size_type’ {aka ‘long unsigned int’} from ‘int’ may change the sign of the result [-Werror=sign-conversion]
   34 |         return (level >= 0 && level < numLevels) ? levelNames[level] : invalid;

level should be static_cast here; the conditional guarantees that no information is lost.

error: conversion to ‘long unsigned int’ from ‘int’ may change the sign of the result [-Werror=sign-conversion]
   71 |     char _buf[MAX_CAPACITY];

It seems like https://github.com/morganstanley/modern-cpp-kafka/blob/main/include/kafka/Log.h#L38-L40

template <int MAX_CAPACITY>
class LogBuffer
{

should be adjusted to use size_t MAX_CAPACITY, as there is no value in a buffer size < 0.

This means LOG_BUFFER_SIZE should probably also be changed from int to size_t.
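
A sketch of the suggested cast for the first error, as a standalone reproduction with illustrative level names (the bounds check guarantees the cast cannot change the value):

#include <array>
#include <string>

inline const std::string& levelName(int level)
{
    static const std::array<std::string, 8> levelNames =
        {"EMERG", "ALERT", "CRIT", "ERR", "WARNING", "NOTICE", "INFO", "DEBUG"};
    static const std::string invalid = "INVALID";
    const int numLevels = static_cast<int>(levelNames.size());

    // 'level' is known to be non-negative here, so the cast to the
    // unsigned index type is lossless.
    return (level >= 0 && level < numLevels)
               ? levelNames[static_cast<std::size_t>(level)] : invalid;
}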

unit test compilation issue with gtest

Unable to build the unit tests; I get the errors below.

/tests/unit/TestProperties.cc:20:5: error: no matching conversion for functional-style cast from 'boost::optional<std::__1::basic_string<char> >' to '::testing::AssertionResult'
EXPECT_TRUE(getBootstrapServers);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
googletest/dist/include/gtest/gtest.h:1920:3: note: expanded from macro 'EXPECT_TRUE'
GTEST_TEST_BOOLEAN_(condition, #condition, false, true,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
googletest/dist/include/gtest/internal/gtest-internal.h:1111:7: note: expanded from macro 'GTEST_TEST_BOOLEAN_'
::testing::AssertionResult(expression))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
googletest/dist/include/gtest/gtest.h:261:3: note: candidate constructor not viable: no known conversion from 'boost::optional<std::__1::basic_string<char> >' to 'const testing::AssertionResult' for 1st argument
AssertionResult(const AssertionResult& other);
^
googletest/dist/include/gtest/gtest.h:263:12: note: candidate constructor not viable: no known conversion from 'boost::optional<std::__1::basic_string<char> >' to 'bool' for 1st argument
explicit AssertionResult(bool success) : success_(success) {}
^
1 error generated.
*** Error code 1

how to use SASL credentials?

Hi,

In the producer example, how can I set the SASL parameters, like sasl.mechanism, the SASL plain user/password, etc.?
I tried:
export KAFKA_CLIENT_ADDITIONAL_SETTINGS="security.protocol=SASL_PLAINTEXT;sasl.kerberos.service.name=username;sasl.kerberos.service.password=password;sasl.mechanism=PLAIN"
But it had no effect.

Thanks
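
For what it's worth, librdkafka's SASL/PLAIN credentials are ordinary client properties named sasl.username and sasl.password (the sasl.kerberos.* properties apply to the GSSAPI mechanism, and there is no sasl.kerberos.service.password property). A sketch of passing them through kafka::Properties; the broker address and credentials below are placeholders:

#include <kafka/Properties.h>

// SASL/PLAIN settings travel through ordinary client properties.
const kafka::Properties props({
    {"bootstrap.servers", "broker:9092"},
    {"security.protocol", "SASL_PLAINTEXT"},
    {"sasl.mechanism",    "PLAIN"},
    {"sasl.username",     "username"},
    {"sasl.password",     "password"},
});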

Undefined behavior in `KafkaConsumer::pollMessages()`

The following line uses std::vector::operator[] to access one past the last element of a vector:

std::for_each(&msgPtrArray[0], &msgPtrArray[msgReceived], [&output](rd_kafka_message_t* rkMsg) { output.emplace_back(rkMsg); });

This is undefined behavior and causes a failed assertion in debug mode when compiling with Visual C++.

The following pull request fixes the issue by using .begin()/.end() instead:
061f5eb
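
The change amounts to iterating the valid prefix with iterators instead of forming &msgPtrArray[msgReceived], which indexes one past the valid range when the vector is full. A sketch of that shape (see the referenced commit for the actual fix):

// Iterators avoid evaluating msgPtrArray[msgReceived], which is undefined
// when msgReceived equals the vector's size.
std::for_each(msgPtrArray.begin(),
              msgPtrArray.begin() + static_cast<std::ptrdiff_t>(msgReceived),
              [&output](rd_kafka_message_t* rkMsg) { output.emplace_back(rkMsg); });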

Single Message Consuming

Currently only batch consuming of messages is possible with the consumer.
I can consume one message per call if I set max.poll.records, but internally it still does a batch poll.
Is it possible to poll messages one at a time, like with the RdKafka high-level consumer?

Integrate asynchronous producers with Boost.Asio

Is there any canonical way to integrate asynchronous callbacks with boost::asio, e.g. io_context? Of course we may forward callbacks by calling io_context::post with a deep copy of the parameters, but maybe there is a more efficient way?

Debug Assertion Failed: vector subscript out of range when _maxPollRecords reaches the value of 500

Subscript out of range when _maxPollRecords reaches the value of 500

Changing the default value via the "max.poll.records" property avoids the issue as long as the new limit is not reached again.

// Wrap messages with ConsumerRecord
output.clear();
output.reserve(msgReceived);
=here===> std::for_each(&msgPtrArray[0], &msgPtrArray[msgReceived], [&output](rd_kafka_message_t* rkMsg) { output.emplace_back(rkMsg); });

Consumer example only polls if sending occurs later than receiving

Hi,
I am using the consumer under modern-cpp-kafka/examples/kafka_auto_commit_consumer.cc.

I added this code into a function. It reads a message sent to the broker only if the send happens after the function is called; it doesn't find messages that are already on the broker.

Do you have any idea why this is happening, or how to change this behavior?

Thanks

You can find the method below:

std::string KafkaWrapper::receive_message(const char *p_broker,
                                          const char *p_topic, std::string log_level) {
    // Create configuration object
    kafka::Properties props({
        {"bootstrap.servers", p_broker},
        {"log_level", log_level},
    });

    // Create a consumer instance.
    kafka::KafkaAutoCommitConsumer consumer(props);

    // Subscribe to topics
    consumer.subscribe({p_topic});

    // Read messages from the topic.
    std::cout << "% Reading messages from topic: " << p_topic << std::endl;
    while (true) {
        auto records = consumer.poll(std::chrono::milliseconds(100));
        std::cout << "% Polled from topic: " << records.size() << std::endl;
        for (const auto& record: records) {
            // In this example, quit on empty message
            //if (record.value().size() == 0) return 0;

            if (!record.error()) {
                return record.value().toString();
            } else {
                std::cerr << record.toString() << std::endl;
            }
        }
    }
    return {}; // unreachable
}
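
One likely cause (an assumption based on librdkafka defaults, not confirmed here): without an auto.offset.reset override, a consumer group with no committed offsets starts from the latest offset, so messages already on the broker are skipped. A sketch of the extra property:

// Start a new consumer group from the oldest available offset, so that
// messages produced before the consumer starts are also delivered.
kafka::Properties props({
    {"bootstrap.servers", p_broker},
    {"auto.offset.reset", "earliest"},
    {"log_level", log_level},
});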

Integration tests are failing

Hi,
I am seeing the test cases below failing. I believe they are all failing because the topics are not created automatically.

I am using the latest modern-cpp-kafka with librdkafka 1.7.0 and the C++14 standard.

[----------] Global test environment tear-down
[==========] 41 tests from 7 test cases ran. (843153 ms total)
[ PASSED ] 24 tests.
[ FAILED ] 17 tests, listed below:
[ FAILED ] KafkaAutoCommitConsumer.OffsetCommitAndPosition
[ FAILED ] KafkaManualCommitConsumer.NoOffsetCommitCallback
[ FAILED ] KafkaManualCommitConsumer.OffsetCommitCallback
[ FAILED ] KafkaManualCommitConsumer.OffsetCommitCallbackTriggeredBeforeClose
[ FAILED ] KafkaManualCommitConsumer.OffsetCommitCallback_ManuallyPollEvents
[ FAILED ] KafkaManualCommitConsumer.OffsetCommitAndPosition
[ FAILED ] KafkaManualCommitConsumer.OffsetsForTime
[ FAILED ] KafkaManualCommitConsumer.RecoverByTime
[ FAILED ] KafkaSyncProducer.SendMessagesWithAcks1
[ FAILED ] KafkaSyncProducer.SendMessagesWithAcksAll
[ FAILED ] KafkaSyncProducer.DefaultPartitioner
[ FAILED ] KafkaSyncProducer.TryOtherPartitioners
[ FAILED ] KafkaAsyncProducer.MessageDeliveryCallback
[ FAILED ] KafkaAsyncProducer.DeliveryCallback_ManuallyPollEvents
[ FAILED ] KafkaAsyncProducer.NoBlockSendingWhileQueueIsFull_ManuallyPollEvents
[ FAILED ] KafkaAsyncProducer.TooLargeMessageForBroker
[ FAILED ] KafkaAsyncProducer.CopyRecordValueWithinSend

Error Log Snippet from test failure

[ RUN ] KafkaAutoCommitConsumer.OffsetCommitAndPosition
[2021-05-23 15:38:27.602742] Topic[28ff0948-13ab659a] would be used
unknown file: Failure
C++ exception with description "2021-05-23 15:38:57.606166: Broker: Unknown topic or partition [3] (modern-cpp-kafka//include/kafka/KafkaProducer.h:547)" thrown in the test body.
[ FAILED ] KafkaAutoCommitConsumer.OffsetCommitAndPosition (30005 ms)

All other test cases throw an exception at the same place as shown in the log above.

Ubuntu 20.04 - Issue linking -lgtest

When running make, I run into:

[ 33%] Linking CXX executable kafka-unit-test
/usr/bin/ld: cannot find -lgtest
/usr/bin/ld: cannot find -lgtest_main
collect2: error: ld returned 1 exit status
make[2]: *** [tests/unit/CMakeFiles/kafka-unit-test.dir/build.make:264: tests/unit/kafka-unit-test] Error 1
make[1]: *** [CMakeFiles/Makefile2:1013: tests/unit/CMakeFiles/kafka-unit-test.dir/all] Error 2
make: *** [Makefile:141: all] Error 2

I edited CMakeLists.txt manually to point to the googletest folder:

#---------------------------
# googletest
#---------------------------
if (DEFINED ENV{GTEST_ROOT})
    set(GTEST_ROOT $ENV{GTEST_ROOT})
else ()
    set(GTEST_ROOT /usr/src/googletest/googletest)
endif ()

message(STATUS "googletest root directory: ${GTEST_ROOT}")

include_directories(SYSTEM ${GTEST_ROOT}/include)
link_directories(${GTEST_ROOT}/lib ${GTEST_ROOT}/bin)

Ubuntu 20.04 - undefined reference to `rd_kafka_incremental_assign'

When running make and attempting to link the executable kafka-unit-test, the linker cannot find:

undefined reference to `rd_kafka_incremental_assign'

undefined reference to `rd_kafka_rebalance_protocol'

~/dev/modern-cpp-kafka/build$ make
Scanning dependencies of target kafka-unit-test
[  2%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestBrokerMetadata.cc.o
[  4%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestConsumerRecord.cc.o
[  7%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestError.cc.o
[  9%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestHeader.cc.o
[ 11%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestKafkaClientDefaultProperties.cc.o
[ 14%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestKafkaException.cc.o
[ 16%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestKafkaMetrics.cc.o
[ 19%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestProducerRecord.cc.o
[ 21%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestProperties.cc.o
[ 23%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestTimestamp.cc.o
[ 26%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestTypes.cc.o
[ 28%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestUnorderedOffsetCommitQueue.cc.o
[ 30%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestUtility.cc.o
[ 33%] Linking CXX executable kafka-unit-test
/usr/bin/ld: CMakeFiles/kafka-unit-test.dir/TestKafkaClientDefaultProperties.cc.o: in function `kafka::clients::KafkaConsumer::changeAssignment(kafka::clients::KafkaConsumer::PartitionsRebalanceEvent, std::set<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int>, std::less<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int> >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int> > > const&)':
TestKafkaClientDefaultProperties.cc:(.text._ZN5kafka7clients13KafkaConsumer16changeAssignmentENS1_24PartitionsRebalanceEventERKSt3setISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEiESt4lessISB_ESaISB_EE[_ZN5kafka7clients13KafkaConsumer16changeAssignmentENS1_24PartitionsRebalanceEventERKSt3setISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEiESt4lessISB_ESaISB_EE]+0x1cf): undefined reference to `rd_kafka_incremental_assign'
/usr/bin/ld: TestKafkaClientDefaultProperties.cc:(.text._ZN5kafka7clients13KafkaConsumer16changeAssignmentENS1_24PartitionsRebalanceEventERKSt3setISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEiESt4lessISB_ESaISB_EE[_ZN5kafka7clients13KafkaConsumer16changeAssignmentENS1_24PartitionsRebalanceEventERKSt3setISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEiESt4lessISB_ESaISB_EE]+0x38f): undefined reference to `rd_kafka_incremental_unassign'
/usr/bin/ld: CMakeFiles/kafka-unit-test.dir/TestKafkaClientDefaultProperties.cc.o: in function `kafka::clients::KafkaConsumer::onRebalance(rd_kafka_resp_err_t, rd_kafka_topic_partition_list_s*)':
TestKafkaClientDefaultProperties.cc:(.text._ZN5kafka7clients13KafkaConsumer11onRebalanceE19rd_kafka_resp_err_tP31rd_kafka_topic_partition_list_s[_ZN5kafka7clients13KafkaConsumer11onRebalanceE19rd_kafka_resp_err_tP31rd_kafka_topic_partition_list_s]+0xfe): undefined reference to `rd_kafka_rebalance_protocol'
collect2: error: ld returned 1 exit status
make[2]: *** [tests/unit/CMakeFiles/kafka-unit-test.dir/build.make:264: tests/unit/kafka-unit-test] Error 1
make[1]: *** [CMakeFiles/Makefile2:1013: tests/unit/CMakeFiles/kafka-unit-test.dir/all] Error 2
make: *** [Makefile:141: all] Error 2

Password configuration property values should not be logged as plaintext

The librdkafka library provides multiple configuration properties for passwords. For example:

  • ssl.keystore.password;
  • ssl.key.password.

In some scenarios, password property values get logged as plaintext for log levels that are required for production. For example, KafkaClient.h logs passwords on log level "Err" when librdkafka fails to set them:

(screenshot of the KafkaClient.h logging code omitted)

Passwords should only be logged in plaintext at log level "Debug"; otherwise I suggest masking the password values with "*" symbols. At the very least, masking could be applied if prop.first contains the substring "password".
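
A sketch of the suggested masking; maskSensitiveValue is a hypothetical helper, not part of the library:

#include <string>

// Mask a property value before logging when its key looks like a secret.
inline std::string maskSensitiveValue(const std::string& key, const std::string& value)
{
    return (key.find("password") != std::string::npos)
               ? std::string(value.size(), '*')
               : value;
}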

consumer: blocking poll which returns as soon as a message arrives?

Is it possible to perform a blocking poll with KafkaConsumer and have it return as soon as at least 1 message is available?

I noticed that KafkaConsumer::poll converts the provided std::chrono::milliseconds timeout argument to an int via convertMsDurationToInt

inline std::size_t
KafkaConsumer::poll(std::chrono::milliseconds timeout, std::vector<ConsumerRecord>& output)
{
    pollMessages(convertMsDurationToInt(timeout), output);
    return output.size();
}

Looking at the definition of convertMsDurationToInt, it seems it is possible to provide an "infinite" argument:

    static constexpr int TIMEOUT_INFINITE  = -1;

    static int convertMsDurationToInt(std::chrono::milliseconds ms)
    {
        return ms > std::chrono::milliseconds(INT_MAX) ? TIMEOUT_INFINITE : static_cast<int>(ms.count());
    }

However, passing a milliseconds value which will evaluate to TIMEOUT_INFINITE causes the polling call to never return.

Example:

    std::vector<kafkapp::ConsumerRecord> records;
    while (_running)
    {
        DBG("polling");

        _consumer.poll(std::chrono::milliseconds::max(), records);

        for (kafkapp::ConsumerRecord& record : records)
        {
            DBG("msg: \"{}\"", record.value().toString());
        }
    }

Output:

2021-05-25 13:46:13.070915694 [DEBUG] [Consumer::onPoll] polling
... never returns

Changing the poll timeout to 10s yields the following (note that I have a producer sending a message every second):

    std::vector<kafkapp::ConsumerRecord> records;
    while (_running)
    {
        DBG("polling");

        _consumer.poll(std::chrono::milliseconds(10000), records); // <----- this line changed

        for (kafkapp::ConsumerRecord& record : records)
        {
            DBG("msg: \"{}\"", record.value().toString());
        }
    }

Output:

2021-05-25 13:47:16.084563295 [DEBUG] [Consumer::onPoll] polling
2021-05-25 13:47:26.084583347 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:16.657031750 hello world"
2021-05-25 13:47:26.084590661 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:17.657024937 hello world"
2021-05-25 13:47:26.084591542 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:18.657009027 hello world"
2021-05-25 13:47:26.084592284 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:19.657011962 hello world"
2021-05-25 13:47:26.084593045 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:20.656996502 hello world"
2021-05-25 13:47:26.084594127 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:21.657000009 hello world"
2021-05-25 13:47:26.084594850 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:22.657011962 hello world"
2021-05-25 13:47:26.084595551 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:23.657131494 hello world"
2021-05-25 13:47:26.084596262 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:24.657005909 hello world"
2021-05-25 13:47:26.084596963 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:25.656997414 hello world"
2021-05-25 13:47:26.084597062 [DEBUG] [Consumer::onPoll] polling
2021-05-25 13:47:36.085101564 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:26.657004969 hello world"
2021-05-25 13:47:36.085108868 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:27.657008827 hello world"
2021-05-25 13:47:36.085110231 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:28.657413582 hello world"
2021-05-25 13:47:36.085111342 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:29.657006573 hello world"
2021-05-25 13:47:36.085112184 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:30.656998417 hello world"
2021-05-25 13:47:36.085113025 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:31.656989911 hello world"
2021-05-25 13:47:36.085114268 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:32.656990102 hello world"
2021-05-25 13:47:36.085115331 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:33.656995181 hello world"
2021-05-25 13:47:36.085116212 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:34.656990783 hello world"
2021-05-25 13:47:36.085117185 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:35.657013187 hello world"
2021-05-25 13:47:36.085118123 [DEBUG] [Consumer::onPoll] polling
2021-05-25 13:47:46.085337462 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:36.656993038 hello world"
2021-05-25 13:47:46.085343233 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:37.656993849 hello world"
2021-05-25 13:47:46.085343994 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:38.656993638 hello world"
2021-05-25 13:47:46.085344666 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:39.656995181 hello world"
2021-05-25 13:47:46.085345167 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:40.657007154 hello world"
2021-05-25 13:47:46.085345548 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:41.656994631 hello world"
2021-05-25 13:47:46.085345938 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:42.657003288 hello world"
2021-05-25 13:47:46.085346459 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:43.656994030 hello world"
2021-05-25 13:47:46.085346830 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:44.656987427 hello world"
2021-05-25 13:47:46.085347220 [DEBUG] [Consumer::onPoll] msg: "2021-05-25 11:47:45.656967960 hello world"

Note that each message includes the timestamp at which it was sent; within each block the messages' send timestamps are 1 second apart, yet the consumer only returns after 10 seconds.

This leads me to believe that the TIMEOUT_INFINITE option is broken, as it will never return.

Also - is it possible to do a blocking poll which returns when at least 1 message has been received?
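
Until the TIMEOUT_INFINITE path works, one workaround is to loop on a short, finite timeout so the wait is effectively blocking but with bounded latency. A sketch reusing the names from the snippets above:

// Effectively-blocking poll: retry with a bounded timeout until at least
// one record arrives; poll() refills 'records' on each call.
std::vector<kafkapp::ConsumerRecord> records;
while (records.empty() && _running)
{
    _consumer.poll(std::chrono::milliseconds(500), records);
}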

the source project has some compile problems in my local env (macOS + VSCode IDE + C++)

Hi, I have some problems when I compile the source code; the errors are like those below. Maybe there is some syntax error?
Can you give me some help or advice? Thanks!

Compile Error:
In file included from /Users/lijian/Software/kafka_cpptest/modern-cpp-kafka-main/include/kafka/RdKafkaHelper.h:5:
/Users/lijian/Software/kafka_cpptest/modern-cpp-kafka-main/include/kafka/Types.h:141:32: error: 'auto' not allowed in lambda parameter
[&ret](const auto& topic) {
^~~~
/Users/lijian/Software/kafka_cpptest/modern-cpp-kafka-main/include/kafka/Types.h:162:32: error: 'auto' not allowed in lambda parameter
[&ret](const auto& tp) {
^~~~
/Users/lijian/Software/kafka_cpptest/modern-cpp-kafka-main/include/kafka/Types.h:183:32: error: 'auto' not allowed in lambda parameter
[&ret](const auto& tp_o) {

In file included from /Users/lijian/Software/kafka_cpptest/modern-cpp-kafka-main/include/kafka/Error.h:5:
/Users/lijian/Software/kafka_cpptest/modern-cpp-kafka-main/include/kafka/RdKafkaHelper.h:42:49: error: unknown type name 'rd_kafka_DeleteRecords_t'; did you mean 'rd_kafka_DeleteTopic_t'?
struct RkDeleteRecordsDeleter { void operator()(rd_kafka_DeleteRecords_t* p) { rd_kafka_DeleteReco...
                                                ^~~~~~~~~~~~~~~~~~~~~~~~
                                                rd_kafka_DeleteTopic_t
/usr/local/include/librdkafka/rdkafka.h:5996:39: note: 'rd_kafka_DeleteTopic_t' declared here
typedef struct rd_kafka_DeleteTopic_s rd_kafka_DeleteTopic_t;


Producer Config should include Properties

Hi, I am seeing compiler errors when including kafka/ProducerConfig.h as it cannot resolve the Properties class for

class ProducerConfig : public Properties {

Similar to kafka/ConsumerConfig.h and kafka/AdminClientConfig.h, ProducerConfig.h should probably not include kafka/Types.h and should instead include kafka/Properties.h

DEBUG macro conflict

It looks like the code below conflicts with a system DEBUG macro and produces a compile error with GCC 10.1.0. Should we use a different name, or remove it, given that it is not used anywhere in the code anyway?

: error: expected unqualified-id before numeric constant
kafka/KafkaClient.h:198:34: note: in expansion of macro ‘DEBUG’
198 | static const constexpr char* DEBUG = "debug";
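
A possible application-side workaround in the meantime, assuming the DEBUG macro comes from a system header or a build flag you cannot change:

// Drop the conflicting object-like macro before the library headers see it.
#ifdef DEBUG
#undef DEBUG
#endif
#include <kafka/KafkaClient.h>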

Make CMake ExternalProject-friendly

The good practice when writing modern CMake is to make it friendly to ExternalProject, and therefore to FetchContent.
Typical usage:

FetchContent_Declare(cppkafka
    URL      https://github.com/morganstanley/modern-cpp-kafka/archive/refs/tags/v2021.12.08.zip
    URL_HASH SHA256=3eef58d076ccaebd75670826b7ff8866b619e2085f4b59e96dc235695d3bc166
)
FetchContent_MakeAvailable(cppkafka)

target_link_libraries(mytarget PRIVATE
    fmt::fmt
    some_other_lib::core
    cppkafka::core
)

When I tried, I met the following issues:

  • The rapidjson dependency was added to my project, as it is required by the non-optional tests/unit/CMakeLists.txt module. The common practice is to disable tests by default when the PROJECT_IS_TOP_LEVEL variable is false. This requires a newer CMake version, but there is a workaround for older ones.
  • Includes inside the library's headers don't work, as CMake installs the header path as SYSTEM (target_include_directories(${PROJECT_NAME} SYSTEM INTERFACE ${LIBRDKAFKA_INCLUDE_DIR})) while the code uses quote syntax. An elegant fix would be to change the quotes to angle brackets.
  • A minor issue is that the target modern-cpp-kafka-api has no elegant alias with modular syntax like cppkafka::core, but we can live without it.

fetchBrokerMetadata API with `all_topics` parameter exposed

Currently, the fetchBrokerMetadata API in KafkaClient can be used to retrieve broker metadata. In my case, I know up front that some topic has changed, and I would like to retrieve the latest topic information and trigger a rebalance by calling this API. Internally it calls librdkafka's rd_kafka_metadata API, but so far the all_topics parameter of rd_kafka_metadata is not exposed and is hard-coded to false in fetchBrokerMetadata.

rd_kafka_resp_err_t err = rd_kafka_metadata(getClientHandle(), false, rkt.get(), &rk_metadata, convertMsDurationToInt(timeout));
                                                                 |__ `all_topics` is hard coded here

Here's librdkafka's description about this parameter:

all_topics  if non-zero: request info about all topics in cluster,
                  if zero: only request info about locally known topics.

@remark Consumer: If \p all_topics is non-zero the Metadata response
information may trigger a re-join if any subscribed topics
have changed partition count or existence state.

I wonder if there is any chance we could expose this as an optional parameter of the fetchBrokerMetadata API, so that clients can control this behavior.
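
A hypothetical shape for such a parameter (illustrative only, not the library's current API): defaulting it to false would keep today's behavior while letting callers opt in to cluster-wide metadata.

// Hypothetical overload: 'allTopics' is passed straight through to
// rd_kafka_metadata(); the default preserves the current behavior.
Optional<BrokerMetadata> fetchBrokerMetadata(const std::string& topic,
                                             std::chrono::milliseconds timeout,
                                             bool allTopics = false);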

Ubuntu 20.04 - `make` command causes "has not been declared" errors

I was able to run cmake ..

-- librdkafka include directory: /usr/include
-- librdkafka library directory: /usr/lib/x86_64-linux-gnu/
-- pthread library: /usr/lib/x86_64-linux-gnu/libpthread.so
-- With NO clang-tidy build option
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0")  
-- boost include directory: /usr/include
-- boost library directory: 
-- googletest root directory: /usr/src/googletest/googletest
-- rapidjson include directory: /usr/include
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake (found version "1.71.0") found components: program_options 
-- boost include directory: /usr/include
-- boost library directory: /usr/lib/x86_64-linux-gnu
-- Configuring done
-- Generating done
-- Build files have been written to: /home/adam/dev/modern-cpp-kafka/build

but make doesn't work:

[  2%] Building CXX object tests/unit/CMakeFiles/kafka-unit-test.dir/TestBrokerMetadata.cc.o
In file included from /home/adam/dev/modern-cpp-kafka/include/kafka/Error.h:5,
                 from /home/adam/dev/modern-cpp-kafka/include/kafka/KafkaException.h:5,
                 from /home/adam/dev/modern-cpp-kafka/include/kafka/BrokerMetadata.h:5,
                 from /home/adam/dev/modern-cpp-kafka/tests/unit/TestBrokerMetadata.cc:1:
/home/adam/dev/modern-cpp-kafka/include/kafka/RdKafkaHelper.h:42:49: error: ‘rd_kafka_DeleteRecords_t’ has not been declared
   42 | struct RkDeleteRecordsDeleter { void operator()(rd_kafka_DeleteRecords_t* p) { rd_kafka_DeleteRecords_destroy(p); } };
      |                                                 ^~~~~~~~~~~~~~~~~~~~~~~~
/home/adam/dev/modern-cpp-kafka/include/kafka/RdKafkaHelper.h: In member function ‘void kafka::RkDeleteRecordsDeleter::operator()(int*)’:
/home/adam/dev/modern-cpp-kafka/include/kafka/RdKafkaHelper.h:42:80: error: ‘rd_kafka_DeleteRecords_destroy’ was not declared in this scope; did you mean ‘rd_kafka_DeleteTopic_destroy’?
   42 | struct RkDeleteRecordsDeleter { void operator()(rd_kafka_DeleteRecords_t* p) { rd_kafka_DeleteRecords_destroy(p); } };
      |                                                                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                                                                                rd_kafka_DeleteTopic_destroy
/home/adam/dev/modern-cpp-kafka/include/kafka/RdKafkaHelper.h: At global scope:
/home/adam/dev/modern-cpp-kafka/include/kafka/RdKafkaHelper.h:43:59: error: ‘rd_kafka_DeleteRecords_t’ was not declared in this scope; did you mean ‘rd_kafka_DeleteTopic_t’?
   43 | using rd_kafka_DeleteRecords_unique_ptr = std::unique_ptr<rd_kafka_DeleteRecords_t, RkDeleteRecordsDeleter>;
      |                                                           ^~~~~~~~~~~~~~~~~~~~~~~~
      |                                                           rd_kafka_DeleteTopic_t
/home/adam/dev/modern-cpp-kafka/include/kafka/RdKafkaHelper.h:43:107: error: template argument 1 is invalid
   43 | using rd_kafka_DeleteRecords_unique_ptr = std::unique_ptr<rd_kafka_DeleteRecords_t, RkDeleteRecordsDeleter>;

C++17 / Boost

The tests need Boost too. There is also an issue with the Boost environment variable: Boost must be installed, even with C++17.

[librdkafka issue, fixed in master] Kafka consumer seek (before pause) doesn't work

  • A placeholder
    librdkafka issue: confluentinc/librdkafka#3471

  • Description
    If the Kafka consumer seeks to some offsets and then pauses the partition, after resuming the partition we find that the previous seeks don't work at all!
    (However, if we pause the partition before the seek, everything is fine, just as expected.)

  • How to reproduce
    Integration test: TEST(KafkaConsumer, DISABLED_SeekBeforePause)

specified generation id is not valid when committing offset during rebalance

I used KafkaManualCommitConsumer with a customized RebalanceCallback, and I commit the offset when a partition is revoked during a rebalance. Currently, when I have 3 consumers in the consumer group (joining at slightly different times), the program reports errors like the one below while committing offsets:

due to unexpected exception with message:
  2021-07-08 12:59:33.898233: Broker: Specified group generation id is not
  valid [22] (/.../vcpkg/installed/x64-osx/include/kafka/KafkaConsumer.h:588)

The program is the same as the sample program (https://github.com/morganstanley/modern-cpp-kafka/blob/main/doc/KafkaConsumerQuickStart.md#kafkamanualcommitconsumer) with an additional RebalanceCallback. In the RebalanceCallback, I call consumer.commitSync for the revoke rebalance event.

After some investigation, I found that in KafkaConsumer.h the customized rebalance callback is called after the partitions are assigned/unassigned. This seems incorrect based on what I read in librdkafka's documentation (https://docs.confluent.io/2.0.0/clients/librdkafka/classRdKafka_1_1RebalanceCb.html#a490a91c52724382a72380af621958741):

void rebalance_cb (RdKafka::KafkaConsumer *consumer,
                   RdKafka::ErrorCode err,
                   std::vector<RdKafka::TopicPartition*> &partitions) {
    if (err == RdKafka::ERR__ASSIGN_PARTITIONS) {
        // application may load offsets from arbitrary external
        // storage here and update \p partitions
        consumer->assign(partitions);
    } else if (err == RdKafka::ERR__REVOKE_PARTITIONS) {
        // Application may commit offsets manually here
        // if auto.commit.enable=false
        consumer->unassign();
    } else {
        std::cerr << "Rebalancing error: " <<
                     RdKafka::err2str(err) << std::endl;
    }
}

It seems the application callback should be called before assign/unassign; could you please confirm whether this is a bug?
