
spring-integration-kafka's Introduction

This repository is no longer actively maintained by VMware, Inc.

Spring Integration Adapter for Apache Kafka


The Spring Integration for Apache Kafka extension project provides inbound and outbound channel adapters and gateways for Apache Kafka. Apache Kafka is a distributed publish-subscribe messaging system that is designed for high throughput (terabytes of data) and low latency (milliseconds). For more information on Kafka and its design goals, see the Apache Kafka main page.

Note
Starting with Spring Integration 5.4.0, this project has been absorbed into the core project as its spring-integration-kafka module. This repository is archived and remains in read-only mode for informational purposes.

Starting with version 2.0, this project is a complete rewrite based on the new spring-kafka project, which uses the pure Java "new" Producer and Consumer clients provided by Kafka.

Quick Start

See the Spring Integration Kafka sample for a simple Spring Boot application that sends and receives messages.

Checking out and building

In order to build the project:

./gradlew build

In order to install this into your local maven cache:

./gradlew install

Documentation

Documentation for this extension is contained in a chapter of the Spring Integration Reference Manual.

Migrating from 3.0 to 3.1

Producer record metadata for sends performed on the outbound channel adapter is now sent only to the successChannel. In earlier versions, it was sent to the outputChannel if no successChannel was provided. A configuration sketch follows.
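For illustration, a minimal sketch of wiring the success channel on the outbound handler. This assumes the 3.1-era KafkaProducerMessageHandler API; the bean, topic, and channel names are hypothetical:

import org.springframework.context.annotation.Bean;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.core.KafkaTemplate;

public class OutboundConfig {

    @Bean
    @ServiceActivator(inputChannel = "toKafka")
    public KafkaProducerMessageHandler<String, String> kafkaHandler(
            KafkaTemplate<String, String> template) {
        KafkaProducerMessageHandler<String, String> handler =
                new KafkaProducerMessageHandler<>(template);
        handler.setTopicExpression(new LiteralExpression("someTopic"));
        // As of 3.1, producer record metadata is delivered only to this
        // channel; it is no longer sent to the output channel as a fallback.
        handler.setSendSuccessChannelName("successChannel");
        return handler;
    }
}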

Contributing

Pull requests are welcome. Please see the contributor guidelines for details.

spring-integration-kafka's People

Contributors

artembilan, az-qbradley, bellwethr, bijukunjummen, bjoernhaeuser, cameronjmayfield, garyrussell, ghillert, gmateo, heyjustin, ilayaperumalg, jmax01, marknorkin, mbogoevici, mikadev, moonkev, ocristian, safetytrick, sobychacko, spring-builds, spring-operator, tomvandenberge, trevormarshall, ukeller, walliee, wilkinsona


spring-integration-kafka's Issues

Update kafkaVersion to 0.8.2.2

Hi team,

First, thanks for all the good work you do on the spring-integration-kafka project and on Spring Integration in general.
I've run into this issue (https://issues.apache.org/jira/browse/KAFKA-2308) and would kindly ask you to update the kafkaVersion from '0.8.2.1' to '0.8.2.2'. It does not cause issues if I explicitly define the Kafka dependency as 0.8.2.2, which I do, so this is more of a nice-to-have.

Thanks again

rawHeaders are unused in KafkaMessageDrivenChannelAdapter.toMessage()

This code snippet at line 197 is dead code:

Map<String, Object> rawHeaders = kafkaMessageHeaders.getRawHeaders();
rawHeaders.put(KafkaHeaders.MESSAGE_KEY, key);
rawHeaders.put(KafkaHeaders.TOPIC, metadata.getPartition().getTopic());
rawHeaders.put(KafkaHeaders.PARTITION_ID, metadata.getPartition().getId());
rawHeaders.put(KafkaHeaders.OFFSET, metadata.getOffset());
rawHeaders.put(KafkaHeaders.NEXT_OFFSET, metadata.getNextOffset());

if (!this.autoCommitOffset) {
    rawHeaders.put(KafkaHeaders.ACKNOWLEDGMENT, acknowledgment);
}

Spring-Kafka-Integration: XSD error when integrating with Spring MVC

Hi,

I am trying to integrate Spring MVC with spring-integration-kafka, and while doing so I am getting an XSD error:

Error: ct:props-correct:4: Error for type '#Anon Type_outbound-channel-adapter'. Duplicate attribute uses with the same name and target namespace are specified. Name of duplicate attribute is order.

ct:props-correct:4: Error for type 'innerPollerType'. Duplicate attribute uses with the same name and target namespace are specified. Name of duplicate attribute is ref.

XSD:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:int="http://www.springframework.org/schema/integration"
    xmlns:stream="http://www.springframework.org/schema/integration/stream"
    xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
    xmlns:util="http://www.springframework.org/schema/util"
    xsi:schemaLocation="http://www.springframework.org/schema/integration/stream http://www.springframework.org/schema/integration/stream/spring-integration-stream-4.0.xsd
        http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration-4.0.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.0.xsd
        http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-4.0.xsd">
    <int:channel id="inputFromKafka" />
    <stream:stdout-channel-adapter id="stdout"
        channel="inputFromKafka" append-newline="true" />
    <int-kafka:inbound-channel-adapter
        id="kafkaInboundChannelAdapter" kafka-consumer-context-ref="consumerContext"
        auto-startup="false" channel="inputFromKafka">
        <int:poller fixed-delay="1" time-unit="MILLISECONDS" />
    </int-kafka:inbound-channel-adapter>
    <int-kafka:consumer-context id="consumerContext"
        consumer-timeout="1000" zookeeper-connect="zookeeperConnect"
        consumer-properties="consumerProperties">
        <int-kafka:consumer-configurations>
            <int-kafka:consumer-configuration
                group-id="default1" max-messages="5000" key-decoder="deccoder"
                value-decoder="deccoder">
                <int-kafka:topic id="suchittopic" streams="4" />
            </int-kafka:consumer-configuration>
        </int-kafka:consumer-configurations>
    </int-kafka:consumer-context>
    <int-kafka:zookeeper-connect id="zookeeperConnect"
        zk-connect="localhost:2181" zk-connection-timeout="6000"
        zk-session-timeout="6000" zk-sync-time="2000" />
    <bean id="consumerProperties"
        class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="auto.offset.reset">smallest</prop>
                <prop key="socket.receive.buffer.bytes">10485760</prop> <!-- 10M -->
                <prop key="fetch.message.max.bytes">5242880</prop>
                <prop key="auto.commit.interval.ms">1000</prop>
            </props>
        </property>
    </bean>
    <bean id="deccoder"
        class="org.springframework.integration.kafka.serializer.common.StringDecoder" />
</beans>

Pom:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.happystudio</groupId>
    <artifactId>spring</artifactId>
    <packaging>war</packaging>
    <version>0.0.1-SNAPSHOT</version>
    <name>Spring MVC Tutorial</name>
    <url>http://maven.apache.org</url>
    <properties>
        <spring.version>4.0.0.RELEASE</spring.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>javax.servlet-api</artifactId>
            <version>3.1.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-web</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-kafka</artifactId>
            <version>1.0.1.BUILD-SNAPSHOT</version>
            <exclusions>
                <exclusion>
                    <artifactId>jms</artifactId>
                    <groupId>javax.jms</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jmxri</artifactId>
                    <groupId>com.sun.jmx</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jmxtools</artifactId>
                    <groupId>com.sun.jdmk</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>log4j</artifactId>
                    <groupId>log4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-stream</artifactId>
            <version>${spring.version}</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
            <version>1.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.5</version>
        </dependency>
    </dependencies>
    <build>
        <finalName>Spring MVC Tutorial</finalName>
    </build>
</project>

The problem only occurs when I try to integrate Spring MVC into the project; otherwise everything works fine.

Notify producer message delivery status to caller

Not sure if this technically qualifies as an issue, but I am trying to figure out the best way to incorporate the new ProducerListener in my project. In my use case, I need to send a batch of messages to the Kafka broker in strict order. If one of those messages fails to be delivered to the broker, the caller needs to be notified and must stop all subsequent requests. ProducerListener looks like a great candidate to solve my problem and guarantee the delivery order.

I created a ProducerListener and assigned it to the producer configuration. The caller sends these messages to a MessageChannel which is wired up to an outbound-channel-adapter. Ideally, the caller should be notified of whether the message delivery to the broker succeeded. However, because the ProducerListener and its callbacks are pre-defined with the producer configuration, I found myself stuck: I can't get the callback to properly notify the caller.

To work around this, I added a ConcurrentHashSet to the ProducerListener to keep track of all the failed delivery attempts (a sketch follows). This way, the caller can check it the next time it tries to send messages. This is far from ideal, though.
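A rough sketch of that workaround, assuming the spring-kafka 1.x ProducerListener callback signatures; the class name and key-based tracking are illustrative, not from the reporter's project:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.producer.RecordMetadata;
import org.springframework.kafka.support.ProducerListener;

public class FailureTrackingProducerListener implements ProducerListener<String, String> {

    // Keys of messages whose delivery failed; the caller checks this
    // before attempting the next send.
    private final Set<String> failedKeys = ConcurrentHashMap.newKeySet();

    @Override
    public void onSuccess(String topic, Integer partition, String key,
            String value, RecordMetadata recordMetadata) {
        // nothing to do on success
    }

    @Override
    public void onError(String topic, Integer partition, String key,
            String value, Exception exception) {
        failedKeys.add(key);
    }

    @Override
    public boolean isInterestedInSuccess() {
        return false; // only failures matter for this use case
    }

    public boolean hasFailed(String key) {
        return failedKeys.contains(key);
    }
}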

Any suggestions on this? I am just curious if I am using it as it's intended to be. Thanks.

Reinstate the ability to send messages synchronously

In version 1.3 we used to have a setting on the KafkaProducerMessageHandler that allowed choosing between asynchronous (the default) and synchronous sending of messages (via the old ProducerConfiguration).

While asynchronous sending is preferred for performance reasons, synchronous sending allows the sender to block until the broker acknowledges the message and ensures that any sending exception is thrown on the caller's thread.

With the migration to the KafkaTemplate, the ability to send messages synchronously seems to have been lost. (A possible workaround is sketched below.)
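One possible workaround, assuming the spring-kafka KafkaTemplate API: block on the ListenableFuture returned by send(), which surfaces any send exception on the caller's thread. The topic name and timeout below are placeholders:

import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;

public class SyncSender {

    private final KafkaTemplate<String, String> template;

    public SyncSender(KafkaTemplate<String, String> template) {
        this.template = template;
    }

    public void sendSync(String payload) throws InterruptedException, TimeoutException {
        ListenableFuture<SendResult<String, String>> future =
                this.template.send("someTopic", payload);
        try {
            future.get(10, TimeUnit.SECONDS); // block until acknowledged or failed
        }
        catch (ExecutionException e) {
            // rethrow the sending exception on the caller's thread
            throw new IllegalStateException("Send failed", e.getCause());
        }
    }
}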

Failed to shut down Tomcat

Tomcat does not shut down when using spring-integration-kafka with a message-driven-channel-adapter.
I used spring-integration-kafka 1.3.0, Tomcat 8.0.30, and Kafka 0.8.2.
Upon a shutdown request, I got a lot of "memory leak" warnings and errors:

SEVERE [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [si-kafka-shutdown] created a ThreadLocal with key of type [scala.util.DynamicVariable$$anon$1] (value [scala.util.DynamicVariable$$anon$1@280cb0ad]) and a value of type [scala.None$] (value [None]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
SEVERE [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [si-kafka-shutdown] created a ThreadLocal with key of type [scala.util.DynamicVariable$$anon$1] (value [scala.util.DynamicVariable$$anon$1@18a20278]) and a value of type [scala.Some] (value [Some([1.87] failure: end of input

{"jmx_port":-1,"timestamp":"1450784942805","host":"localhost","version":1,"port":9092}
                                                                                      ^)]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.

See example project https://github.com/mbolgari/si-kafka-shutdown which also contains catalina.out.

How to Override the topic Name

I am providing the topic name as below, but I see the message is still posted to the topic configured in the XML file.

Please let me know what is missing.

@Inject
MessageChannel inputToKafka;

public void sendMessageToKafka(String message) {
    inputToKafka.send(
            MessageBuilder.withPayload(message)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, "1")
                    .setHeader(KafkaHeaders.TOPIC, "newtopic")
                    .build());
}

XML File

<int-kafka:producer-context id="kafkaProducerContext" >
        <int-kafka:producer-configurations>
            <int-kafka:producer-configuration broker-list="host:6667"                      
                       key-class-type="java.lang.String"
                       value-class-type="java.lang.String"
                       topic="sample"
                       value-serializer="kafkaSerializer"
                       key-serializer="kafkaSerializer"
                       compression-type="none"/>
        </int-kafka:producer-configurations>
    </int-kafka:producer-context>

Add an id field to the consumer-configuration element

There are places in my code where I'd like to be able to refer to an individual consumer configuration (e.g., in a custom splitter or in an advice).
With 1.0.0.M3a I need two things to gain access to an individual consumer group:

  1. A pointer to the consumer-context (easily done, as you provide an "id" attribute for this)
  2. A String field that stores the group-id of the consumer group. This is slightly more challenging, since I either have to create a Spring bean with this name and share it between the other classes and the consumer-configuration, or hard-code the string in multiple places in the config files.

I have the following configuration:

    <int-kafka:consumer-context id="kafkaConsumerContext" consumer-timeout="4000" zookeeper-connect="zookeeperConnect" consumer-properties="kafkaConsumerProperties">
        <int-kafka:consumer-configurations>
            <int-kafka:consumer-configuration group-id="#{bofConsumerGroupId}" max-messages="20000">
                <int-kafka:topic id="bofTopic" streams="3"/>
            </int-kafka:consumer-configuration>
        </int-kafka:consumer-configurations>
    </int-kafka:consumer-context>

    <bean id="bofConsumerGroupId" class="java.lang.String">
        <constructor-arg value="bofConsumerGroup" />
    </bean>
    <int:splitter ref="kafkaMessageSplitter" input-channel="exchangeKafkaFusionInboundSpringExecutorChannel" output-channel="channel3" />
    <bean class="com.xxx.common.util.springintegration.KafkaMessageSplitter" id="kafkaMessageSplitter">
        <property name="consumerContext" ref="kafkaConsumerContext" />
        <property name="consumerGroupId" value="#{bofConsumerGroupId}" />
    </bean>
    <int-kafka:inbound-channel-adapter id="kafkaInboundChannelAdapter" kafka-consumer-context-ref="kafkaConsumerContext" auto-startup="true" channel="exchangeKafkaFusionInboundSpringExecutorChannel">
        <int:poller fixed-delay="10" time-unit="MILLISECONDS" max-messages-per-poll="5">
            <int:advice-chain>
                <bean id="kafkaConsumerAfterAdvice" class="com.xxx.common.util.springintegration.KafkaConsumerAfterAdvice">
                    <property name="consumerContext" ref="kafkaConsumerContext" />
                    <property name="consumerGroupId" value="#{bofConsumerGroupId}" />
                </bean>
            </int:advice-chain>
        </int:poller>
    </int-kafka:inbound-channel-adapter>

Internally, those custom classes then need to do something like this to get the individual consumer configuration:

    ConsumerConfiguration<String, Object> consumerConfig = consumerContext.getConsumerConfiguration(consumerGroupId);

Ideally what I'd like to have is this:

    <int-kafka:consumer-context id="kafkaConsumerContext" consumer-timeout="4000" zookeeper-connect="zookeeperConnect" consumer-properties="kafkaConsumerProperties">
        <int-kafka:consumer-configurations>
            <int-kafka:consumer-configuration id="bofConsumerConfiguration" group-id="bofConsumerGroupId" max-messages="20000">
                <int-kafka:topic id="bofTopic" streams="3"/>
            </int-kafka:consumer-configuration>
        </int-kafka:consumer-configurations>
    </int-kafka:consumer-context>

    <int:splitter ref="kafkaMessageSplitter" input-channel="exchangeKafkaFusionInboundSpringExecutorChannel" output-channel="channel3" />
    <bean class="com.xxx.common.util.springintegration.KafkaMessageSplitter" id="kafkaMessageSplitter">
        <property name="consumerContext" ref="kafkaConsumerContext" />
    </bean>
    <int-kafka:inbound-channel-adapter id="kafkaInboundChannelAdapter" kafka-consumer-context-ref="kafkaConsumerContext" auto-startup="true" channel="exchangeKafkaFusionInboundSpringExecutorChannel">
        <int:poller fixed-delay="10" time-unit="MILLISECONDS" max-messages-per-poll="5">
            <int:advice-chain>
                <bean id="kafkaConsumerAfterAdvice" class="com.xxx.common.util.springintegration.KafkaConsumerAfterAdvice">
                    <property name="consumerConfiguration" ref="bofConsumerConfiguration" />
                </bean>
            </int:advice-chain>
        </int:poller>
    </int-kafka:inbound-channel-adapter>

Would it be possible for you to allow an id attribute on the consumer-configuration (which would be used as the id for a Spring bean)?

Spring unit test problem

Hi there,
I ran into some problems today when using Spring Kafka Integration.
The "normal test code" below runs well and Kafka receives the messages, but the "spring unit test code" reports that the send succeeded while Kafka actually receives nothing.
Can you help me fix it?


########### test.xml start
<int:channel id="common-message.producer">
        <int:queue/>
    </int:channel>
    <int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
                                        kafka-producer-context-ref="kafkaProducerContext"
                                        auto-startup="false"
                                        channel="common-message.producer"
                                        order="3">
        <int:poller fixed-delay="10" time-unit="MILLISECONDS" receive-timeout="0" task-executor="taskExecutor"/>
    </int-kafka:outbound-channel-adapter>
    <task:executor id="taskExecutor" pool-size="5" keep-alive="120" queue-capacity="500"/>
    <bean id="producerProperties"
          class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="topic.metadata.refresh.interval.ms">3600000</prop>
                <prop key="message.send.max.retries">5</prop>
                <prop key="serializer.class">kafka.serializer.StringEncoder</prop>
                <prop key="request.required.acks">1</prop>
            </props>
        </property>
    </bean>
    <!--    <bean id="kafkaEncoder"
          class="org.springframework.integration.kafka.serializer.avro.AvroReflectDatumBackedKafkaEncoder">
        <constructor-arg value="java.lang.String"/>
    </bean>-->    
    <bean id="kafkaEncoder" class="org.springframework.integration.kafka.serializer.common.StringEncoder" />
    <int-kafka:producer-context id="kafkaProducerContext" producer-properties="producerProperties">
        <int-kafka:producer-configurations>
            <int-kafka:producer-configuration broker-list="${mq.broker_list}" 
                                              key-class-type="java.lang.String"
                                              value-class-type="java.lang.String"
                                              topic="test"
                                              value-encoder="kafkaEncoder"
                                              key-encoder="kafkaEncoder"
                                              compression-type="none"/>
        </int-kafka:producer-configurations>
    </int-kafka:producer-context>

    <context:component-scan base-package="org.lottery.common.message.impl.kafka" />
########### test.xml end
########### spring unit test code start
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath*:/common-message-test.xml"})
public class MessageProducerTest {

    @Autowired
    MessageProducer messageProducer;

    @Test
    public void testSend() throws InterruptedException {
        String payload = "11111111111";
        messageProducer.send("test", payload);
        Thread.sleep(10000);
        System.out.println("passed");
    }
}
########### spring unit test code end
########### normal test code start
private static final String CONFIG = "classpath*:/common-message-test.xml";
    private static final Random rand = new Random();

    @Test
    public void testSend() {
        final ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext(CONFIG);
        ctx.start();

        final MessageChannel channel = ctx.getBean("common-message.producer", MessageChannel.class);

        channel.send(MessageBuilder.withPayload("from messageChannel" + System.currentTimeMillis())
                .setHeader(KafkaHeaders.MESSAGE_KEY, "key")
                .setHeader(KafkaHeaders.TOPIC, "test")
                .build());

        MessageProducer messageProducer = ctx.getBean(MessageProducer.class);
        messageProducer.send("test", "from messageProducer" + System.currentTimeMillis());
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        ctx.close();
    }
########### normal test code end
########### test result start

spring unit test result

2015-09-17 17:41:03
[DEBUG]-[Thread: main]-[org.springframework.integration.channel.AbstractMessageChannel.send()]: postSend (sent=true) on channel 'common-message.producer', message: GenericMessage [payload=11111111111, headers={timestamp=1442482863729, id=b30d5275-5b16-aacb-2fb3-25c4242f12a1, kafka_messageKey=key, kafka_topic=test}]

normal unit test result

[DEBUG]-[Thread: main]-[org.springframework.integration.channel.AbstractMessageChannel.send()]: postSend (sent=true) on channel 'common-message.producer', message: GenericMessage [payload=from messageChannel1442483971691, headers={timestamp=1442483971697, id=29c3a705-eb16-34b8-1be7-d5400d89a03d, kafka_messageKey=key, kafka_topic=test}]

2015-09-17 17:59:31
[DEBUG]-[Thread: main]-[org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean()]: Returning cached instance of singleton bean 'messageProducerKafkaImpl'

2015-09-17 17:59:31
[DEBUG]-[Thread: main]-[org.springframework.integration.channel.AbstractMessageChannel.send()]: preSend on channel 'common-message.producer', message: GenericMessage [payload=from messageProducer1442483971697, headers={timestamp=1442483971697, id=e32b4087-5629-cf91-7d64-4f68b606a86b, kafka_messageKey=key, kafka_topic=test}]

2015-09-17 17:59:31
[DEBUG]-[Thread: main]-[org.springframework.integration.channel.AbstractMessageChannel.send()]: postSend (sent=true) on channel 'common-message.producer', message: GenericMessage [payload=from messageProducer1442483971697, headers={timestamp=1442483971697, id=e32b4087-5629-cf91-7d64-4f68b606a86b, kafka_messageKey=key, kafka_topic=test}]

2015-09-17 17:59:31
[DEBUG]-[Thread: taskExecutor-2]-[org.springframework.integration.channel.AbstractPollableChannel.receive()]: postReceive on channel 'common-message.producer', message: GenericMessage [payload=from messageChannel1442483971691, headers={timestamp=1442483971697, id=29c3a705-eb16-34b8-1be7-d5400d89a03d, kafka_messageKey=key, kafka_topic=test}]

2015-09-17 17:59:31
[DEBUG]-[Thread: taskExecutor-2]-[org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll()]: Poll resulted in Message: GenericMessage [payload=from messageChannel1442483971691, headers={timestamp=1442483971697, id=29c3a705-eb16-34b8-1be7-d5400d89a03d, kafka_messageKey=key, kafka_topic=test}]
![image](https://cloud.githubusercontent.com/assets/5570216/9930399/09a70432-5d66-11e5-81ac-afcbcb67d85f.png)
########### test result end

A Spring Boot application should start up regardless of whether the Apache Kafka server is available or not

I have a Spring Boot application that I need to run regardless of whether the Apache Kafka server is available. At present, if the Apache Kafka server is down, I am not able to start my Spring Boot application, and I cannot find a way to make it start up without the Apache Kafka server being available.

Basically, I need a way to override KafkaMessageDrivenChannelAdapter.doStart() (one alternative is sketched below).
I am using spring-integration-kafka with the spring-integration-java-dsl.
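One approach that avoids overriding doStart(): declare the adapter with autoStartup=false so the application context starts even when the broker is unreachable, then start the adapter manually once Kafka is available. This is a sketch assuming the spring-integration-kafka 3.x API; bean and channel names are hypothetical:

import org.springframework.context.annotation.Bean;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;

public class InboundConfig {

    @Bean
    public KafkaMessageDrivenChannelAdapter<String, String> kafkaInbound(
            AbstractMessageListenerContainer<String, String> container) {
        KafkaMessageDrivenChannelAdapter<String, String> adapter =
                new KafkaMessageDrivenChannelAdapter<>(container);
        adapter.setOutputChannelName("fromKafka");
        adapter.setAutoStartup(false); // the context starts even if Kafka is down
        return adapter;
    }
}

// Later, e.g. from a scheduled task, once the broker is reachable:
// context.getBean(KafkaMessageDrivenChannelAdapter.class).start();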

Version 1.0.0.RC1 does not work with Kafka 0.8.2 on the consumer side

I have 2 projects

https://github.com/mohitarora/es-cqrs-example/tree/master/cart-command-services - This one produces messages and publishes to Kafka and works perfectly fine.

https://github.com/mohitarora/es-cqrs-example/tree/master/cart-query-services - This one consumes messages published by the previous project.

Both of these projects use Spring Boot. There is no error, but the messages are never consumed; I am fairly sure the consumer beans are not initialized.

Am I doing something wrong?

KafkaMessageDrivenChannelAdapter does not connect to new unknown leaders

We have noticed that the KafkaMessageDrivenChannelAdapter never refreshes the list of brokers that it is connected to, which means that if:

  • A Kafka broker is down when the Spring app starts, or
  • A Kafka broker is added to the cluster after the Spring app starts, or
  • A Kafka broker is not a leader of any partition when the Spring app starts

and the new Kafka broker becomes the leader of a partition, the Spring app will not consume messages from that partition.

Probably org.springframework.integration.kafka.listener.KafkaMessageListenerContainer.FetchTask.UpdateLeadersTask should create new FetchTasks for unknown/new brokers.

The easiest way to reproduce:

  • Create a Kafka cluster with 2 brokers
  • Create a topic with 1 partition
  • Configure Spring to consume that topic
  • Kill the leader broker.
  • Spring stops consuming messages

Spring Kafka Integration: Consumer code is not getting invoked.

Hi,

I am building an application using Spring MVC and Spring Kafka Integration. On a URL hit, the code produces messages, and once a message is sent I want my consumer code to be invoked so that it can absorb the message. The first part, sending the message to the broker, works fine, but the remaining part, consuming the message automatically, does not.

  1. Spring MVC / Spring Integration version is 4.1.5.RELEASE.
  2. spring-integration-kafka is 1.2.0.RELEASE.

Pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com</groupId>
    <artifactId>KafKaWithSpringMVC</artifactId>
    <packaging>war</packaging>
    <version>0.0.1-SNAPSHOT</version>
    <name>Spring MVC Tutorial</name>
    <url>http://maven.apache.org</url>
    <properties>
        <spring.version>4.1.5.RELEASE</spring.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>javax.servlet-api</artifactId>
            <version>3.1.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-web</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-kafka</artifactId>
            <version>1.2.0.RELEASE</version>
            <exclusions>
                <exclusion>
                    <artifactId>jms</artifactId>
                    <groupId>javax.jms</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jmxri</artifactId>
                    <groupId>com.sun.jmx</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jmxtools</artifactId>
                    <groupId>com.sun.jdmk</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>log4j</artifactId>
                    <groupId>log4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-stream</artifactId>
            <version>${spring.version}</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
            <version>1.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.5</version>
        </dependency>
        <!-- <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-clients</artifactId> 
            <version>0.8.2.1</version> </dependency> -->
    </dependencies>
    <build>
        <finalName>KafKaWithSpringMVC</finalName>
    </build>
</project>

Producer Code:

package com;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.kafka.support.KafkaHeaders;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
@Controller
@EnableWebMvc
public class ProducerController {
    private static final Log LOG = LogFactory.getLog(ProducerController.class);
    @Autowired
    private MessageChannel inputToKafka;
    @RequestMapping("/producer")
    public String sendMessage(@RequestParam(value = "name", required = false, defaultValue = "World") String name,
            Model model) {
        model.addAttribute("name", name);
        for (int i = 30; i < 53; i++) {
            boolean flag = inputToKafka.send(
                    MessageBuilder.withPayload("Message: " + i).setHeader(KafkaHeaders.TOPIC, "testtopic").setHeader(KafkaHeaders.PARTITION_ID, 3).build());
            System.out.println("Message send: " + flag);
            LOG.info("message sent " + i + "--");
        }
        return "producer";
    }
}

Consumer Code:

package com;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.MessageChannel;
import org.springframework.stereotype.Component;
@Component
public class ConsumerController {
    private static final Log LOG = LogFactory.getLog(ConsumerController.class); 
    public void processMessage(Map<String, Map<Integer, List<byte[]>>> msgs) {
        for (Map.Entry<String, Map<Integer, List<byte[]>>> entry : msgs.entrySet()) {
            LOG.debug("Topic:" + entry.getKey());
            ConcurrentHashMap<Integer, List<byte[]>> messages = (ConcurrentHashMap<Integer, List<byte[]>>) entry.getValue();
            LOG.debug("\n**** Partition: \n");
            Set<Integer> keys = messages.keySet();
            for (Integer i : keys)
                LOG.debug("p:" + i);
            LOG.debug("\n**************\n");
            Collection<List<byte[]>> values = messages.values();
            for (Iterator<List<byte[]>> iterator = values.iterator(); iterator.hasNext();) {
                List<byte[]> list = iterator.next();
                for (byte[] object : list) {
                    String message = new String(object);
                    LOG.debug("Message: " + message);
                    try {
                        System.out.println("Message received: " + message);
                    } catch (Exception e) {
                        LOG.error(String.format("Failed to process message %s", message));
                    }
                }
            }
        }
    }
}

Kafka Producer configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
       xmlns:task="http://www.springframework.org/schema/task"
       xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">
    <int:channel id="inputToKafka">
        <int:queue />
    </int:channel>
    <int-kafka:outbound-channel-adapter
        id="kafkaOutboundChannelAdapter" kafka-producer-context-ref="kafkaProducerContext"
        channel="inputToKafka" >
        <int:poller fixed-delay="1000" time-unit="MILLISECONDS"
            receive-timeout="0" task-executor="taskExecutor" />
    </int-kafka:outbound-channel-adapter>
    <task:executor id="taskExecutor" pool-size="5"
        keep-alive="120" queue-capacity="10000" />
    <int-kafka:producer-context id="kafkaProducerContext">
        <int-kafka:producer-configurations>
            <int-kafka:producer-configuration
                broker-list="localhost:9092" key-class-type="java.lang.String"
                key-encoder="encoder" value-class-type="java.lang.String" 
                value-encoder="encoder" partitioner="partitioner" topic="testtopic"/>
        </int-kafka:producer-configurations>
    </int-kafka:producer-context>
    <bean id="encoder"
        class="org.springframework.integration.kafka.serializer.common.StringEncoder" />
    <bean id="partitioner"
        class="org.springframework.integration.kafka.support.DefaultPartitioner" />
</beans>

Kafka Consumer configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:int="http://www.springframework.org/schema/integration"
    xmlns:stream="http://www.springframework.org/schema/integration/stream"
    xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
    xmlns:task="http://www.springframework.org/schema/task"
    xsi:schemaLocation="http://www.springframework.org/schema/integration/stream http://www.springframework.org/schema/integration/stream/spring-integration-stream.xsd
        http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">
    <int:channel id="inputFromKafka">
        <int:dispatcher task-executor="kafkaMessageExecutor" />
    </int:channel>
    <task:executor id="kafkaMessageExecutor" pool-size="0-10"
        keep-alive="120" queue-capacity="500" />
    <int-kafka:zookeeper-connect id="zookeeperConnect"
        zk-connect="localhost:2181" zk-connection-timeout="6000"
        zk-session-timeout="80" zk-sync-time="2000" />
    <int-kafka:inbound-channel-adapter
        id="kafkaInboundChannelAdapter" kafka-consumer-context-ref="consumerContext"
        auto-startup="true" channel="inputFromKafka">
        <int:poller fixed-delay="2000" time-unit="MILLISECONDS" />
    </int-kafka:inbound-channel-adapter>
    <bean id="consumerProperties"
        class="org.springframework.beans.factory.config.PropertiesFactoryBean">
        <property name="properties">
            <props>
                <prop key="auto.offset.reset">smallest</prop>
                <prop key="socket.receive.buffer.bytes">10485760</prop> <!-- 10M -->
                <prop key="fetch.message.max.bytes">5242880</prop>
                <prop key="auto.commit.interval.ms">1000</prop>
            </props>
        </property>
    </bean>
    <int:outbound-channel-adapter channel="inputFromKafka"
        ref="consumerController" method="processMessage" />
    <int-kafka:consumer-context id="consumerContext"
        consumer-timeout="1000" zookeeper-connect="zookeeperConnect"
        consumer-properties="consumerProperties">
        <int-kafka:consumer-configurations>
            <int-kafka:consumer-configuration
                group-id="default1" max-messages="5000" key-decoder="deccoder"
                value-decoder="deccoder">
                <int-kafka:topic id="testtopic" streams="3" />
            </int-kafka:consumer-configuration>
            <!-- <int-kafka:consumer-configuration group-id="default2" max-messages="50"> 
                <int-kafka:topic id="test2" streams="4"/> </int-kafka:consumer-configuration> 
                <int-kafka:consumer-configuration group-id="default3" max-messages="10"> 
                <int-kafka:topic-filter pattern="regextopic.*" streams="4" exclude="false"/> 
                </int-kafka:consumer-configuration> -->
        </int-kafka:consumer-configurations>
    </int-kafka:consumer-context>
    <bean id="deccoder" class="org.springframework.integration.kafka.serializer.common.StringDecoder" />
</beans>

Messages don't appear to be enqueueing

First off, thanks so much for building this! It looks terrific, but unfortunately thus far I have been unable to get messages to actually enqueue. I'm running ZK and Kafka locally and didn't see anything reaching Kafka, so I stepped through the code to this point:

if (this.queue instanceof BlockingQueue) {
    BlockingQueue<Message<?>> blockingQueue = (BlockingQueue<Message<?>>) this.queue;
    if (timeout > 0) {
        return blockingQueue.offer(message, timeout, TimeUnit.MILLISECONDS);
    }
    if (timeout == 0) {
        return blockingQueue.offer(message);
    }
    blockingQueue.put(message);
    return true;
}

I set a breakpoint after blockingQueue.put(message); the queue shows as empty, but the call returns true. Is it possible for the message to be handled immediately even when debugging? I had expected to see a single entry.

This is my xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
       xmlns:stream="http://www.springframework.org/schema/integration/stream"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:util="http://www.springframework.org/schema/util"
       xmlns:task="http://www.springframework.org/schema/task"
       xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd 
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd 
        http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd
        http://www.springframework.org/schema/integration/stream http://www.springframework.org/schema/integration/stream/spring-integration-stream.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">


    <int:channel id="inputToKafka">
        <int:queue/>
    </int:channel>

    <int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
                                        kafka-producer-context-ref="kafkaProducerContext"
                                        auto-startup="true"
                                        channel="inputToKafka"
                                        order="3"
                                        topic="fanflow">
        <int:poller fixed-delay="1000" time-unit="MILLISECONDS" receive-timeout="0" task-executor="taskExecutor"/>
    </int-kafka:outbound-channel-adapter>

    <task:executor id="taskExecutor" pool-size="5" keep-alive="120" queue-capacity="500" />

    <int-kafka:producer-context id="kafkaProducerContext">
        <int-kafka:producer-configurations>
            <int-kafka:producer-configuration broker-list="localhost:9092"
                                              topic="fanflow"
                                              compression-codec="default"/>
        </int-kafka:producer-configurations>
    </int-kafka:producer-context>
</beans>

And my java:

package com.fanatics.fanflow.collector.store;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.integration.kafka.support.KafkaHeaders;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.stereotype.Service;

import com.fanatics.fanflow.event.Event;

@Service
public class KafkaStore implements Store {

    @Autowired
    ApplicationContext ctx;

    public SaveResult storeEvent(Event event) {
        SaveResult ot = new SaveResult();
        final MessageChannel channel = ctx.getBean("inputToKafka", MessageChannel.class);
        final Message<Event> message = MessageBuilder.withPayload(event).setHeader(KafkaHeaders.TOPIC, "fanflow").build();
        boolean sent = channel.send(message, -1);
        if (!sent) {
            ot.addErrorMessage("Event not logged to kafka.");
        }
        return ot;
    }
}

I've basically copied from the test XML and the documentation in the README - please let me know what I've missed!

JMX (JConsole) shows all spring-integration-kafka components with double-quotes around them

I'm using spring-integration-kafka 1.1.0.RELEASE (from the tag of that name) with Spring Integration 4.1.0.RELEASE. Looking at what is published in JMX (JConsole), all spring-integration-kafka components are shown with double-quotes around their names (see attached). Note that this also occurs when using Spring Integration 4.0.3.RELEASE.

In case it's needed this is how I'm exporting to JMX and the RMI server.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:util="http://www.springframework.org/schema/util"
    xmlns:int-jmx="http://www.springframework.org/schema/integration/jmx"
    xsi:schemaLocation="http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd
        http://www.springframework.org/schema/integration/jmx http://www.springframework.org/schema/integration/jmx/spring-integration-jmx.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- ################################################################################################################################### -->
    <!-- Start JMX registry and client connector -->

    <!-- The port that this will listen on is defined via standard Java property name (can't be changed) -->
    <!-- Name is "rmi.jmx.server.port" and we set that in the rmi.properties file and Spring loads it in application-properties-context.xml -->
    <context:mbean-server id="mbeanServer" />

    <!-- http://theholyjava.wordpress.com/2010/09/16/exposing-a-pojo-as-a-jmx-mbean-easily-with-spring/ -->
    <bean id="rmiRegistry" class="org.springframework.remoting.rmi.RmiRegistryFactoryBean" >
        <property name="port" value="${rmi.jmx.server.port}"/>
    </bean>

    <!-- Dynamically get the name rather than hardcoding to 127.0.0.1 so that we can export this to a file in the Main class for remote JMX connections -->
    <bean id="localhost" class="java.net.InetAddress" factory-method="getLocalHost"/>   
    <bean id="hostname" factory-bean="localhost" factory-method="getHostName"/>
    <bean id="jmxServiceUrl" class="java.lang.String">
        <constructor-arg value="service:jmx:rmi://#{hostname}/jndi/rmi://#{hostname}:${rmi.jmx.server.port}/JMXConnectorServer" />
    </bean>

    <!-- Expose JMX over JMXMP -->
    <!-- http://www.springbyexample.org/examples/spring-jmx.html -->
    <!-- http://docs.spring.io/spring/docs/current/spring-framework-reference/html/jmx.html -->
    <bean id="jmxServerConnector" class="org.springframework.jmx.support.ConnectorServerFactoryBean" depends-on="rmiRegistry">
        <property name="objectName" value="connector:name=rmi" />
        <property name="serviceUrl" ref="jmxServiceUrl" />
    </bean>



    <!-- ################################################################################################################################### -->
    <!-- Create team-specific bean instances that will be exported to JMX -->

    <!-- http://docs.spring.io/spring/docs/current/spring-framework-reference/html/jmx.html -->
    <bean id="controlBusJmxService" class="com.xxx.common.controlbus.service.ControlBusJmxServiceImpl" >
        <property name="processorName" value="${processor.name}"/>
        <!-- http://docs.spring.io/spring-integration/reference/htmlsingle/#jmx-shutdown        -->
        <property name="integrationMBeanExporter" ref="integrationMBeanExporter"/>
    </bean>

    <!-- This bean implements org.springframework.context.SmartLifecycle to be able to set the -->
    <!-- "phase" of this bean to be the very last bean to be created.  This allows us to query -->
    <!-- this object to know if server is fully started.                                       -->
    <!-- You'll see this in the log when this is started...                                    -->
    <!--   2014-08-28T10:39:39,818 [main] [INFO ] (org.springframework.context.support.DefaultLifecycleProcessor) - Starting beans in phase 2147483647 -->
    <bean id="serverStartedJmxMonitor" class="com.xxx.common.controlbus.service.ServerStartedJmxMonitorImpl" />



    <!-- ################################################################################################################################### -->
    <!-- Export JMX beans -->

    <int-jmx:mbean-export id="integrationMBeanExporter" default-domain="spring.application" server="mbeanServer"/>
    <!-- Under the covers this one really uses class org.springframework.jmx.export.annotation.AnnotationMBeanExporter -->
    <context:mbean-export  default-domain="spring.application" />
    <context:component-scan base-package="org.springframework.integration.service" />
    <bean id="mbeanExporter" class="org.springframework.jmx.export.MBeanExporter">
        <property name="server" ref="mbeanServer"/>
        <property name="beans">
            <map>
                <!-- For creating folders in JMX see http://stackoverflow.com/questions/20669928/is-it-possible-to-create-jmx-subdomains -->
                <entry key="TMG:00=CmeToFusion,name=ServerStartedJmxMonitor" value-ref="serverStartedJmxMonitor"/>
                <!-- This one is special, it lets us gracefully shutdown our server: http://docs.spring.io/spring-integration/reference/htmlsingle/#jmx-mbean-shutdown -->
                <entry key="TMG:00=CmeToFusion,name=SpringIntegrationMBeanExporter" value-ref="integrationMBeanExporter"/>
            </map>
        </property>
    </bean>

    <!-- ======================= BEGIN ================================== -->
    <!-- Customize exact methods and descriptions used for some JMX beans -->
    <!-- ================================================================ -->
    <!-- See http://docs.spring.io/spring/docs/current/spring-framework-reference/html/jmx.html -->
    <bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
        <property name="assembler" ref="assembler"/>
        <property name="namingStrategy" ref="namingStrategy"/>
        <property name="autodetect" value="true"/>
        <property name="beans">
            <map>
                <!-- For creating folders in JMX see http://stackoverflow.com/questions/20669928/is-it-possible-to-create-jmx-subdomains -->
                <entry key="TMG:00=CmeToFusion,name=ControlBusJmxService" value-ref="controlBusJmxService"/>
            </map>
        </property>
    </bean>

    <bean id="jmxAttributeSource" class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource"/>

    <!-- will create management interface using annotation metadata -->
    <bean id="assembler" class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
        <property name="attributeSource" ref="jmxAttributeSource"/>
    </bean>

    <!-- will pick up the ObjectName from the annotation -->
    <bean id="namingStrategy" class="org.springframework.jmx.export.naming.MetadataNamingStrategy">
        <property name="attributeSource" ref="jmxAttributeSource"/>
    </bean>
    <!-- ======================= END ================================== -->
</beans>

[Attached screenshot: spring-integration-kafka-jconsole]

Update version 1.1.0.RELEASE to allow using Spring Integration version 4.1.0.RELEASE

In spring-integration-extensions\spring-integration-kafka\src\main\resources\org\springframework\integration\config\xml\spring-integration-kafka-1.0.xsd
you have this:

<xsd:import namespace="http://www.springframework.org/schema/integration" schemaLocation="http://www.springframework.org/schema/integration/spring-integration-4.0.xsd"/>

Spring Integration 4.1.0.RELEASE was released a few weeks ago and it has this in the jar:

META-INF/spring.schemas
http\://www.springframework.org/schema/integration/spring-integration-4.1.xsd=org/springframework/integration/config/xml/spring-integration-4.1.xsd
http\://www.springframework.org/schema/integration/spring-integration.xsd=org/springframework/integration/config/xml/spring-integration-4.1.xsd

The spring-integration-kafka XSD file, however, points to spring-integration-4.0.xsd, not spring-integration-4.1.xsd (as build.gradle was pointing to springIntegrationVersion = '4.0.3.RELEASE').
Is it possible to create a new branch named something like 1.1.1.RELEASE?
If you do, please also update spring-integration-extensions/spring-integration-kafka/gradle.properties to version=1.1.1.RELEASE (that looks to have been missed in 1.1.0.RELEASE, since the artifacts build with a "SNAPSHOT" name even when pulled from tag 1.1.0.RELEASE).

KafkaMessageListenerContainer's cached fetchOffsets cause an issue with a circuit breaker

Hey,

I ran into an issue when applying a circuit breaker around my consumer runnable. With the circuit breaker, when my consumer logic throws any exception, the circuit opens and requests are short-circuited. The problem is that when the circuit transitions back from open to closed, the KafkaMessageListenerContainer does not fetch from the offset of the last failure. I dug into the code and found that KafkaMessageListenerContainer caches the fetchOffsets after it is initialized. I see why it caches the offsets, and it does make sense.

My current workaround is to stop the container, wait a few seconds, and restart it in my circuit breaker fallback logic (sketched below), since every time the container is initialized it asks the OffsetManager for all the offsets.
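A sketch of that fallback, assuming the container exposes the standard SmartLifecycle start/stop methods; the wait time is arbitrary:

import org.springframework.integration.kafka.listener.KafkaMessageListenerContainer;

public class ContainerRestartFallback {

    // Restarting the container forces it to re-read offsets from the
    // OffsetManager instead of its internal fetchOffsets cache.
    public void restart(KafkaMessageListenerContainer container) throws InterruptedException {
        container.stop();      // discards the cached fetch offsets
        Thread.sleep(5000);    // give in-flight fetch tasks time to finish
        container.start();     // re-initializes offsets from the OffsetManager
    }
}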

My question is whether there is a better solution that I did not catch. Also, I am using the topic offset manager, and I do not need to read the offset every time. Is there some way to turn off the offset cache in KafkaMessageListenerContainer?

Thank you.

Support OffsetManagement "kafka"

Since 0.8.2, Kafka brokers have offered an offset management feature via OffsetRequest/OffsetCommit messages.

The consumer should have an option to use this service instead of emulating it. (A sketch follows below.)
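For illustration, a minimal sketch (an assumption about the desired configuration, not project code) of opting in via the high-level consumer's own properties:

    Properties props = new Properties();
    props.put("zookeeper.connect", "localhost:2181");  // placeholder address
    props.put("group.id", "my-group");                 // placeholder group
    props.put("offsets.storage", "kafka");             // store offsets in Kafka instead of ZooKeeper
    props.put("dual.commit.enabled", "false");         // stop dual-committing once all consumers migrate
    ConsumerConfig consumerConfig = new ConsumerConfig(props);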

How to configure group id on message driven channel adapter

Hi, I'm trying to migrate to the message driven channel adapter; currently I'm using the consumer context and inbound channel adapter, but I can't get it working the same way it works now. I need listeners (on different machines) to use the same group id so that Kafka can distribute the messages between those listeners accordingly; our producer sends messages with a message key in the header.

What I'm getting with the message driven channel adapter is all listeners receiving all the messages. Is there a way to configure the same behavior as when using the consumer context and inbound channel adapter?

Versions:

        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-kafka</artifactId>
            <version>1.3.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-core</artifactId>
            <version>4.1.2.RELEASE</version>
        </dependency>

KafkaMessageListenerContainer fetchTaskExecutor

It looks like FetchTask.run executes once per broker, so by default the maximum number of threads you would need for fetch tasks is the number of brokers you're consuming from.

However, in KMLC.start() a default fetchTaskExecutor is created using partitionsByBrokerMap.size(), which yields a thread pool the size of the number of partitions.

Did you really mean something like partitionsByBrokerMap.keys().size()? (An illustration follows below.)
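To make the distinction concrete, a small illustration (a plain java.util.Map stands in for the actual multimap, whose size() counts all key-value pairs):

    Map<String, List<Integer>> partitionsByBroker = new HashMap<>();
    partitionsByBroker.put("broker-1", Arrays.asList(0, 1, 2));
    partitionsByBroker.put("broker-2", Arrays.asList(3, 4, 5));

    int brokerCount = partitionsByBroker.keySet().size();        // 2 - one FetchTask per broker
    int partitionCount = partitionsByBroker.values().stream()
            .mapToInt(List::size).sum();                         // 6 - what a multimap's size() reports

    // arguably the intended pool size:
    ExecutorService fetchTaskExecutor = Executors.newFixedThreadPool(brokerCount);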

KafkaMessageListenerContainer does not connect to new unknown partitions

Similarly to #70, if a topic's partition count is increased at runtime, Spring does not consume messages from the new partitions.

This is probably because KafkaMessageListenerContainer uses newFixedThreadPool for FetchTasks, so new tasks never run; maybe switching to newCachedThreadPool would fix the problem (see the sketch after the list below).

The easiest way to reproduce:

  • Create a Kafka cluster with 2 brokers
  • Create a topic with 1 partition
  • Configure Spring to consume that topic
  • Alter topic: --alter --partitions 4
  • Spring loses some messages and then stops consuming
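A sketch of the change suggested above (an assumption, not the project's actual fix):

    ExecutorService fetchTaskExecutor = Executors.newCachedThreadPool();  // grows as FetchTasks for new partitions appear
    // rather than a pool fixed at the partition count known at startup:
    // ExecutorService fetchTaskExecutor = Executors.newFixedThreadPool(initialPartitionCount);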

Issue with ConsumerConfiguration.receive method.

Hi,

I see a major issue in the ConsumerConfiguration.receive() method. If maxMessages is set to a very high value and the number of messages in the Kafka topic is lower, the while loop in this method keeps running until the maxMessages value is hit; it never exits until that many messages have arrived. For example, I set maxMessages to 5000 and only 500 messages were put in the topic: I never get control back, since the while loop keeps looping. The only way I could get control back was to keep the consumer timeout very low (a sketch follows after the code below). Please let me know if I am missing something.

public Map<String, Map<Integer, List<Object>>> receive() {
        count = messageLeftOverTracker.getCurrentCount();
        final Object lock = new Object();

        final List<Callable<List<MessageAndMetadata<K, V>>>> tasks = new LinkedList<Callable<List<MessageAndMetadata<K, V>>>>();

        for (final List<KafkaStream<K, V>> streams : createConsumerMessageStreams()) {
            for (final KafkaStream<K, V> stream : streams) {
                tasks.add(new Callable<List<MessageAndMetadata<K, V>>>() {
                    @Override
                    public List<MessageAndMetadata<K, V>> call() throws Exception {
                        final List<MessageAndMetadata<K, V>> rawMessages = new ArrayList<MessageAndMetadata<K, V>>();
                        try {
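                            // blocks until 'count' reaches maxMessages; only a finite
                            // consumer.timeout.ms (caught below) hands control back sooner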
                            while (count < maxMessages) {
                                final MessageAndMetadata<K, V> messageAndMetadata = stream.iterator().next();
                                synchronized (lock) {
                                    if (count < maxMessages) {
                                        rawMessages.add(messageAndMetadata);
                                        count++;
                                    }
                                    else {
                                        messageLeftOverTracker.addMessageAndMetadata(messageAndMetadata);
                                    }
                                }
                            }
                        }
                        catch (ConsumerTimeoutException cte) {
                            LOGGER.debug("Consumer timed out");
                        }
                        return rawMessages;
                    }
                });
            }
        }
        return executeTasks(tasks);
    }
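The workaround mentioned above, sketched out (the property is the high-level consumer's standard consumer.timeout.ms):

    Properties props = new Properties();
    props.put("consumer.timeout.ms", "500");  // makes stream.iterator().next() throw
                                              // ConsumerTimeoutException after 500 ms of inactivity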

NoClassDefFoundError during Shutdown

I have a Spring Boot application that uses version 1.2.1 of Spring Integration Kafka. I see the following exception when the application shuts down. Commons Lang 2.5 is already on the classpath, and I don't get any errors during startup.

2015-09-02 18:28:31.431 ERROR 10672 --- [       Thread-1] [                                    ] o.s.b.f.s.DefaultListableBeanFactory     : Destroy method on bean with name '(inner bean)#1ea930eb#1' threw an exception

java.lang.NoClassDefFoundError: org/apache/commons/lang/builder/HashCodeBuilder
    at org.springframework.integration.kafka.support.ProducerMetadata.hashCode(ProducerMetadata.java:117)
    at java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
    at java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097)
    at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessBeforeDestruction(PersistenceAnnotationBeanPostProcessor.java:374)
    at org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:239)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:578)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:554)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingleton(DefaultListableBeanFactory.java:900)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:589)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:554)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingleton(DefaultListableBeanFactory.java:900)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:589)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:554)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingleton(DefaultListableBeanFactory.java:900)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:589)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:554)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingleton(DefaultListableBeanFactory.java:900)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:523)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.destroySingletons(DefaultListableBeanFactory.java:907)
    at org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:908)
    at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:884)
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.doClose(EmbeddedWebApplicationContext.java:150)
    at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:804)

KafkaMessageListenerContainer concurrency attribute

The concurrency attribute of the KafkaMessageListenerContainer has this for its JavaDocs:

/**
 * The maximum number of concurrent {@link MessageListener}s running. Messages from
 * within the same partition will be processed sequentially.
 * @param concurrency the concurrency maximum number
 */
public void setConcurrency(int concurrency) {
    this.concurrency = concurrency;
}

Let's say I create a KafkaMessageListenerContainer that reads from individual partitions of a topic, and the topic has 2 partitions. If I set concurrency=4, I assume 4 threads will be reading those partitions. If I need to ensure that order is preserved within a given partition (which Kafka itself guarantees), how will the threads in the listener container behave? Assume each partition has several thousand messages and I set the listener to grab no more than 100 in a chunk. If thread 1 grabs 100 messages from partition 1 and thread 2 grabs 100 messages from partition 2, what will happen with threads 3 and 4? Will they wait to grab messages from partitions 1 and 2 until the first two threads complete, or can they potentially also grab messages from both of those partitions and start processing them at the same time as the first two threads - in other words, essentially becoming "competing consumers"? I assume the latter. If that is indeed the case, is there any setting that will make them block until the other threads have completed their processing before proceeding?

partitioner configuration in producer context not working

In the README it's mentioned:
partitioner: Custom implementation of a Kafka Partitioner interface.

But it looks like the partitioner config has been deprecated; instead it's advisable to use the KafkaHeader for the partition id.

log.warn("'partitioner' is a deprecated option. Use the 'kafka_partitionId' message header or " +
                            "the partition argument in the send() or convertAndSend() methods");

Is there any chance we can still use that config for the producer? It's more convenient than assigning a partition id individually for each message. (A sketch of the header approach follows below.)
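For completeness, a minimal sketch of the alternative the warning itself points to:

    Message<String> message = MessageBuilder.withPayload("some payload")
            .setHeader(KafkaHeaders.PARTITION_ID, 2)  // the kafka_partitionId header - routes to partition 2
            .setHeader(KafkaHeaders.TOPIC, "myTopic")
            .build();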

Message is null consuming large payload using message driven channel on Kafka 0.8.2.2

Hi,

When consuming messages using the message driven channel adapter with a payload longer than 70-80 characters (I don't know the exact boundary, but it's somewhere above 70 characters), the consumed message is null ...

Versions:
Kafka 0.8.2.2
Spring Integration 4.2.4
Spring Boot 1.3.2

Below is an extended version of Gary Russell's example ...

package com.unitedinternet.portal.messenger.kafka;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.SmartLifecycle;
import org.springframework.context.annotation.Bean;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.kafka.core.BrokerAddress;
import org.springframework.integration.kafka.core.BrokerAddressListConfiguration;
import org.springframework.integration.kafka.core.Configuration;
import org.springframework.integration.kafka.core.ConnectionFactory;
import org.springframework.integration.kafka.core.DefaultConnectionFactory;
import org.springframework.integration.kafka.core.Partition;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.integration.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.integration.kafka.listener.KafkaTopicOffsetManager;
import org.springframework.integration.kafka.listener.OffsetManager;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.integration.kafka.serializer.common.StringDecoder;
import org.springframework.integration.kafka.support.KafkaProducerContext;
import org.springframework.integration.kafka.support.ProducerConfiguration;
import org.springframework.integration.kafka.support.ProducerFactoryBean;
import org.springframework.integration.kafka.support.ProducerMetadata;
import org.springframework.integration.kafka.support.ZookeeperConnect;
import org.springframework.integration.kafka.util.TopicUtils;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHandler;
import org.springframework.messaging.PollableChannel;
import org.springframework.messaging.support.GenericMessage;

@SpringBootApplication
public class ProducerKafkaApplication {

    @Value("${kafka.topic}")
    private String topic;

    @Value("${kafka.messageKey}")
    private String messageKey;

    @Value("${kafka.broker.address}")
    private String brokerAddress;

    @Value("${kafka.zookeeper.connect}")
    private String zookeeperConnect;

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context
                = new SpringApplicationBuilder(ProducerKafkaApplication.class)
                .web(false)
                .run(args);
        DirectChannel toKafka = context.getBean("toKafka", DirectChannel.class);
        for (int i = 0; i < 2; i++) {
            toKafka.send(new GenericMessage<>("12345678901234567890123456789012345678901234567890123456789012345678901234567890-" + i));
        }
        PollableChannel fromKafka = context.getBean("received", PollableChannel.class);
        Message<?> received = fromKafka.receive(10000);
        while (received != null) {
            System.out.println(received);
            received = fromKafka.receive(10000);
        }
        context.close();
        System.exit(0);
    }

    @ServiceActivator(inputChannel = "toKafka")
    @Bean
    public MessageHandler handler() throws Exception {
        KafkaProducerMessageHandler handler = new KafkaProducerMessageHandler(producerContext());

        handler.setTopicExpression(new LiteralExpression(this.topic));
        handler.setMessageKeyExpression(new LiteralExpression(this.messageKey));

        return handler;
    }

    @Bean
    public ConnectionFactory kafkaBrokerConnectionFactory() throws Exception {
        return new DefaultConnectionFactory(kafkaConfiguration());
    }

    @Bean
    public Configuration kafkaConfiguration() {
        BrokerAddressListConfiguration configuration = new BrokerAddressListConfiguration(
                BrokerAddress.fromAddress(this.brokerAddress));

        configuration.setSocketTimeout(500);

        return configuration;
    }

    @Bean
    public KafkaProducerContext producerContext() throws Exception {
        KafkaProducerContext kafkaProducerContext = new KafkaProducerContext();
        ProducerMetadata<String, String> producerMetadata = new ProducerMetadata<>(this.topic, String.class,
                String.class, new StringSerializer(), new StringSerializer());
        Properties props = new Properties();
        props.put("linger.ms", "1000");
        ProducerFactoryBean<String, String> producer =
                new ProducerFactoryBean<>(producerMetadata, this.brokerAddress, props);
        ProducerConfiguration<String, String> config =
                new ProducerConfiguration<>(producerMetadata, producer.getObject());
        Map<String, ProducerConfiguration<?, ?>> producerConfigurationMap =
                Collections.<String, ProducerConfiguration<?, ?>>singletonMap(this.topic, config);
        kafkaProducerContext.setProducerConfigurations(producerConfigurationMap);
        return kafkaProducerContext;
    }

    @Bean
    public OffsetManager offsetManager() {
        ZookeeperConnect zookeeperConnect = new ZookeeperConnect(this.zookeeperConnect);

        return new KafkaTopicOffsetManager(zookeeperConnect, "si-offsets");
    }

    @Bean
    public KafkaMessageListenerContainer container(OffsetManager offsetManager) throws Exception {
        final KafkaMessageListenerContainer kafkaMessageListenerContainer = new KafkaMessageListenerContainer(
                kafkaBrokerConnectionFactory(), new Partition(this.topic, 0));

        kafkaMessageListenerContainer.setOffsetManager(offsetManager);
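        // NOTE: maxFetch is the fetch size in bytes; 100 bytes is smaller than the
        // ~80-character payloads produced above, which may well be why the consumed
        // messages come back null (an observation, not a confirmed diagnosis)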
        kafkaMessageListenerContainer.setMaxFetch(100);
        kafkaMessageListenerContainer.setConcurrency(1);

        return kafkaMessageListenerContainer;
    }

    @Bean
    public KafkaMessageDrivenChannelAdapter adapter(KafkaMessageListenerContainer container) {
        KafkaMessageDrivenChannelAdapter kafkaMessageDrivenChannelAdapter =
                new KafkaMessageDrivenChannelAdapter(container);

        StringDecoder decoder = new StringDecoder();

        kafkaMessageDrivenChannelAdapter.setKeyDecoder(decoder);
        kafkaMessageDrivenChannelAdapter.setPayloadDecoder(decoder);
        kafkaMessageDrivenChannelAdapter.setOutputChannel(received());

        return kafkaMessageDrivenChannelAdapter;
    }

    @Bean
    public PollableChannel received() {
        return new QueueChannel();
    }


    @Bean
    public DirectChannel toKafka() {
        return new DirectChannel();
    }

    @Bean
    public TopicCreator topicCreator() {
        return new TopicCreator(this.topic, this.zookeeperConnect);
    }

    public static class TopicCreator implements SmartLifecycle {

        private final String topic;

        private final String zkConnect;

        private volatile boolean running;

        public TopicCreator(String topic, String zkConnect) {
            this.topic = topic;
            this.zkConnect = zkConnect;
        }

        @Override
        public void start() {
            TopicUtils.ensureTopicCreated(this.zkConnect, this.topic, 1, 1);
            this.running = true;
        }

        @Override
        public void stop() {
        }

        @Override
        public boolean isRunning() {
            return this.running;
        }

        @Override
        public int getPhase() {
            return Integer.MIN_VALUE;
        }

        @Override
        public boolean isAutoStartup() {
            return true;
        }

        @Override
        public void stop(Runnable callback) {
            callback.run();
        }

    }

}

Kafka host lookup fails with kerberized zookeeper

I was playing with Spring Cloud Data Flow on YARN with a kerberized Ambari install. Kafka and ZooKeeper are both using Kerberos. When I ran TimeSourceKafkaApplication against this system, the error below was raised. I believe ZookeeperConfiguration.doGetBrokerAddresses silently fails, or something is swallowing the authentication exception if there is one.

java.lang.IllegalArgumentException: Host cannot be empty
    at org.springframework.util.Assert.hasText(Assert.java:168) ~[spring-core-4.2.6.RELEASE.jar!/:4.2.6.RELEASE]
    at org.springframework.integration.kafka.core.BrokerAddress.<init>(BrokerAddress.java:38) ~[spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
    at org.springframework.integration.kafka.core.ZookeeperConfiguration$BrokerToBrokerAddressFunction.valueOf(ZookeeperConfiguration.java:101) ~[spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
    at org.springframework.integration.kafka.core.ZookeeperConfiguration$BrokerToBrokerAddressFunction.valueOf(ZookeeperConfiguration.java:96) ~[spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
    at com.gs.collections.impl.list.mutable.FastList.collect(FastList.java:943) ~[gs-collections-5.0.0.jar!/:na]
    at com.gs.collections.impl.list.mutable.FastList.collect(FastList.java:807) ~[gs-collections-5.0.0.jar!/:na]
    at org.springframework.integration.kafka.core.ZookeeperConfiguration.doGetBrokerAddresses(ZookeeperConfiguration.java:82) ~[spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
    at org.springframework.integration.kafka.core.AbstractConfiguration.getBrokerAddresses(AbstractConfiguration.java:158) ~[spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
    at org.springframework.integration.kafka.core.DefaultConnectionFactory.refreshMetadata(DefaultConnectionFactory.java:175) ~[spring-integration-kafka-1.3.1.RELEASE.jar!/:na]
    at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$3.doWithRetry(KafkaMessageChannelBinder.java:409) ~[spring-cloud-stream-binder-kafka-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
    at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$3.doWithRetry(KafkaMessageChannelBinder.java:405) ~[spring-cloud-stream-binder-kafka-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
    at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:263) ~[spring-retry-1.1.2.RELEASE.jar!/:na]
    at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:154) ~[spring-retry-1.1.2.RELEASE.jar!/:na]
    at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.ensureTopicCreated(KafkaMessageChannelBinder.java:405) [spring-cloud-stream-binder-kafka-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
    at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.doBindProducer(KafkaMessageChannelBinder.java:296) [spring-cloud-stream-binder-kafka-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
    at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder.doBindProducer(KafkaMessageChannelBinder.java:121) [spring-cloud-stream-binder-kafka-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
    at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:184) [spring-cloud-stream-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
    at org.springframework.cloud.stream.binding.ChannelBindingService.bindProducer(ChannelBindingService.java:113) [spring-cloud-stream-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
    at org.springframework.cloud.stream.binding.BindableProxyFactory.bindOutputs(BindableProxyFactory.java:206) [spring-cloud-stream-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
    at org.springframework.cloud.stream.binding.OutputBindingLifecycle.start(OutputBindingLifecycle.java:57) [spring-cloud-stream-1.0.2.RELEASE.jar!/:1.0.2.RELEASE]
    at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:173) [spring-context-4.2.6.RELEASE.jar!/:4.2.6.RELEASE]
    at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51) [spring-context-4.2.6.RELEASE.jar!/:4.2.6.RELEASE]
    at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:346) [spring-context-4.2.6.RELEASE.jar!/:4.2.6.RELEASE]
    at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:149) [spring-context-4.2.6.RELEASE.jar!/:4.2.6.RELEASE]
    at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:112) [spring-context-4.2.6.RELEASE.jar!/:4.2.6.RELEASE]
    at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:852) [spring-context-4.2.6.RELEASE.jar!/:4.2.6.RELEASE]
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:140) [spring-boot-1.3.5.RELEASE.jar!/:1.3.5.RELEASE]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:541) [spring-context-4.2.6.RELEASE.jar!/:4.2.6.RELEASE]
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118) [spring-boot-1.3.5.RELEASE.jar!/:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:766) [spring-boot-1.3.5.RELEASE.jar!/:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.createAndRefreshContext(SpringApplication.java:361) [spring-boot-1.3.5.RELEASE.jar!/:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-1.3.5.RELEASE.jar!/:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1191) [spring-boot-1.3.5.RELEASE.jar!/:1.3.5.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1180) [spring-boot-1.3.5.RELEASE.jar!/:1.3.5.RELEASE]
    at org.springframework.cloud.stream.app.time.source.kafka.TimeSourceKafkaApplication.main(TimeSourceKafkaApplication.java:29) [time-source-kafka-1.0.0.BUILD-SNAPSHOT.jar!/:1.0.0.BUILD-SNAPSHOT]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_91]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_91]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_91]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
    at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:54) [time-source-kafka-1.0.0.BUILD-SNAPSHOT.jar!/:1.0.0.BUILD-SNAPSHOT]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]

The config I have, which works OK with a non-kerberized cluster, is:

spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: sandbox.hortonworks.com:6667
          zkNodes: sandbox.hortonworks.com:2181

Dynamically change the producer producer-context

I am not sure whether this is the right place to ask questions, but I haven't found anywhere better to put my question, so I am asking it here.

I am using the following producer context configuration. Currently I am getting the broker-list and topic from Consul using Spring Cloud Consul.

I have a requirement where, if my broker-list changes and I update Consul with the new broker list, I should be able to refresh the producer-context with the new configuration.

For my custom classes I was able to refresh the values simply by annotating them with @RefreshScope; I am not sure how to refresh the configuration for the producer-context. (A hypothetical approach is sketched below the configuration.)

  <int-kafka:producer-context id="kafkaProducerContext" producer-properties="producerProperties">
        <int-kafka:producer-configurations>
            <int-kafka:producer-configuration
                broker-list="${app.kafkabrokers}" key-class-type="java.lang.String"
                value-class-type="java.lang.String" topic="${app.outputkafka.topic}"
                key-encoder="encoder"
                value-encoder="kafkaEncoder" />
        </int-kafka:producer-configurations>
    </int-kafka:producer-context>  
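A hypothetical Java-config equivalent (not verified against this extension; whether @RefreshScope plays well with a producer context is an open question):

    @Bean
    @RefreshScope // assumption: a Spring Cloud context refresh rebuilds this bean with the new Consul values
    public KafkaProducerContext kafkaProducerContext(
            @Value("${app.kafkabrokers}") String brokers,
            @Value("${app.outputkafka.topic}") String topic) throws Exception {
        ProducerMetadata<String, String> metadata = new ProducerMetadata<>(topic, String.class,
                String.class, new StringSerializer(), new StringSerializer());
        ProducerFactoryBean<String, String> producerFactory =
                new ProducerFactoryBean<>(metadata, brokers, new Properties());
        ProducerConfiguration<String, String> configuration =
                new ProducerConfiguration<>(metadata, producerFactory.getObject());
        KafkaProducerContext context = new KafkaProducerContext();
        context.setProducerConfigurations(
                Collections.<String, ProducerConfiguration<?, ?>>singletonMap(topic, configuration));
        return context;
    }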

KEY_SERIALIZER_CLASS_CONFIG & VALUE_SERIALIZER_CLASS_CONFIG

I apologize in advance if this is the wrong forum to ask this question, but I couldn't find much documentation around the matter.

Let's say we generate some Avro classes from IDL or schema and want to use those objects as the things we will send to / receive from a Kafka topic.

The Kafka Producer Channel Adapter section of this example:

https://spring.io/blog/2016/04/11/spring-integration-kafka-support-2-0-0-m1-is-now-available

gives some hint of how to wire this all together, but I still feel like I'm missing something.

What should the values of KEY_SERIALIZER_CLASS_CONFIG and VALUE_SERIALIZER_CLASS_CONFIG be if we want to serialize an Avro object?

  @Bean
  public ProducerFactory<String, byte[]> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
    props.put(ProducerConfig.RETRIES_CONFIG, 0);
    props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
    props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
    return new DefaultKafkaProducerFactory<>(props);
  }

In earlier versions this class existed:

org.springframework.integration.kafka.serializer.avro.AvroSpecificDatumBackedKafkaEncoder
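One possible answer, sketched under the assumption that plain Avro binary encoding is acceptable (no schema registry): keep ByteArraySerializer in the producer config and serialize upstream, or register a custom Serializer along these lines:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.Map;

    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;
    import org.apache.avro.specific.SpecificDatumWriter;
    import org.apache.avro.specific.SpecificRecord;
    import org.apache.kafka.common.serialization.Serializer;

    public class AvroSpecificSerializer<T extends SpecificRecord> implements Serializer<T> {

        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {
            // nothing to configure in this sketch
        }

        @Override
        public byte[] serialize(String topic, T record) {
            if (record == null) {
                return null;
            }
            try {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                SpecificDatumWriter<T> writer = new SpecificDatumWriter<>(record.getSchema());
                BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
                writer.write(record, encoder);
                encoder.flush();
                return out.toByteArray();
            }
            catch (IOException e) {
                throw new RuntimeException("Failed to serialize Avro record", e);
            }
        }

        @Override
        public void close() {
        }

    }

Such a class could then be supplied as VALUE_SERIALIZER_CLASS_CONFIG in place of ByteArraySerializer, with the factory typed accordingly.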

Potentially ignored failed message delivery

In KafkaProducerMessageHandler.handleMessageInternal(Message<?> message), the message is fired to the Kafka broker with this call:

this.kafkaProducerContext.send(topic, partitionId, messageKey, message.getPayload());

which ends up in KafkaProducer.send(ProducerRecord<K,V> record).

It returns a Future<RecordMetadata> object, but for some reason the response is ignored. I assume the intention is to assume the request will be successful and let any exception bubble up to the caller later on. However, when I look into the code for KafkaProducer, it actually silently swallows any ApiException and returns a FutureFailure object instead of throwing the exception directly.

I am curious whether, in this case, without checking the Future response from the KafkaProducerContext, we run the risk of ignoring a failed message delivery to the Kafka broker.
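A minimal sketch (producer and record assumed to be in scope; this is not the handler's actual code) of how a caller could surface the failure:

    Future<RecordMetadata> future = producer.send(record);
    try {
        RecordMetadata metadata = future.get(10, TimeUnit.SECONDS);
        // success: metadata carries the topic, partition and offset
    }
    catch (ExecutionException e) {
        // the silently-swallowed ApiException surfaces here via the FutureFailure
    }
    catch (TimeoutException | InterruptedException e) {
        // no acknowledgement in time, or the waiting thread was interrupted
    }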

Kafka Producer Not working using spring integration

I am not sure what is wrong in the code below, but it doesn't send any messages to Kafka, as I can't see them using the Kafka console consumer.

My Spring XML file

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
       xmlns:task="http://www.springframework.org/schema/task"
       xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">

    <int:channel id="inputToKafka">
        <int:queue/>
    </int:channel>

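    <!-- NOTE: auto-startup="false" below means the adapter never starts on its own,
         so nothing is polled or sent unless it is started explicitly - a likely
         suspect for the behaviour described above -->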
    <int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
                                        kafka-producer-context-ref="kafkaProducerContext"
                                        auto-startup="false"
                                        channel="inputToKafka"
                                        topic="rating"
                                        message-key-expression="header.messageKey">
        <int:poller fixed-delay="1000" time-unit="MILLISECONDS" receive-timeout="0" task-executor="taskExecutor"/>
    </int-kafka:outbound-channel-adapter>

    <task:executor id="taskExecutor" pool-size="5" keep-alive="120" queue-capacity="500"/>

    <bean id="kafkaEncoder"
          class="org.springframework.integration.kafka.serializer.avro.AvroReflectDatumBackedKafkaEncoder">
        <constructor-arg value="java.lang.String"/>
    </bean>

    <int-kafka:producer-context id="kafkaProducerContext">
        <int-kafka:producer-configurations>
            <int-kafka:producer-configuration broker-list="ekafka01.metratech.com:6667"
                       key-class-type="java.lang.String"
                       value-class-type="java.lang.String"
                       topic="rating"
                       value-serializer="kafkaSerializer"
                       key-serializer="kafkaSerializer"
                       compression-type="none"/>
        </int-kafka:producer-configurations>
    </int-kafka:producer-context>
    <bean id="kafkaSerializer" class="org.apache.kafka.common.serialization.StringSerializer"/>
</beans>

Producer Class

import javax.inject.Inject;
import javax.inject.Named;

import org.springframework.integration.kafka.support.KafkaHeaders;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;

@Named
public class ModelProducer {


    @Inject
    MessageChannel inputToKafka;


    public void sendMessageToKafka(String message) {
        inputToKafka.send(
                MessageBuilder.withPayload(message)
                        .setHeader(KafkaHeaders.MESSAGE_KEY, "1")
                        .setHeader(KafkaHeaders.TOPIC, "rating")
                        .build());

    }



}

REST Controller

@RestController
@RequestMapping("/billableEvent")
public class BEController {

    @Inject
    ModelProducer kafkaProducer;

     @Inject
     KafkaProducer simpleKafkaProducer;

    @Autowired  
    private IValidator jsonValidator;
    @RequestMapping(value="/send", method=RequestMethod.POST)
    public @ResponseBody String createBE(@RequestBody String beSchema)
    {
        //Object result = jsonValidator.validate(beSchema);
        kafkaProducer.sendMessageToKafka(beSchema);
        //simpleKafkaProducer.sendMsgToKafka(beSchema);
        return beSchema;
    }
}

Consumer time out and zookeeper connect

Hi,

I am very confused about where to set the consumer timeout. Please find below how I create the consumer context. The only way I could get this to work was to provide zookeeper.connect and consumer.timeout.ms directly to ConsumerConfig via Properties. Also, I am not sure what ConsumerMetadata.setConsumerTimeout does.

kafkaConsumerContext.setZookeeperConnect and kafkaConsumerContext.setConsumerTimeout have no effect at all. Please let me know if I am missing something.

 private KafkaConsumerContext kafkaConsumerContext() throws IOException {
        KafkaConsumerContext kafkaConsumerContext = new KafkaConsumerContext();
        Map<String, ConsumerConfiguration> map = new HashMap<>();

        ConsumerMetadata consumerMetadata = new ConsumerMetadata();
        consumerMetadata.setGroupId(groupId);
        consumerMetadata.setValueDecoder(new AvroSpecificDatumBackedKafkaDecoder(User.class));
        consumerMetadata.setKeyDecoder(new AvroReflectDatumBackedKafkaDecoder(String.class));

        Map<String, Integer> topicStreamMap = new HashMap<>();
        topicStreamMap.put(topic, 1);
        consumerMetadata.setTopicStreamMap(topicStreamMap);
        consumerMetadata.setConsumerTimeout(consumerTimeout);

        Properties properties = new Properties();
        properties.put("zookeeper.connect", zookeeper);
        properties.put("group.id", groupId);
        properties.put("consumer.timeout.ms", consumerTimeout);
        properties.putAll(getProducerConfigProps());
        ConsumerConfig consumerConfig = new ConsumerConfig(properties);
        ConsumerConnectionProvider consumerConnectionProvider = new ConsumerConnectionProvider(consumerConfig);

        ConsumerConfiguration consumerConfiguration = new ConsumerConfiguration(consumerMetadata,
                consumerConnectionProvider, new MessageLeftOverTracker());
        //NOTE: maxMessages determines how many messages are picked up together. Default is 1.
        consumerConfiguration.setMaxMessages(5000);
        //As with the producer config, the key to this map has no special meaning. You can use any key name.
        map.put("config", consumerConfiguration);

        kafkaConsumerContext.setConsumerConfigurations(map);
        kafkaConsumerContext.setConsumerTimeout(consumerTimeout);

//        kafkaConsumerContext.setZookeeperConnect(new ZookeeperConnect("10.24.72.103:2181"));
        return kafkaConsumerContext;

    }

Pull out network cable - no way to recover

We want to be able to reconnect automatically if a network connection is dropped. When we use the consumer, it spins up runnables into threads; unfortunately, in exceptional circumstances (such as disabling the network connection) the ListenerContainer is left in a state where isRunning=true but the threads have bombed out, and start() would need to be called to resume consumption of messages. With no way for client code to know that the ListenerContainer is no longer working, knowing when to call start() is impossible.

Incorrect broker's address extraction

Hello,
I'm trying to link an application using Spring Integration/Kafka to Flink. When I connected the int-kafka:message-driven-channel-adapter (https://github.com/rssdev10/flink-spring-integration/blob/master/drpc-client/src/main/resources/flink-integration-context.xml) the application failed with an exception:

Exception in thread "main" org.springframework.context.ApplicationContextException: Failed to start bean 'adapter'; nested exception is java.lang.NullPointerException at
org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:176) at
org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
...
       at kafka.client.ClientUtils$.parseBrokerList(ClientUtils.scala:102)
        at org.springframework.integration.kafka.core.DefaultConnectionFactory.refreshMetadata(DefaultConnectionFactory.java:172)
        at org.springframework.integration.kafka.core.DefaultConnectionFactory.getPartitions(DefaultConnectionFactory.java:227)

I began debugging and found the reason for the failure. ClientUtils$.MODULE$.parseBrokerList(brokerAddressesAsString) at line 172 mentioned above returns null because brokerAddressesAsString contains "localhost,:9092" - with a comma.

And the reason is that the only Broker in the following fragment contains a host with a trailing comma, like "localhost,".

    protected List<BrokerAddress> doGetBrokerAddresses() {
        ZkClient zkClient = null;
        try {
            zkClient = new ZkClient(this.zookeeperServers, this.sessionTimeout, this.connectionTimeout,
                    ZKStringSerializer$.MODULE$);
            Seq<Broker> allBrokersInCluster = ZkUtils$.MODULE$.getAllBrokersInCluster(zkClient);
            FastList<Broker> brokers = FastList.newList(JavaConversions.asJavaCollection(allBrokersInCluster));
            return brokers.collect(brokerToBrokerAddressFunction);
        }
        finally {
            if (zkClient != null) {
                zkClient.close();
            }
        }
    }

How can this be fixed?

BTW: the main purpose of this experiment is to build a gateway over Kafka messages to implement something like RPC with Flink. Unfortunately, I have not found a ready example; maybe my assumption about the gateway is incorrect. An obvious unimplemented part is creating Spring's message id from Kafka's message key.

Client side part of the project: https://github.com/rssdev10/flink-spring-integration/tree/master/drpc-client

Thanks

Invalid characters sent at the beginning of message

Using the following code, I send Elasticsearch documents for indexing. I tried converting basic objects to JSON and sending them via the producer. However, every message (as checked from the console) is prefixed with gibberish characters like ��t�{"productId":2455.

public boolean sendMessage()
{
    PageRequest page = new PageRequest(0, 1); 
    Product p = product.findByName("Cream", page).getContent().get(0);
    String json = "";
    ObjectMapper mapper = new ObjectMapper();
    try {
        json = mapper.writeValueAsString(p);
    } catch (JsonProcessingException e1) {
        // TODO Auto-generated catch block
        e1.printStackTrace();
    }       
    logger.info("JSON = " + json);

    boolean status =  inputToKafka.send(org.springframework.integration.support.MessageBuilder.withPayload(json).build());
    try {
        Thread.sleep(10000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return status;
}

Outbound configuration

 <?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"
       xmlns:task="http://www.springframework.org/schema/task"
       xsi:schemaLocation="http://www.springframework.org/schema/integration/kafka http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">

    <int:channel id="inputToKafka">
        <int:queue/>
    </int:channel>

    <int-kafka:outbound-channel-adapter
            id="kafkaOutboundChannelAdapter"
            kafka-producer-context-ref="kafkaProducerContext"
            channel="inputToKafka">
        <int:poller fixed-delay="1000" time-unit="MILLISECONDS" receive-timeout="0" task-executor="taskExecutor"/>
    </int-kafka:outbound-channel-adapter>

    <task:executor id="taskExecutor" pool-size="5" keep-alive="120" queue-capacity="500"/>

    <int-kafka:producer-context id="kafkaProducerContext">
        <int-kafka:producer-configurations>
            <int-kafka:producer-configuration broker-list="localhost:9092"
                                              topic="shopo_topic"
                                              compression-codec="default"/>
        </int-kafka:producer-configurations>
    </int-kafka:producer-context>

    </beans>

Any clue?
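One hedged guess: no key or value serializer is configured in this producer-configuration, so the payload may be going through default (Java) serialization, which prepends header bytes. Declaring String serializers, as other configurations in this document do, might avoid that:

    <bean id="stringSerializer" class="org.apache.kafka.common.serialization.StringSerializer"/>

    <int-kafka:producer-configuration broker-list="localhost:9092"
                                      topic="shopo_topic"
                                      key-class-type="java.lang.String"
                                      value-class-type="java.lang.String"
                                      key-serializer="stringSerializer"
                                      value-serializer="stringSerializer"/>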

Possible Memory Leak?

Is it possible that the kafka.consumer.ZookeeperConsumerConnector watcher executor thread is started but not stopped?

I get the following in my log:

[frontend] INFO    2015-01-06 01:16:14,504 [default_XXXXXX-1420503374061-2286bb4a_watcher_executor] kafka.consumer.ZookeeperConsumerConnector   - [default_XXXXX-1420503374061-2286bb4a], starting watcher executor thread for consumer default_XXXXXX-1420503374061-2286bb4a

And if I shut down my Tomcat 8:

Exception in thread "default_XXXXXX-1420503374061-2286bb4a_watcher_executor" java.lang.IllegalStateException: Can't overwrite cause with java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already.  Could not load kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1$$anonfun$run$4.  The eventual following stack trace is caused by an error thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access, and has no functional impact.
    at java.lang.Throwable.initCause(Throwable.java:457)
    at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1306)
    at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1186)
    at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1147)
    at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:360)
Caused by: java.lang.ClassNotFoundException
    at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1305)
    ... 3 more

I am using Milestone 2 from the Maven repo.

Thanks

Simple API equivalent to JmsTemplate

I'm looking for a simple way to send a message to a Topic and subscribe to that Topic.

All the examples I've read are inherently complex. Is there no simple API similar to JmsTemplate?

My goal is to replace ActiveMQ w/ Kafka in a Spring Boot project, but keep it as simple as possible. Right now I'm creating ZookeeperConfiguration and ConnectionFactory beans from application.yml, but don't really know where to go from here.

Ideally, I'd like to stay away from any XML-based configuration. Any ideas or pointers would be appreciated. (A sketch follows below.)
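For what it's worth, a minimal sketch using the newer spring-kafka project (which this extension is built on from version 2.0 onwards); its KafkaTemplate plays roughly the role JmsTemplate does for JMS:

    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

    KafkaTemplate<String, String> template =
            new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
    template.send("myTopic", "hello");  // fire-and-forget, JmsTemplate-style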
