danielwegener / logback-kafka-appender

Logback appender for Apache Kafka

License: Apache License 2.0

Language: Java (100%)
Topics: logback-appender, kafka-client, logging

logback-kafka-appender's Introduction

logback-kafka-appender

Archive Warning: This project is no longer maintained (and has not been for some time). I just do not find the time to maintain this project in my free time. To make this clear, I decided to archive this project on GitHub (and close the unmoderated Gitter channel) rather than simply stop reacting to new questions, issues, and PRs. This may influence your decision to use this project, although there still seem to be some happy users and stargazers. I'll be happy to unarchive this project if someone is willing to take over maintenance, or to link to an active fork.


This appender lets your application publish its logs directly to Apache Kafka.

Logback incompatibility Warning

Due to a breaking change in the Logback Encoder API you need to use at least logback version 1.2.

Full configuration example

Add logback-kafka-appender and logback-classic as library dependencies to your project.

[maven pom.xml]
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>
[build.sbt]
libraryDependencies += "com.github.danielwegener" % "logback-kafka-appender" % "0.2.0"
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.3"

This is an example logback.xml that uses a common PatternLayout to encode a log message as a string.

[src/main/resources/logback.xml]
<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This is the kafkaAppender -->
    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>logs</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- Optional parameter to use a fixed partition -->
        <!-- <partition>0</partition> -->

        <!-- Optional parameter to include log timestamps into the kafka message -->
        <!-- <appendTimestamp>true</appendTimestamp> -->

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>

        <!-- this is the fallback appender if kafka is not available. -->
        <appender-ref ref="STDOUT" />
    </appender>

    <root level="info">
        <appender-ref ref="kafkaAppender" />
    </root>
</configuration>

You may also look at the complete configuration examples.

Compatibility

logback-kafka-appender depends on org.apache.kafka:kafka-clients:1.0.0:jar. It can append logs to a kafka broker with version 0.9.0.0 or higher.

The dependency on kafka-clients is not shaded and may be upgraded to a higher, API-compatible version through dependency overrides.
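For example, with Maven you could declare kafka-clients directly with a newer version (2.0.0 below is illustrative; verify API compatibility with your broker and this appender yourself):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.0</version>
</dependency>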

Delivery strategies

Direct logging over the network is not a trivial thing because it might be much less reliable than the local file system and has a much bigger impact on the application performance if the transport has hiccups.

You need to make an essential decision: is it more important to deliver all logs to the remote Kafka, or is it more important to keep the application running smoothly? Either decision allows you to tune this appender for throughput.

AsynchronousDeliveryStrategy: Dispatches each log message to the Kafka producer. If the delivery fails for some reason, the message is dispatched to the fallback appenders. However, this DeliveryStrategy does block if the producer's send buffer is full (which can happen if the connection to the broker gets lost). To avoid even this blocking, set the producerConfig block.on.buffer.full=false. All log messages that cannot be delivered fast enough will then immediately go to the fallback appenders.

BlockingDeliveryStrategy: Blocks each calling thread until the log message is actually delivered. This strategy is normally discouraged because it has a huge negative impact on throughput. Warning: This strategy should not be used together with the producerConfig linger.ms.

Note on Broker outages

The AsynchronousDeliveryStrategy does not prevent you from being blocked by the Kafka metadata exchange. That means: If all brokers are not reachable when the logging context starts, or all brokers become unreachable for a longer time period (> metadata.max.age.ms), your appender will eventually block. This behavior is undesirable in general and can be mitigated with kafka-clients 0.9 (see #16).

In any case, if you want to make sure the appender will never block your application, you can wrap the KafkaAppender with logback's own AsyncAppender or, for more control, the LoggingEventAsyncDisruptorAppender from Logstash Logback Encoder.

An example configuration could look like this:

<configuration>

    <!-- This is the kafkaAppender -->
    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <!-- Kafka Appender configuration -->
    </appender>

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <!-- if neverBlock is set to true, the async appender discards messages when its internal queue is full -->
        <neverBlock>true</neverBlock>  
        <appender-ref ref="kafkaAppender" />
    </appender>

    <root level="info">
        <appender-ref ref="ASYNC" />
    </root>
</configuration>
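Logback's AsyncAppender also exposes queueSize and discardingThreshold, which you may want to tune alongside neverBlock (the values below are illustrative, not recommendations):

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <neverBlock>true</neverBlock>
        <!-- default queue size is 256 events -->
        <queueSize>512</queueSize>
        <!-- 0 disables the early discarding of TRACE/DEBUG/INFO events as the queue fills up -->
        <discardingThreshold>0</discardingThreshold>
        <appender-ref ref="kafkaAppender" />
    </appender>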

Custom delivery strategies

You may also roll your own delivery strategy. Just implement com.github.danielwegener.logback.kafka.delivery.DeliveryStrategy.
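For illustration, here is a minimal sketch, assuming the 0.2.0 interface (a single generic send method that receives the producer, the record, the original event, and a FailedDeliveryCallback). It waits briefly for the broker acknowledgment and falls back otherwise:

package foo;

import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import com.github.danielwegener.logback.kafka.delivery.DeliveryStrategy;
import com.github.danielwegener.logback.kafka.delivery.FailedDeliveryCallback;

/* Sketch: wait up to 100 ms for the broker ack, otherwise hand the event to the fallback appenders. */
public class BoundedWaitDeliveryStrategy implements DeliveryStrategy {
    @Override
    public <K, V, E> boolean send(Producer<K, V> producer, ProducerRecord<K, V> record, E event,
                                  FailedDeliveryCallback<E> failedDeliveryCallback) {
        try {
            producer.send(record).get(100, TimeUnit.MILLISECONDS);
            return true;
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            failedDeliveryCallback.onFailedDelivery(event, ex);
            return false;
        } catch (Exception ex) {
            failedDeliveryCallback.onFailedDelivery(event, ex);
            return false;
        }
    }
}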

Fallback-Appender

If, for whatever reason, the kafka-producer decides that it cannot publish a log message, the message could still be logged to a fallback appender (a ConsoleAppender on STDOUT or STDERR would be a reasonable choice for that).

Just add your fallback appender(s) as logback appender-ref to the KafkaAppender section in your logback.xml. Every message that cannot be delivered to Kafka will be written to all defined appender-refs.

Example: <appender-ref ref="STDOUT" />, where STDOUT is a defined appender.

Note that the AsynchronousDeliveryStrategy reuses the Kafka producer's I/O thread to write the message to the fallback appenders. Thus, all fallback appenders should be reasonably fast so they do not slow down or break the Kafka producer.

Producer tuning

This appender uses the Kafka producer introduced in kafka-0.8.2, with its default configuration.

You may override any known Kafka producer config with a <producerConfig>Name=Value</producerConfig> block (note that the bootstrap.servers config is mandatory). This allows a lot of fine-tuning potential (e.g. with batch.size, compression.type and linger.ms).
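For example, a throughput-oriented setup might add the following (values are illustrative, not recommendations):

<producerConfig>batch.size=16384</producerConfig>
<producerConfig>compression.type=gzip</producerConfig>
<producerConfig>linger.ms=1000</producerConfig>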

Serialization

This module supports any ch.qos.logback.core.encoder.Encoder. This allows you to use any encoder that is capable of encoding an ILoggingEvent or IAccessEvent, like the well-known logback PatternLayoutEncoder or, for example, the logstash-logback-encoder's LogstashEncoder.

Custom Serialization

If you want to write something other than strings to your Kafka logging topic, you may roll your own encoding mechanism. A use case would be to achieve smaller message sizes and/or better serialization/deserialization performance on the producing or consuming side. Useful formats could be BSON, Avro, or others.

To roll your own implementation please refer to the logback documentation. Note that logback-kafka-appender will never call the headerBytes() or footerBytes() method.

Your encoder should be type-parameterized for any subtype of the type of event you want to support (typically ILoggingEvent) like in

public class MyEncoder extends ch.qos.logback.core.encoder.EncoderBase<ILoggingEvent> {/*..*/}
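A minimal sketch of such an encoder, assuming the logback 1.2 Encoder contract (headerBytes/encode/footerBytes) and using the EncoderBase convenience base class; all names are illustrative:

package foo;

import java.nio.charset.StandardCharsets;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.encoder.EncoderBase;

/* Sketch: encodes each event as "LEVEL|loggerName|message" in UTF-8. */
public class PipeDelimitedEncoder extends EncoderBase<ILoggingEvent> {

    @Override
    public byte[] headerBytes() {
        return null; // never called by logback-kafka-appender (see above)
    }

    @Override
    public byte[] encode(ILoggingEvent event) {
        final String line = event.getLevel() + "|" + event.getLoggerName() + "|" + event.getFormattedMessage();
        return line.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public byte[] footerBytes() {
        return null; // never called by logback-kafka-appender (see above)
    }
}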

Keying strategies / Partitioning

Kafka's scalability and ordering guarantees rely heavily on the concept of partitions (more details here). For application logging this means that we need to decide how we want to distribute our log messages over multiple Kafka topic partitions. One implication of this decision is how messages are ordered when they are consumed by an arbitrary multi-partition consumer, since Kafka guarantees read order only within each single partition. Another implication is how evenly our log messages are distributed across all available partitions, and therefore how well they are balanced between multiple brokers.

The order of log messages may or may not be important, depending on the intended consumer audience (e.g. a Logstash indexer will reorder all messages by their timestamps anyway).

You can provide a fixed partition for the Kafka appender using the partition property, or let the producer use the message key to partition a message. Thus logback-kafka-appender supports the following keying strategies:

NoKeyKeyingStrategy (default): Does not generate a message key. Results in round-robin distribution across partitions if no fixed partition is provided.

HostNameKeyingStrategy: Uses the HOSTNAME as message key. This is useful because it ensures that all log messages issued by this host will remain in the correct order for any consumer. But this strategy can lead to uneven log distribution for a small number of hosts (compared to the number of partitions).

ContextNameKeyingStrategy: Uses logback's CONTEXT_NAME as message key. This ensures that all log messages logged by the same logging context will remain in the correct order for any consumer. But this strategy can lead to uneven log distribution for a small number of logging contexts (compared to the number of partitions). This strategy only works for ILoggingEvents.

ThreadNameKeyingStrategy: Uses the calling thread's name as message key. This ensures that all messages logged by the same thread will remain in the correct order for any consumer. But this strategy can lead to uneven log distribution for a small number of thread(-name)s (compared to the number of partitions). This strategy only works for ILoggingEvents.

LoggerNameKeyingStrategy: Uses the logger name as message key. This ensures that all messages logged by the same logger will remain in the correct order for any consumer. But this strategy can lead to uneven log distribution for a small number of distinct loggers (compared to the number of partitions). This strategy only works for ILoggingEvents.

Custom keying strategies

If none of the above keying strategies satisfies your requirements, you can easily implement your own by implementing a custom KeyingStrategy:

package foo;

import java.nio.ByteBuffer;

import ch.qos.logback.classic.spi.ILoggingEvent;
import com.github.danielwegener.logback.kafka.keying.KeyingStrategy;

/* This is a valid example but does not really make much sense */
public class LevelKeyingStrategy implements KeyingStrategy<ILoggingEvent> {
    @Override
    public byte[] createKey(ILoggingEvent e) {
        // use the numeric log level as a 4-byte message key
        return ByteBuffer.allocate(4).putInt(e.getLevel().toInt()).array();
    }
}

As with most custom logback components, your custom keying strategy may also implement the ch.qos.logback.core.spi.ContextAware and ch.qos.logback.core.spi.LifeCycle interfaces.

A custom keying strategy can be especially handy when you want to use Kafka's log compaction facility.

FAQ

  • Q: I want to log to different/multiple topics!
    A: No problem, create an appender for each topic.
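For illustration, such a setup could look like this (appender names and topics are hypothetical; encoder, strategies and producerConfigs as in the full example above):

<appender name="kafkaAppenderA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <topic>topic-a</topic>
    <!-- encoder, keyingStrategy, deliveryStrategy, producerConfigs ... -->
</appender>

<appender name="kafkaAppenderB" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <topic>topic-b</topic>
    <!-- ... -->
</appender>

<logger name="com.example.audit" additivity="false">
    <appender-ref ref="kafkaAppenderB" />
</logger>

<root level="info">
    <appender-ref ref="kafkaAppenderA" />
</root>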

License

This project is licensed under the Apache License Version 2.0.

logback-kafka-appender's People

Contributors

blackrider97, bryant1410, danielwegener, dependabot[bot], gitter-badger, gquintana, hackbert, kdombeck, michaelandrepearce, vboulaye, williamho


logback-kafka-appender's Issues

long time to start the spring boot app with logback-kafka-appender

I am doing a POC to ship application logs to Kafka using your logback-kafka-appender. I have written a very simple Hello World REST API and am trying to send the logs to Kafka, but the server takes ages to start. If I remove logback.xml from the source directory, the server starts in a fraction of a second. Here are the dependencies:

com.github.danielwegener:logback-kafka-appender:0.1.0 (runtime), ch.qos.logback:logback-classic:1.1.6 (runtime), ch.qos.logback:logback-core:1.1.6 (runtime)

Any help?

First few messages getting dropped

My Kafka Appender is as follows:

<appender name="fast-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>

        <topic>intake-app-log</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <producerConfig>acks=0</producerConfig>
        <!--<producerConfig>linger.ms=1000</producerConfig>-->
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!--<producerConfig>batch.size=0</producerConfig>-->
        <!--<producerConfig>buffer.memory=43554432</producerConfig>-->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
        <appender-ref ref="FILE" />
    </appender>

My initial log messages are getting dropped, I guess because of the lazy loading strategy of the producer -> #53 (comment)

I don't want these logs to go to the fallback appender. Also, I cannot increase max.block.ms above 0, as that blocks the application, which is not acceptable for my use case. The workaround I am using right now is that when the application starts, I do a dummy log and then sleep the thread for 500 ms, as follows:

logger.info("sacrifice me");
Thread.sleep(500);

This hack solves the problem for now. I wanted to know whether there is any way of disabling the lazy loading strategy, and whether I am missing an option here?

Thanks

Extra line \n in the logback-kafka-appender output

While attempting to consume my logback-access configuration, I noticed I'm getting an extra \n in the data stream. This is causing my consumer to see an empty row. For example,

{"message":"POST /kbmatch/api/v1/matches/simple HTTP/1.1","host":"bds00931","client":"127.0.0.1","date":"2016-02-11 16:54:10","verb":"POST","request":"/kbmatch/api/v1/matches/simple","httpversion":"HTTP/1.1","response":"200","bytes":"-","requestTime":"20","AuthToken":"xxx","CorrelationID":"MatchTest.jmx|Signatures-3-10-58.log","User-Agent":"Test/2.4.0-SNAPSHOT (Macintosh; Intel Mac OS X 10_10_5)"}

{"message":"POST /kbmatch/api/v1/matches/simple HTTP/1.1","host":"bds00931","client":"127.0.0.1","date":"2016-02-11 16:54:10","verb":"POST","request":"/kbmatch/api/v1/matches/simple","httpversion":"HTTP/1.1","response":"200","bytes":"-","requestTime":"19","AuthToken":"xxx","CorrelationID":"MatchTest.jmx|Signatures-3-10-59.log","User-Agent":"Test/2.4.0-SNAPSHOT (Macintosh; Intel Mac OS X 10_10_5)"}

{"message":"POST /kbmatch/api/v1/matches/simple HTTP/1.1","host":"bds00931","client":"127.0.0.1","date":"2016-02-11 16:54:10","verb":"POST","request":"/kbmatch/api/v1/matches/simple","httpversion":"HTTP/1.1","response":"200","bytes":"-","requestTime":"25","AuthToken":"xxx","CorrelationID":"MatchTest.jmx|Signatures-3-10-6.log","User-Agent":"Test/2.4.0-SNAPSHOT (Macintosh; Intel Mac OS X 10_10_5)"}

I don't believe this is a logback-access behavior, as I don't see this with the same data pattern in the logback.core.rolling.RollingFileAppender.

Any idea what might be causing this or how I can avoid it?

Perhaps both Kafka and logback-access are each adding an \n?
Here's my pattern

            <encoder class="com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder">
                <charset>UTF-8</charset>                
                <layout class="ch.qos.logback.access.PatternLayout">
                    <pattern>%h %l %u %t{yyyy-MM-dd HH:mm:ss} "%r" %s %b %D :%i{X-BDS-AuthToken}:%i{X-BDS-CorrelationID} :%i{User-Agent}</pattern>
<!--
                    <pattern>{"message":"%r","host":"${HOSTNAME}","client":"%h","date":"%t{yyyy-MM-dd HH:mm:ss}","verb":"%m","request":"%U","httpversion":"%H","response":"%s","bytes":"%bytesSent","requestTime":"%D","AuthToken":"%i{X-BDS-AuthToken}","CorrelationID":"%i{X-BDS-CorrelationID}","User-Agent":"%i{User-Agent}"}</pattern>
-->
                </layout>
            </encoder>

Kafka package name string and Jar shading

We are shading the Kafka JARs to avoid dependency conflicts.

This line is not recognized by the shader:

KAFKA_LOGGER_PREFIX = "org.apache.kafka.clients";

I would prefer something like this in order to be type safe:

KAFKA_LOGGER_PREFIX = Producer.class.getPackage().getName();

Using custom KeyStore tool for Kafka ProducerConfigs

Hello, I was looking to use this logback-kafka-appender in Heroku. The Kafka I am interacting with will be set up with SSL. I want to try to use https://github.com/heroku/env-keystore. I will have certs and keys as environment variables. I was thinking of modifying the producer configs for this reason and using them to set up the KeyStore dynamically.
This isn't really an issue, but I wanted to know if anyone has done anything like this?

Can not download 0.2.0

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>

Why can I not download version 0.2.0?

Build does not succeed

Hi, bro,
I wanted to try to build the source from your repository, but an error occurred in the KafkaAppender class, at line 118. Are you sure this 'encode' method exists?

kafka appender for logback-access events

I am currently using logback-access with tomcat to write access log events to a daily log file.

 <appender name="dailyFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender
     <file>${catalina.base}/logs/localhost_access_log.txt</file>
     <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>localhost_access_log.%d{yyyy-MM-dd}.txt.gz</fileNamePattern>
      </rollingPolicy>
      <encoder>
        <pattern>%h %l %u %t{yyyy-MM-dd HH:mm:ss} "%r" %s %b %D :%i{X-BDS-AuthToken}:%i{X-BDS-CorrelationID} :%i{User-Agent}</pattern>
      </encoder>
    </appender>

I would like to send them directly to Kafka, and this project seems like a good solution for that problem. Unfortunately, the logback-kafka-appender doesn't support IAccessEvent layout patterns, only ILoggingEvent layout patterns. It seems like I could get this to work by creating an encoding class derived from KafkaMessageEncoderBase. I'm about to give that a try, but do you know of a better approach, or do you already know this is not going to work for some reason?

Thanks for any suggestions,
Diane

NullPointerException

Hello, there's a NullPointerException in AsynchronousDeliveryStrategy (producer is null) with the following config:

<appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder class="com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%d{"yyyy-MM-dd HH:mm:ss,SSS"} %level [%X{project}] [%X{parentid}] \(%thread\) %logger{15}: %m
                -------------------------------------------------------------------------------%n
            </pattern>
        </layout>
    </encoder>
    <topic>ihorlogs</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
    <producerConfig>bootstrap.servers=localhost:9992</producerConfig>
    <producerConfig>acks=0</producerConfig>
    <producerConfig>linger.ms=100</producerConfig>
    <producerConfig>block.on.buffer.full=false</producerConfig>
    <filter class="ch.qos.logback.core.filter.EvaluatorFilter">
        <evaluator class="ch.qos.logback.classic.boolex.GEventEvaluator">
            <expression>
                e.mdc?.get("project").contains("ururu.projects")
            </expression>
        </evaluator>
        <OnMismatch>DENY</OnMismatch>
        <OnMatch>NEUTRAL</OnMatch>
    </filter>
</appender>

Any idea why?

Upgrade from 0.0.3 to 0.0.4 issue

Without any change to my logback.xml file, I get an IndexOutOfBoundsException after upgrading from 0.0.3 to 0.0.4.

<appender name="logFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/${myappName}.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- daily rollover -->
        <fileNamePattern>logs/${myappName}.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
        <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>22MB</maxFileSize>
        </timeBasedFileNamingAndTriggeringPolicy>
        <!-- keep 90 days' worth of history -->
        <maxHistory>90</maxHistory>
    </rollingPolicy>
    :

The error log:

17:35:47,015 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [logFileAppender]
17:35:47,066 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy - Will use gz compression
17:35:47,068 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy - Will use the pattern logs/myService.%d{yyyy-MM-dd}.log for the active file
17:35:47,071 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@369e8d51 - The date pattern is 'yyyy-MM-dd' from file name pattern 'logs/myService.%d{yyyy-MM-dd}.log.gz'.
17:35:47,071 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@369e8d51 - Roll-over at midnight.
17:35:47,074 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@369e8d51 - Setting initial period to Wed Nov 11 16:18:24 CST 2015
17:35:47,077 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@46:25 - RuntimeException in Action for tag [rollingPolicy] java.lang.IndexOutOfBoundsException: No group 1
at java.lang.IndexOutOfBoundsException: No group 1
at at java.util.regex.Matcher.group(Matcher.java:538)
at at ch.qos.logback.core.rolling.helper.FileFilterUtil.extractCounter(FileFilterUtil.java:109)
at at ch.qos.logback.core.rolling.helper.FileFilterUtil.findHighestCounter(FileFilterUtil.java:93)
at at ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP.computeCurrentPeriodsHighestCounterValue(SizeAndTimeBasedFNATP.java:70)
at at ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP.start(SizeAndTimeBasedFNATP.java:50)
at at ch.qos.logback.core.rolling.TimeBasedRollingPolicy.start(TimeBasedRollingPolicy.java:90)
at at ch.qos.logback.core.joran.action.NestedComplexPropertyIA.end(NestedComplexPropertyIA.java:167)
at at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:317)

Kafka producer connection being closed automatically

kafkaproducer_1 | Fri May 20 14:01:00,950 2016 org.apache.kafka.common.metrics.Metrics:? DEBUG Timer-2 Added sensor with name bufferpool-wait-time
kafkaproducer_1 | Fri May 20 14:01:00,950 2016 org.apache.kafka.common.metrics.Metrics:? DEBUG Timer-2 Added sensor with name buffer-exhausted-records
kafkaproducer_1 | Fri May 20 14:01:00,950 2016 org.apache.kafka.clients.producer.KafkaProducer:? INFO Timer-2 Closing the Kafka producer with timeoutMillis = 0 ms.
kafkaproducer_1 | Fri May 20 14:01:00,951 2016 org.apache.kafka.clients.producer.KafkaProducer:? DEBUG Timer-2 The Kafka producer has closed.
kafkaproducer_1 | Fri May 20 14:01:00,958 2016 org.apache.kafka.clients.producer.ProducerConfig:? INFO Timer-3 ProducerConfig values: 
kafkaproducer_1 |   compression.type = snappy
kafkaproducer_1 |   metric.reporters = []
kafkaproducer_1 |   metadata.max.age.ms = 300000
kafkaproducer_1 |   metadata.fetch.timeout.ms = 60000
kafkaproducer_1 |   reconnect.backoff.ms = 50
kafkaproducer_1 |   sasl.kerberos.ticket.renew.window.factor = 0.8
kafkaproducer_1 |   bootstrap.servers = [kafka:9092]
kafkaproducer_1 |   retry.backoff.ms = 100
kafkaproducer_1 |   sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafkaproducer_1 |   buffer.memory = 33554432
kafkaproducer_1 |   timeout.ms = 30000
kafkaproducer_1 |   key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
kafkaproducer_1 |   sasl.kerberos.service.name = null
kafkaproducer_1 |   sasl.kerberos.ticket.renew.jitter = 0.05
kafkaproducer_1 |   ssl.keystore.type = JKS
kafkaproducer_1 |   ssl.trustmanager.algorithm = PKIX
kafkaproducer_1 |   block.on.buffer.full = false
kafkaproducer_1 |   ssl.key.password = null
kafkaproducer_1 |   max.block.ms = 60000
kafkaproducer_1 |   sasl.kerberos.min.time.before.relogin = 60000
kafkaproducer_1 |   connections.max.idle.ms = 540000
kafkaproducer_1 |   ssl.truststore.password = null
kafkaproducer_1 |   max.in.flight.requests.per.connection = 5
kafkaproducer_1 |   metrics.num.samples = 2
kafkaproducer_1 |   client.id = logback-analytics-c74a3af45647-{hike.instanceid}
kafkaproducer_1 |   ssl.endpoint.identification.algorithm = null
kafkaproducer_1 |   ssl.protocol = TLS
kafkaproducer_1 |   request.timeout.ms = 30000
kafkaproducer_1 |   ssl.provider = null
kafkaproducer_1 |   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafkaproducer_1 |   acks = 1
kafkaproducer_1 |   batch.size = 16384
kafkaproducer_1 |   ssl.keystore.location = null
kafkaproducer_1 |   receive.buffer.bytes = 32768
kafkaproducer_1 |   ssl.cipher.suites = null
kafkaproducer_1 |   ssl.truststore.type = JKS
kafkaproducer_1 |   security.protocol = PLAINTEXT
kafkaproducer_1 |   retries = 2
kafkaproducer_1 |   max.request.size = 1048576
kafkaproducer_1 |   value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
kafkaproducer_1 |   ssl.truststore.location = null
kafkaproducer_1 |   ssl.keystore.password = null
kafkaproducer_1 |   ssl.keymanager.algorithm = SunX509
kafkaproducer_1 |   metrics.sample.window.ms = 30000
kafkaproducer_1 |   partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
kafkaproducer_1 |   send.buffer.bytes = 131072
kafkaproducer_1 |   linger.ms = 1000
kafkaproducer_1 | 

My logback appender config

    <!-- This is the KAFKA_APPENDER -->
    <!-- configuration is probably not so reliable under failure conditions but wont block your application at all -->
    <appender name="KAFKA_APPENDER" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <!-- This is the default encoder that encodes every log message to an utf8-encoded string  -->
        <encoder class="com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder">
            <layout class="ch.qos.logback.classic.PatternLayout">
                <pattern>%d{yyyy-MM-dd HH:mmz}|%m%n</pattern>
            </layout>
        </encoder>
        <!-- deny all events with a level below the mentioned one -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>${KAFKA_ANALYTICS_LOG_LEVEL:-INFO}</level>
        </filter>
        <topic>analytics-logs</topic>
        <!-- we don't care how the log messages will be partitioned  -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />

        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=${KAFKA_ANALTYICS_LOG_BROKERS}</producerConfig>
        <!-- The total bytes of memory the producer can use to buffer records waiting to be sent to the server default 32MB -->
        <producerConfig>buffer.memory=33554432</producerConfig>
        <producerConfig>acks=1</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- These methods can be blocked either because the buffer is full or metadata unavailable. default= 60000-->
        <producerConfig>max.block.ms=60000</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=logback-analytics-${HOSTNAME}</producerConfig>
        <!-- The compression type for all data generated by the producer. The default is none.  valid values: none, gzip, snappy  -->
        <producerConfig>compression.type=snappy</producerConfig>
        <!-- Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error -->
        <producerConfig>retries=2</producerConfig>        
        <!-- this is the fallback appender if kafka is not available. -->
        <appender-ref ref="KAFKA_APPENDER_FALLBACK"/>
    </appender>

It's not working with String serializer/deserializer

Hi,
I tried to configure a different value serializer for the producer, but it is not working.

My configuration is as below:

        <producerConfig>key.serializer=org.apache.kafka.common.serialization.StringSerializer</producerConfig>

When I remove this change and use the ByteArraySerializer, it works as normal.

@danielwegener Can you please explain?

Logback AsyncAppender

Is it still recommended to wrap the KafkaAppender in a Logback AsyncAppender when using Kafka client 0.9 or above, to prevent blocking?

Thanks

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Hi, when I configured the logback.xml file and changed the root level to "debug", the application logged a lot of "org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms." messages and blocked my application. If I change the root level in logback.xml to "info", it works fine. Why is this? Thanks a lot.

Exception log information: [screenshot]

Library information: [screenshot]

Kafka appender infinite loop ?

I tried to use the Kafka appender with a Spring Boot / Spring Cloud application and Kafka client 0.10.11. There was an error during startup, org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms., which caused the app to hang. I supposed that there is a problem with sending messages from Kafka itself. I changed logback-spring.xml:

	<logger name="kafka" additivity="false">
		<appender-ref ref="STDOUT" />
	</logger>
	<root level="info">
		<appender-ref ref="STDOUT" />
		<appender-ref ref="kafka-appender" />
	</root>

This workaround works, but log messages from the kafka logger are then not sent to Kafka. Is it possible to fix this problem?

Application not starting when Kafka Servers down

I'm using Spring Boot and the danielwegener Kafka appender to publish application logs to Kafka, but the Spring Boot application does not start when the Kafka servers are down.

Can you please advise how to resolve this issue?

<root level="INFO">
            <appender-ref ref="kafkaLogAppenderAsync"/>
    </root>
 <appender name="kafkaLogAppenderAsync" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="logAppender"/>
    </appender>

<appender name="logAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder">
            <layout class="ch.qos.logback.classic.PatternLayout">
                <pattern>concise-local - %msg</pattern>
                <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            </layout>
        </encoder>
        <topic>app-logs</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.RoundRobinKeyingStrategy"/>
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
        <producerConfig>bootstrap.servers=${kafka.bootstrap.servers}</producerConfig>
        <producerConfig>acks=0</producerConfig>
        <producerConfig>compression.type=gzip</producerConfig>
        <producerConfig>retries=2</producerConfig>
        <producerConfig>request.timeout.ms=10</producerConfig>
        <producerConfig>max.block.ms=800</producerConfig>
        <producerConfig>metadata.fetch.timeout.ms=10</producerConfig>
        <appender-ref ref="CONSOLE"/>
    </appender>

Would benefit from being able to use the generic Logstash encoder

Interested in a class implemented like this:

public class KafkaJSONMessageEncoder extends LoggingEventCompositeJsonEncoder implements KafkaMessageEncoder, LifeCycle {}

This would allow an encoder to be configured that can handle Logstash timezone adjustments AND custom pattern layouts (instead of all the canned Logstash overhead). It would look like this:

    <encoder class="com.github.danielwegener.logback.kafka.encoding.KafkaJSONMessageEncoder">
        <timeZone>UTC</timeZone>

        <providers>
            <timestamp/>

            <pattern>
                <pattern>
                    {"message": "%replace(%msg){'blahblah: ', ''}" }
                </pattern>
            </pattern>
        </providers>
    </encoder>

I have sample source if you're interested.

Need for a Json Message Encoder

My IT group, who maintain Logstash/Elasticsearch/Kibana, tell me I have to put JSON-formatted messages on my Kafka queue for Logstash to pull out. I see you do not provide an encoder which supports writing JSON or the format expected by Logstash. I tried using logstash-logback-encoder, but it looks like they are incompatible:

ERROR in ch.qos.logback.core.joran.util.PropertySetter@7d1b7c86 - A "net.logstash.logback.encoder.LogstashEncoder" object is not assignable to a "com.github.danielwegener.logback.kafka.encoding.KafkaMessageEncoder" variable.
ERROR in com.github.danielwegener.logback.kafka.KafkaAppender[kafkaAppender] - No encoder set for the appender named ["kafkaAppender"].

Is there any hope for writing to a JSON formatted string?

Not all messages delivered to Kafka

Hi,

I am working on a POC and trying to send 10000 logs in a loop to Kafka, but I can see that not all messages are delivered. I have also tried with a lower number of messages and got similar results. I am using the AsynchronousDeliveryStrategy and I have a 1 broker, 1 partition configuration; the rest of the settings are Kafka defaults. I've tried tweaking various settings (the batch size, the retries, etc.) and it doesn't appear to have an effect. I am trying this on my local Windows machine.
Please suggest what combination of configuration I should use for the above case.

My logback.xml file :

[flattened in the original: ConsoleAppender definitions targeting System.out and System.err, each with the pattern %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n, followed by:]
<appender name="kafkaFallbackFileAppender"
	class="ch.qos.logback.core.FileAppender">
	<file>kafka_fallback.log</file>
	<append>false</append>
	<encoder>
		<pattern>[[app]] %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} -
			%msg%n</pattern>
	</encoder>
</appender>

<appender name="very-relaxed-and-fast-kafka-appender"
	class="com.github.danielwegener.logback.kafka.KafkaAppender">
	<encoder
		class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
		<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
		</pattern>
	</encoder>
	<topic>test</topic>
	<keyingStrategy
		class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
	<deliveryStrategy
		class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

	<producerConfig>bootstrap.servers=localhost:9092</producerConfig>
	<producerConfig>acks=0</producerConfig>
	<producerConfig>client.id=CCS-test</producerConfig>
	<producerConfig>buffer.memory=268435456</producerConfig>
	<producerConfig>retries=3</producerConfig>
	<appender-ref ref="kafkaFallbackFileAppender"/>
</appender>
<root level="info">
	<appender-ref ref="very-relaxed-and-fast-kafka-appender" />
	<!-- <appender-ref ref="STDOUT" /> -->
</root>

Thanks
::Pankaj

Need for a message encoder

Hi, I am trying to use a PatternLayoutEncoder (the XML markup was stripped when posting; it set the level INFO and the pattern %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n) and get this error:
20:39:54,018 |-ERROR in ch.qos.logback.core.joran.util.PropertySetter@71dac704 - A "ch.qos.logback.classic.encoder.PatternLayoutEncoder" object is not assignable to a "com.github.danielwegener.logback.kafka.encoding.KafkaMessageEncoder" variable.
20:39:54,018 |-ERROR in ch.qos.logback.core.joran.util.PropertySetter@71dac704 - The class "com.github.danielwegener.logback.kafka.encoding.KafkaMessageEncoder" was loaded by
20:39:54,018 |-ERROR in ch.qos.logback.core.joran.util.PropertySetter@71dac704 - [org.springframework.boot.loader.LaunchedURLClassLoader@5d099f62] whereas object of type
20:39:54,018 |-ERROR in ch.qos.logback.core.joran.util.PropertySetter@71dac704 - "ch.qos.logback.classic.encoder.PatternLayoutEncoder" was loaded by [org.springframework.boot.loader.LaunchedURLClassLoader@5d099f62].

Tried with a different configuration (markup also stripped) instead, but it is still not working:

ERROR in ch.qos.logback.core.joran.spi.Interpreter@39:25 - no applicable action for [producerConfig], current ElementPath is [[configuration][appender][producerConfig]]
ERROR in ch.qos.logback.core.joran.action.AppenderRefAction - Could not find an AppenderAttachable at the top of execution stack. Near [appender-ref] line 42
at org.springframework.boot.logging.logback.LogbackLoggingSystem.loadConfiguration(LogbackLoggingSystem.java:169)

Can you please assist?

Cannot connect to Kafka > 0.9

The current version uses kafka-clients 0.9.0, which does not work with newer releases of Kafka (as of today, 0.10.1.1).

With Kafka 0.10, Java 8 is necessary even when still using the Scala 2.11 build, so changing the version will be a breaking change.

When bootstrap.servers are not available, Thread Stuck on logger.warn

<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="kafkaAppender" />
    </appender>

    <root level="INFO">
           <appender-ref ref="ASYNC" />
        <appender-ref ref="APPLICATION"/>

    </root>

   <dependency>
        <groupId>com.github.danielwegener</groupId>
        <artifactId>logback-kafka-appender</artifactId>
        <version>0.2.0-RC1</version>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>2.0.0</version>
    </dependency>


    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
        <version>1.2.3</version>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-core</artifactId>
        <version>1.2.3</version>
    </dependency>
<topic>mytopic</topic>
        <!-- Optional parameter to use a fixed partition -->
        <!-- <partition>0</partition> -->

        <!-- Optional parameter to include log timestamps into the kafka message -->
        <!-- <appendTimestamp>true</appendTimestamp> -->

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=wronghost:9092wronghost:9092,wronghost:9092</producerConfig>

        <producerConfig>compression.type=gzip</producerConfig>
        <producerConfig>retries=2</producerConfig>
        <!-- the maximum time to wait for the response of a sent message -->
        <producerConfig>request.timeout.ms=2500</producerConfig>
        <!-- the maximum time producer.send() and partitionsFor() will block waiting for acknowledgment -->
        <producerConfig>max.block.ms=10000</producerConfig>
        <!--The maximum amount of time in milliseconds to wait when reconnecting
        to a broker that has repeatedly failed to connect.
        If provided, the backoff per host will increase
        exponentially for each consecutive connection failure, up to this maximum.
        After calculating the backoff increase, 20% random jitter is added to avoid connection storms-->
        <producerConfig>reconnect.backoff.max.ms=1000</producerConfig>

        <producerConfig>max.block.ms=800</producerConfig>
        <producerConfig>metadata.fetch.timeout.ms=10</producerConfig>



        <!-- this is the fallback appender if kafka is not available. -->
       <!-- <appender-ref ref="FILE"/>
        <appender-ref ref="ERROR_FILE"/>
        <appender-ref ref="TRANSACTION_FILE"/>-->

        <appender-ref ref="APPLICATION"/>



    </appender>

The application hangs or gets stuck whenever it steps on a LOGGER.WARN statement.

Write to multiple topics

My requirement is to log to different/multiple topics. I see a note in the README that advises creating one appender per topic. My question is: how do I dynamically/programmatically choose the appender while logging?
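(One common logback pattern, sketched here with hypothetical names rather than taken from this thread: bind each topic's appender to a dedicated logger name and look up the matching logger in code.)

<logger name="topicA" additivity="false">
    <appender-ref ref="kafkaAppenderA" />
</logger>
<logger name="topicB" additivity="false">
    <appender-ref ref="kafkaAppenderB" />
</logger>

// in application code:
org.slf4j.Logger topicALogger = org.slf4j.LoggerFactory.getLogger("topicA");
topicALogger.info("this message goes to topic-a");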

Broker reconnection issue

Expected Behavior

We should be able to restart Kafka behind load balancers, or restart one of the bootstrap servers, without causing problems to the app itself.

Current Behavior

We put a load balancer in bootstrap.servers:

<appender name="kafkaOutAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder>
                <pattern></pattern>
        </encoder>
        <topic></topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
            <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
            <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
            <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=1.2.3.4:9092</producerConfig>
        <producerConfig>acks=0</producerConfig>
        <producerConfig>block.on.buffer.full=false</producerConfig>
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
        <producerConfig>compression.type=none</producerConfig>

        <producerConfig>max.block.ms=0</producerConfig>
</appender>

bootstrap.servers=1.2.3.4:9092 is a load balancer with three servers behind it.

After restarting one of the kafka brokers I get reconnection errors in app:

- [Producer clientId=logback-relaxed] Uncaught error in kafka producer I/O thread:
[3/14/18 10:15:33:622 CET] 000000de SystemOut     O [kafka-producer-network-thread | logback-relaxed] cid: clid: E a: o.a.k.c.p.internals.Sender java.lang.NullPointerException: null
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:436)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:399)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
- [Producer clientId=logback-relaxed] Uncaught error in kafka producer I/O thread:
[3/14/18 10:15:33:622 CET] 000000de SystemOut     O [kafka-producer-network-thread | logback-relaxed] cid: clid: E a: o.a.k.c.p.internals.Sender java.lang.NullPointerException: null
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:436)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:399)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
- [Producer clientId=logback-relaxed] Uncaught error in kafka producer I/O thread:

When bootstrap.servers are not available, stops spring boot app from running

When I start a Spring Boot application with a KafkaAppender and the endpoint is not accessible, then regardless of how I've mucked with the values, the appender stops my application from running, period. It just sits and waits for the connection, retrying over and over.

2015-11-06 14:50:16,385 workfront-eureka-server 0 0 [kafka-producer-network-thread | workfront-eureka-server] WARN  org.apache.kafka.common.network.Selector - Error in I/O with /192.168.99.100
java.net.ConnectException: Connection refused

    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_20]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) ~[na:1.8.0_20]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:238) ~[kafka-clients-0.8.2.1.jar!/:na]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [kafka-clients-0.8.2.1.jar!/:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [kafka-clients-0.8.2.1.jar!/:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [kafka-clients-0.8.2.1.jar!/:na]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_20]

My config looks like this:

        <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <!-- This is the default encoder that encodes every log message to an utf8-encoded string  -->
            <encoder class="com.github.danielwegener.logback.kafka.encoding.PatternLayoutKafkaMessageEncoder">
            <layout class="ch.qos.logback.classic.PatternLayout">
                <pattern>%date ${myappName} ${parentCallChainID} ${callChainID} [%thread] %.-5level %X{username} %logger - %msg%n</pattern>
            </layout>
            </encoder>
            <topic>logss</topic>
            <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.ContextNameKeyingStrategy" />
            <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

            <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
            <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
            <!-- bootstrap.servers is the only mandatory producerConfig -->
            <producerConfig>bootstrap.servers=192.168.99.100:9094</producerConfig>
            <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
            <producerConfig>linger.ms=1000</producerConfig>
            <!--  amount of time to wait before attempting to reconnect to a given host when a connection fails. This avoids a scenario where the client repeatedly attempts to connect to a host in a tight loop -->
            <producerConfig>reconnect.backoff.ms=490</producerConfig>
            <!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy  -->
            <producerConfig>compression.type=gzip</producerConfig>
            <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
            <producerConfig>block.on.buffer.full=false</producerConfig>
            <!-- specify source of request in human readable form -->
            <producerConfig>client.id=${myappName}</producerConfig>


            <!-- there IS a fallback <appender-ref>. -->
            <appender-ref ref="STDOUT"/>
    </appender>

I do not want the logging to stop the system from running. I have other appenders writing to local files. Please tell me I can configure this behavior somehow (i.e., let the system run without the Kafka appender, but of course keep trying to connect).

Async producer deadlocks

The async producer produces a nice deadlock when a broker dies and the producer buffer runs full (with block.on.buffer.full=true). This leads to a situation where the producer tries to reconnect and starts to log reconnect messages. These log messages are then delivered through the appender again and run into the blocking buffer.

Infinity loop if any message logged by kafka client library when kafka server is down.

For any logs from the kafka client library, the strategy is to defer these messages and drain them before the next log.

    @Override
    public void doAppend(E e) {
        ensureDeferredAppends();
        if (e instanceof ILoggingEvent && ((ILoggingEvent) e).getLoggerName().startsWith(KAFKA_LOGGER_PREFIX)) {
            deferAppend(e);
        } else {
            super.doAppend(e);
        }
    }

    private void deferAppend(E event) {
        queue.add(event);
    }

However, if the kafka server is down, this may lead to an infinite loop, especially when the log level is low (trace, debug, or even info).

production code uses the kafka appender to log --> the kafka appender sends the message to the kafka client library --> it cannot connect to the kafka server, so the kafka client library logs messages --> the kafka appender gets these messages and tries to send them through the kafka client library --> ...

Would there be any fix for this issue? How about we swallow logs from the kafka client library? Users would then have to configure another appender for the org.apache.kafka.clients logger.
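A sketch of that workaround in logback.xml: route the kafka-client loggers to a non-Kafka appender (STDOUT here) with additivity off, so their messages can never re-enter the KafkaAppender:

<logger name="org.apache.kafka.clients" additivity="false">
    <appender-ref ref="STDOUT" />
</logger>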

How to customize the message format

How can I customize the message format? Like this:

{"appname":"${appName}","ip":"%ip","time": "%date{yyyy-MM-dd HH:mm:ss.SSS}","thread": "%thread","level": "%level","class": "%logger{36}","message": "%message"}

I want to get the correct JSON format.
Thank you.
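(Sketch, not from the original thread: with logback 1.2 encoders, one possible approach is logstash-logback-encoder's composite encoder with a pattern provider, assuming that library is on the classpath:)

<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
        <pattern>
            <pattern>
                {"appname":"${appName}","time":"%date{yyyy-MM-dd HH:mm:ss.SSS}","thread":"%thread","level":"%level","class":"%logger{36}","message":"%message"}
            </pattern>
        </pattern>
    </providers>
</encoder>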

Using LogstashEncoder: why is all data included in the message field?

Hi. I use tag issue-51-logback12-encoders

<appender name="myKafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <topic>gateway</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.RoundRobinKeyingStrategy"/>
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
        <producerConfig>bootstrap.servers=192.168.10.126:9092</producerConfig>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
                <mdc/>
                <context/>
                <message/>
        </encoder>
    </appender>
{
      "message" => "{\"@timestamp\":\"2017-11-14T19:25:28.700+08:00\",\"@version\":1,\"message\":\"test\",\"logger_name\":\"com.lvkerry.test.TestLogBack\",\"thread_name\":\"main\",\"level\":\"INFO\",\"level_value\":20000,\"name1\":\"value1\"}\r\n",
     "@version" => "1",
    "@timestamp" => 2017-11-14T11:24:46.147Z
}

Why is all the data in the message field?
-Thanks

How to exclude messages

I have a mixed bag of JSON and string messages and would like to avoid sending the simple string messages. How can I do that?

Does this appender support connecting to a Kafka cluster?

Does this appender support connecting to a Kafka cluster?
I have launched a single Kafka broker, and it works.
But when I use "bootstrap.servers" to set "kafka1:9092,kafka2:9092,kafka3:9092",
the appender warns about my cluster id and the project can't complete bootstrapping.

Messages being dropped by async delivery and not going to fallback appender

Hi:

I have been experimenting with logback-kafka-appender in order to pipe messages from an app into our Kafka cluster. I am using the AsynchronousDeliveryStrategy, and when cross-checking the output that I've redirected from STDOUT into a local log file, I've noticed that some messages are missing from what I ultimately get back out of Kafka (using the kafka-console-consumer.sh script that ships with the Kafka binaries).

I was somewhat surprised, as while the volume isn't small, I wouldn't consider it huge on a big-data scale (roughly 61000 messages delivered over the course of 2 minutes, some at faster bursts than others). I've tried tweaking various settings (linger.ms, the batch size, the retries, etc.) and it doesn't appear to have an effect.

I figured I would then implement a fallback appender which would write to its own log file, which I could then push into Kafka at the app's end as a backfill. However, the log for the appender ends up being a 0-byte file. I attached to the app remotely via the Eclipse debugger, and set breakpoints on the places where onFailedDelivery is called, and did not hit any of them. I also profiled the code and did not see any relevant exceptions thrown.

I've tried the BlockingDeliveryStrategy, and while all the messages were delivered, it ended up causing a near doubling of the runtime of the app, so that really isn't an option. If the fallback appender could work, I am fine with that, but currently I seem to be at a crossroads where messages seem to be disappearing without fallback.

Have you seen anything like this? Is there anything you would suggest I try? I'm pasting in my logback.xml below (Github won't let me attach it as an xml nor a txt file).

Thanks!

-- Ken

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>[[app]] %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>


  <appender name="kafkaFallbackFileAppender" class="ch.qos.logback.core.FileAppender">
    <file>kafka_fallback.log</file>

    <!-- clobber -->
    <append>false</append>

    <!-- encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
    <encoder>
      <pattern>[[app]] %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>


  <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
            <!-- This is the default encoder that encodes every log message to an utf8-encoded string  -->
            <encoder class="com.github.danielwegener.logback.kafka.encoding.PatternLayoutKafkaMessageEncoder">
                <layout class="ch.qos.logback.classic.PatternLayout">
                    <pattern>[[app]] %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg</pattern>
                </layout>
            </encoder>

            <topic>logger-dev</topic>

            <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.RoundRobinKeyingStrategy" />

            <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

            <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
            <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
            <!-- bootstrap.servers is the only mandatory producerConfig -->
            <producerConfig>bootstrap.servers=kafka:9092</producerConfig>


            <producerConfig>acks=all</producerConfig>

            <producerConfig>block.on.buffer.full=true</producerConfig>

            <producerConfig>buffer.memory=268435456</producerConfig>

        <producerConfig>retries=3</producerConfig>

        <producerConfig>batch.size=32768</producerConfig>

        <!-- Fallback to a file so we can batch send them at the end. -->
            <appender-ref ref="kafkaFallbackFileAppender"/>
 </appender>

  <root level="INFO">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="kafkaAppender" />
  </root>
</configuration>

Typo

The error message "Hostname could not be found in context. HostNamePartitioningStrategy will not work." and the variable name "hostname" look like copy-paste leftovers from HostNameKeyingStrategy: ContextNameKeyingStrategy keys on the context name, not the hostname.

ContextNameKeyingStrategy (the problem code):

@Override
public void setContext(Context context) {
    super.setContext(context);
    final String hostname = context.getProperty(CoreConstants.CONTEXT_NAME_KEY);
    if (hostname == null) {
        addError("Hostname could not be found in context. HostNamePartitioningStrategy will not work.");
    } else {
        contextNameHash = ByteBuffer.allocate(4).putInt(hostname.hashCode()).array();
    }
}

HostNameKeyingStrategy:

@Override
public void setContext(Context context) {
    super.setContext(context);
    final String hostname = context.getProperty(CoreConstants.HOSTNAME_KEY);
    if (hostname == null) {
        if (!errorWasShown) {
            addError("Hostname could not be found in context. HostNamePartitioningStrategy will not work.");
            errorWasShown = true;
        }
    } else {
        hostnameHash = ByteBuffer.allocate(4).putInt(hostname.hashCode()).array();
    }
}
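
A fixed version of ContextNameKeyingStrategy would only need to rename the variable and adjust the message to match the context-name semantics (a sketch based on the code above):

@Override
public void setContext(Context context) {
    super.setContext(context);
    // Key on the logback context name, not the hostname.
    final String contextName = context.getProperty(CoreConstants.CONTEXT_NAME_KEY);
    if (contextName == null) {
        addError("Context name could not be found in context. ContextNameKeyingStrategy will not work.");
    } else {
        contextNameHash = ByteBuffer.allocate(4).putInt(contextName.hashCode()).array();
    }
}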

SLF4J Substitute Logger Warnings during startup

Kafka clients use slf4j as their logging framework. Thus, during the startup of the KafkaAppender, i.e. while the slf4j ILoggerFactory is still being constructed, the Kafka client already tries to log through slf4j. This is a chicken-and-egg problem. Logback solves it by introducing SubstituteLoggers (thin proxies) that the logger factory hands out during its startup.

If an application (like the starting KafkaProducer) tries to log to a SubstituteLogger during startup, the SLF4J API prints warning messages on stderr like:

SLF4J: The following set of substitute loggers may have been accessed
SLF4J: during the initialization phase. Logging calls during this
SLF4J: phase were not honored. However, subsequent logging calls to these
SLF4J: loggers will work as normally expected.
SLF4J: See also http://www.slf4j.org/codes.html#substituteLogger
SLF4J: org.apache.kafka.clients.producer.KafkaProducer
SLF4J: org.apache.kafka.common.network.Selector
SLF4J: org.apache.kafka.clients.NetworkClient
SLF4J: org.apache.kafka.clients.producer.internals.Sender
SLF4J: org.apache.kafka.clients.producer.internals.RecordAccumulator
SLF4J: org.apache.kafka.clients.producer.ProducerConfig
SLF4J: org.apache.kafka.common.metrics.JmxReporter
SLF4J: org.apache.kafka.clients.producer.internals.Metadata
SLF4J: org.apache.kafka.common.utils.KafkaThread

This is not so much a real problem as an annoyance: it gives the wrong impression that something is seriously broken.

I tried to silence those loggers during the creation of the KafkaAppender by replacing the temporary delegate loggers of the KafkaProducer (and its internals) with NoopLoggers, but this is 1. ugly, 2. a bad idea because it relies on Kafka's internal usage of slf4j loggers (which may change in unpredictable ways), and 3. does not silence the warning output anyway, because merely fetching a logger instance via LoggerFactory.getLogger(...) during the ILoggerFactory construction phase seems to be enough to trigger the warning.

Another idea would be to delay the construction of the KafkaProducer until the first real logging attempt, but this would introduce further synchronization logic.
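
For what it's worth, a minimal sketch of such deferred construction, assuming a plain producer-config map (the holder class and its names are illustrative, not the appender's actual code):

import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

// Illustrative only: create the producer lazily on the first append,
// using double-checked locking so only one thread constructs it.
class LazyProducerHolder {
    private final Map<String, Object> producerConfig;
    private volatile Producer<byte[], byte[]> producer;

    LazyProducerHolder(Map<String, Object> producerConfig) {
        this.producerConfig = producerConfig;
    }

    Producer<byte[], byte[]> get() {
        Producer<byte[], byte[]> local = producer;
        if (local == null) {
            synchronized (this) {
                local = producer;
                if (local == null) {
                    local = new KafkaProducer<>(producerConfig);
                    producer = local;
                }
            }
        }
        return local;
    }
}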

Does anyone have a better idea here?

Endless Kafka stacktraces when bad producer config

When the producer config is wrong, createProducer raises an exception.
Since this method is called by the LazyInitializer when a log event arrives, every log event tries to create a producer, fails, and a stack trace is logged.
Prior to 0.2-RC1, the producer config check may have mitigated this issue.

Some solutions I can see:

  • Let the producer be created when the KafkaAppender starts (eager initialization); this would bring fail-fast behaviour but may slow down startup.
  • Add some circuit-breaker behaviour in the LazyInitializer and allow only one producer-creation attempt (see the sketch below).
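
A rough sketch of the second option, assuming a producer-config map and a caller that falls back whenever null is returned (all names are illustrative):

import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

// Illustrative only: attempt producer creation exactly once; after a failed
// attempt, return null immediately so append() can use the fallback appender
// instead of logging the same stack trace for every event.
class OneShotProducerInitializer {
    private final Map<String, Object> producerConfig;
    private final AtomicBoolean attempted = new AtomicBoolean(false);
    private volatile Producer<byte[], byte[]> producer; // stays null on failure

    OneShotProducerInitializer(Map<String, Object> producerConfig) {
        this.producerConfig = producerConfig;
    }

    Producer<byte[], byte[]> getOrNull() {
        if (attempted.compareAndSet(false, true)) {
            try {
                producer = new KafkaProducer<>(producerConfig);
            } catch (RuntimeException e) {
                // breaker stays open; the caller should route to the fallback
            }
        }
        return producer; // may be null for threads racing the first attempt
    }
}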

Use the standard Logback Encoder interface instead of custom KafkaMessageEncoder

I'd like to plug https://github.com/logstash/logstash-logback-encoder into the KafkaAppender to send JSON-formatted logs.

As a workaround, I just wrote an adapter:

import ch.qos.logback.core.encoder.Encoder;
import com.github.danielwegener.logback.kafka.encoding.KafkaMessageEncoder;

public class LogbackKafkaEncoder<E> implements KafkaMessageEncoder<E> {

    private Encoder<E> encoder;

    public Encoder<E> getEncoder() {
        return encoder;
    }

    public void setEncoder(Encoder<E> encoder) {
        this.encoder = encoder;
    }

    @Override
    public byte[] doEncode(E event) {
        // Delegate to the wrapped standard Logback encoder (logback >= 1.2 API).
        return encoder.encode(event);
    }
}

and

    <appender name="kafka" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <topic>logstash</topic>
        <encoder class="fr.xxx.LogbackKafkaEncoder">
            <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
        </encoder>
    </appender>

AutoScan not working with Appender properties

I am trying to use the TurboFilter called ReconfigureOnChangeFilter to change appender properties such as the broker list and the topic, but the changes don't take effect with the KafkaAppender.
http://logback.qos.ch/manual/configuration.html#autoScan

Can somebody please point me to how I can implement this quickly for the KafkaAppender? We have an environment where we might need to change the brokers or the topic name, maybe not so frequently, but preferably without restarting the server.

thanks
Vidhya

Springboot 1.3.0: LogbackLoggingSystem failed while loading KafkaAppenderConfig if no keyingStrategy is configured

When using logback-kafka-appender with spring-boot 1.3.0.RELEASE, the LogbackLoggingSystem fails while loading the KafkaAppenderConfig when no keyingStrategy is defined in the logback-kafka.xml.

This is because of an implementation change in Spring Boot's LogbackLoggingSystem: it now scans the logger context's status manager for errors and throws an exception if at least one error was detected (the former version only threw an exception if it had caught one before).

Since a default keying strategy is defined, it is not necessary to log the missing configuration as an error; an info message (as is used for the deliveryStrategy) should be fine.

Workaround: just define the default keyingStrategy explicitly in your logback-kafka.xml:
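
Assuming the default is the NoKeyKeyingStrategy, that would be:

<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />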

Support for Kafka 0.8.2.1

Hi,
I was wondering when version 0.0.3 will be released.

I'm using logback-kafka-appender to send messages to Kafka 0.8.2.1 and I see that version 0.0.2 uses Kafka 0.8.2.0 in pom.xml.

Thanks.

Support for ZooKeeper addresses of Kafka

I tried to send topic messages using the ZooKeeper address of the Kafka cluster, but it failed. Does logback-kafka-appender support sending Kafka messages via ZooKeeper?

Flushing Appender

What is the best way to make sure that all messages are flushed to Kafka before a program exits?

I realize this is more of a general logback question.
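
One common approach (general Logback, not specific to this appender) is to stop the logger context on shutdown; stopping the context stops all appenders, which lets the KafkaAppender close its producer and flush buffered records. A minimal sketch:

import ch.qos.logback.classic.LoggerContext;
import org.slf4j.LoggerFactory;

public final class FlushOnExit {
    public static void main(String[] args) {
        // Stopping the context stops all appenders before the JVM exits.
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                ((LoggerContext) LoggerFactory.getILoggerFactory()).stop()));

        // ... application code ...
    }
}

Newer Logback versions can also install such a hook declaratively via a <shutdownHook> configuration element.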

Blocked threads on synchronized block

Hi,
We're using the configuration below. The application was working fine for a few weeks; one day it slowed down and a lot of threads showed up as blocked. There is no deadlock in the thread dump. All applications are in the same network/DC, so there shouldn't be network issues. The Kafka system doesn't seem to have any resource-related (CPU/memory) issues.
I'm not sure of the root cause yet. Please let me know if you need more information.

  <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.RoundRobinKeyingStrategy" />
  <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

  <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
  <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
  <!-- bootstrap.servers is the only mandatory producerConfig -->
  <producerConfig>bootstrap.servers=kafka-1:9092,kafka-2:9092,kafka-3:9092</producerConfig>
  <!-- buffer up to 32MB of batched records (33554432 bytes is the client default) -->
  <producerConfig>buffer.memory=33554432</producerConfig>
  <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
  <!-- <producerConfig>block.on.buffer.full=false</producerConfig> -->
  <producerConfig>acks=1</producerConfig>
  <producerConfig>retries=1</producerConfig>
  <!-- The compression type for all data generated by the producer. -->
  <producerConfig>compression.type=gzip</producerConfig>
  <!-- Batch records into fewer requests; batch size in bytes -->
  <producerConfig>batch.size=163840</producerConfig>
  <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
  <producerConfig>linger.ms=1000</producerConfig>
  <!-- This configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor()
       will block. These methods can block either because the buffer is full
       or because metadata is unavailable -->
  <producerConfig>max.block.ms=2000</producerConfig>

"pool-7-thread-97" #3652 prio=5 os_prio=0 tid=0x00007fb834093800 nid=0x1106d waiting for monitor entry [0x00007fb6c9617000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at com.github.danielwegener.logback.kafka.KafkaAppender$LazyProducer.get(KafkaAppender.java:150)
        - waiting to lock <0x0000000715781fe8> (a com.github.danielwegener.logback.kafka.KafkaAppender$LazyProducer)
        at com.github.danielwegener.logback.kafka.KafkaAppender.append(KafkaAppender.java:118)
        at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:84)
        at com.github.danielwegener.logback.kafka.KafkaAppender.doAppend(KafkaAppender.java:51)
        at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
        at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
        at ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
        at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:422)
        at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:384)
        at ch.qos.logback.classic.Logger.debug(Logger.java:495)

Handle log-attempts with payload that is too big for the configured producer

As discussed in #14, it would be good to handle cases where logback attempts to send messages that are bigger than the configured maximum message size.
Such a case could result in warnings emitted through Logback's status system.
Modifying (truncating) a message is in general not possible because we do not know the actual binary format.
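
A rough sketch of such a guard, under the assumption that the appender mirrors the producer's max.request.size setting and warns via the status system whenever it drops an event (all names are illustrative):

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Illustrative only: refuse payloads larger than the configured limit so the
// appender can emit a single status warning instead of failing the send with
// a RecordTooLargeException.
final class PayloadSizeGuard {
    private final int maxRequestSize; // mirrors the producer's max.request.size

    PayloadSizeGuard(int maxRequestSize) {
        this.maxRequestSize = maxRequestSize;
    }

    /** Returns false if the payload was rejected; the caller should warn and fall back. */
    boolean trySend(Producer<byte[], byte[]> producer, String topic, byte[] payload) {
        if (payload.length > maxRequestSize) {
            return false;
        }
        producer.send(new ProducerRecord<>(topic, payload));
        return true;
    }
}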

Kafka producer is not able to update metadata

Hi,

I am getting below exception with 3 kafka brokers cluster and 1 zookeeper node.
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 10000 ms.

Kafka Appender config in logback.xml:

<!-- This is the kafkaAppender -->
<appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
  <!-- This is the default encoder that encodes every log message to an utf8-encoded string -->
  <encoder class="com.github.danielwegener.logback.kafka.encoding.PatternLayoutKafkaMessageEncoder">
    <layout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%d{yyyy-MM-dd HH:mm:ss} </pattern>
    </layout>
  </encoder>
  <topic>mytopic</topic>
  <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.RoundRobinKeyingStrategy" />
  <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

  <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
  <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
  <!-- bootstrap.servers is the only mandatory producerConfig -->
  <producerConfig>bootstrap.servers=localhost:9094,localhost:9092,localhost:9093</producerConfig>
  <producerConfig>max.block.ms=10000</producerConfig>
  <producerConfig>reconnect.backoff.ms=100</producerConfig>
  <producerConfig>retry.backoff.ms=100</producerConfig>
  <producerConfig>linger.ms=100</producerConfig>
  <producerConfig>batch.size=2</producerConfig>
  <producerConfig>block.on.buffer.full=false</producerConfig>
  <producerConfig>buffer.memory=2147483647</producerConfig>
</appender>

producerConfig: {block.on.buffer.full=false, value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer, max.block.ms=10000, reconnect.backoff.ms=100, batch.size=2, bootstrap.servers=localhost:9094,localhost:9092,localhost:9093, retry.backoff.ms=100, buffer.memory=2147483647, key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer, linger.ms=100}

Using Kafka and kafka-clients version 0.10.1.1 in the Gradle build file:

runtime('org.apache.kafka:kafka_2.11:0.10.1.1') {
    exclude group: 'org.slf4j', module: 'slf4j-log4j12'
    exclude group: 'log4j', module: 'log4j'
}
runtime('org.apache.kafka:kafka-clients:0.10.1.1')

Using the following classes:
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

C:\kafka_2.11-0.10.1.1\bin\windows>kafka-topics.bat --describe --zookeeper localhost:2181/kafka --topic mytopic
Topic:mytopic   PartitionCount:1        ReplicationFactor:3     Configs:
        Topic: mytopic  Partition: 0    Leader: 0       Replicas: 0,2,1 Isr: 0,1,2

Please let me know for any additional details.
Any help is highly appreciated.

Regards,
Venkat
