fantasticfiasco / serilog-sinks-http

A Serilog sink sending log events over HTTP.

License: Apache License 2.0

Languages: C# 98.29%, PowerShell 1.41%, JavaScript 0.30%

Topics: serilog, sink, http, hacktoberfest

serilog-sinks-http's Introduction

Serilog.Sinks.Http - A Serilog sink sending log events over HTTP


Package - Serilog.Sinks.Http | Platforms - .NET 4.5/4.6.1, .NET Standard 2.0/2.1


Introduction

This project started out with a wish to send log events to the Elastic Stack. I had prior experience with Elastic Filebeat and didn't like it; I thought the value it added was lower than the complexity it introduced.

Knowing that Serilog.Sinks.Seq existed, and knowing that the code was of really good quality, I blatantly copied many of the core files into this project and started developing a general HTTP sink.

And here we are today. I hope you'll find the sink useful. If not, don't hesitate to open an issue.

Super simple to use

In the following example, the sink will POST log events to http://www.mylogs.com over HTTP. We configure the sink using named arguments instead of positional ones because, historically, most breaking changes were the result of a new parameter describing a new feature. Using named arguments means that you can more often than not migrate to new major versions without any changes to your code.

ILogger log = new LoggerConfiguration()
  .MinimumLevel.Verbose()
  .WriteTo.Http(requestUri: "https://www.mylogs.com", queueLimitBytes: null)
  .CreateLogger();

log.Information("Logging {@Heartbeat} from {Computer}", heartbeat, computer);

Used in conjunction with Serilog.Settings.Configuration, the same sink can be configured in the following way:

{
  "Serilog": {
    "MinimumLevel": "Verbose",
    "WriteTo": [
      {
        "Name": "Http",
        "Args": {
          "requestUri": "https://www.mylogs.com",
          "queueLimitBytes": null
        }
      }
    ]
  }
}

The sink can also be configured to be durable, i.e. log events are persisted on disk before being sent over the network, thus protected against data loss after a system or process restart. For more information please read the wiki.
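As a rough illustration (a sketch only; the available durable variants and their parameter names vary between versions of the sink, and the buffer file name here is an assumption), a durable sink can be configured like this:

ILogger log = new LoggerConfiguration()
  .MinimumLevel.Verbose()
  .WriteTo.DurableHttpUsingFileSizeRolledBuffers(
    requestUri: "https://www.mylogs.com",
    bufferBaseFileName: "Buffer") // buffer files are written relative to the working directory
  .CreateLogger();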

The sink batches multiple log events into a single request, and a hypothetical payload like the following is sent over the network as JSON.

[
  {
    "Timestamp": "2016-11-03T00:09:11.4899425+01:00",
    "Level": "Information",
    "MessageTemplate": "Logging {@Heartbeat} from {Computer}",
    "RenderedMessage": "Logging { UserName: \"Mike\", UserDomainName: \"Home\" } from \"Workstation\"",
    "Properties": {
      "Heartbeat": {
        "UserName": "Mike",
        "UserDomainName": "Home"
      },
      "Computer": "Workstation"
    }
  },
  {
    "Timestamp": "2016-11-03T00:09:12.4905685+01:00",
    "Level": "Information",
    "MessageTemplate": "Logging {@Heartbeat} from {Computer}",
    "RenderedMessage": "Logging { UserName: \"Mike\", UserDomainName: \"Home\" } from \"Workstation\"",
    "Properties": {
      "Heartbeat": {
        "UserName": "Mike",
        "UserDomainName": "Home"
      },
      "Computer": "Workstation"
    }
  }
]

Typical use cases

Producing log events is only half the story. Unless you are consuming them in a manner that benefits you in development or production, there is really no need to produce them in the first place.

Integration with the Elastic Stack (formerly known as ELK, an acronym for Elasticsearch, Logstash and Kibana) is powerful beyond belief, but there are many alternatives for getting the log events into Elasticsearch.

Send log events from Docker containers

A common solution, given your application is running in Docker containers, is to have stdout (standard output) and stderr (standard error) passed on to the Elastic Stack. There is a multitude of ways to accomplish this, but one using Logspout is linked in the Sample applications chapter.

Send log events to Elasticsearch

The log events can be sent directly to Elasticsearch using Serilog.Sinks.Elasticsearch. In this case you've solved your problem without using this sink, and all is well in the world.

Send log events to Logstash

If you would like to send the log events to Logstash for further processing instead of sending them directly to Elasticsearch, this sink in combination with the Logstash HTTP input plugin is the perfect match for you. It is a much better solution than having to install Filebeat on all your instances, mainly because it involves fewer moving parts.

Sample applications

The following sample applications demonstrate the usage of this sink in various contexts:

The following sample application demonstrates how Serilog events from a Docker container end up in the Elastic Stack using Logspout, without using Serilog.Sinks.Http.

Install via NuGet

If you want to include the HTTP sink in your project, you can install it directly from NuGet.

To install the sink, run the following command in the Package Manager Console:

PM> Install-Package Serilog.Sinks.Http

Contributors

The following users have made significant contributions to this project. Thank you so much!

JetBrains πŸš‡
C. Augusto Proiete πŸ’΅ πŸ’¬ πŸ’» πŸ€”
Louis Haußknecht πŸ’» πŸ€” πŸ›
rob-somerville πŸ’» πŸ€” πŸ›
Kevin Petit πŸ’» πŸ€” πŸ›
aleksaradz πŸ’» πŸ€” πŸ›
michaeltdaniels πŸ€”
dusse1dorf πŸ€”
vaibhavepatel πŸ› πŸ’» πŸ€”
KalininAndreyVictorovich πŸ› πŸ’»
Sergios πŸ›
Anton Smolkov πŸ’» πŸ›
Michael J Conrad πŸ“–
Yuriy Sountsov πŸ€”
Yuriy Millen πŸ’» πŸ€”
murlidakhare πŸ›
julichan πŸ›
janichirag11 πŸ›
PaulP πŸ›


serilog-sinks-http's Issues

Custom HttpClient in ASP.NET (not Core)

Hi,

I've implemented the HTTP sink several times in both .NET Core 1.1 and 2.0 using a custom HttpClient. I need to implement the same on an old .NET Framework 4.6.2 WebForms/MVC website.

I've set everything up correctly but I get an error using a custom HttpClient, code from
https://github.com/FantasticFiasco/serilog-sinks-http/blob/master/test/Serilog.Sinks.HttpTests/LogServer/TestServerHttpClient.cs

The error is
Method 'PostAsync' in type 'SerilogTest.SerilogHttpClient' from assembly 'App_Code.frjuxum6, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' does not have an implementation.

Has anyone come across this issue before?

Sink sends empty bodies to endpoint

Step 1: Describe your environment

Web Api running on Azure web app
.NET Framework 4.7
Serilog 2.5.0
Serilog.Sinks.RollingFile 3.3.0
Serilog.Sinks.PeriodicBatching 2.1.1
Serilog.Sinks.Http 4.2.0

Endpoint is a custom log storage api.

Step 2: Describe the problem

I have noticed the sink is sending HttpContent with an empty body to the HttpClient. Digging a bit deeper I see that, in HttpLogShipper.OnTick(), ReadPayload is always called and creates an empty payload when the end of file is reached. The code that follows will send the payload every 2 minutes (due to nextRequiredLevelCheckUtc), even if the payload is empty.

Is this behaviour by design (sending empty bodies)? If yes, why should the endpoint accept empty bodies?

Observed results

The sink is sending requests to the configured endpoint with empty bodies, resulting in my case in a 400 Bad Request response.

Expected results

The sink should not attempt to send anything unless there are events to dispatch to the endpoint.

Support Basic Auth

The HTTP input for Logstash supports basic auth, but I can't see anywhere that it can be configured in this plugin. Am I correct?

It is a nice, simple way of stopping attackers from spamming a logging endpoint with junk events.

Provide the ability to use a Custom formatter

Hi,

The possibility to select a formatter based on a type is good but too limited.
For a specific use case I want to be able to use a custom formatter, but with this sink I can't.
The majority of Serilog sinks have this possibility.
Please consider adding it.

Thanks.

never sends HTTP request

I'm unable to get this sink to send anything to my Logstash server.

I'm using this code in a .NET Core 3.1 console application:

Serilog.Debugging.SelfLog.Enable(msg => Debug.WriteLine(msg));

var log = new LoggerConfiguration()
    .WriteTo.Console()
    .WriteTo.Http("http://ularge1:8080")
    .CreateLogger();

log.Information("Testing");

The testing message appears in the console, but the log entry is never sent to my server. I'm able to send something with curl and have no problem.

What debugging mechanisms are available?

Lower the bar of providing configuration to `IHttpClient`

Is your feature request related to a problem? Please describe.
It's easy to work with the IHttpClient parameter when you configure your sink in source code, but if you configure your sinks using appsettings.json or any other Microsoft.Extensions.Configuration source it becomes problematic. Serilog.Settings.Configuration requires a default constructor, thus constructor dependency injection is not possible. All other alternatives seem to have the wrong level of indirection, where the implementation of the HTTP client references some static variable that holds the configuration.

Describe the solution you'd like
Configuring the HTTP client should be easy, even when your application uses Microsoft.Extensions.Configuration.

Any chance you could upgrade rolling files, and provide a new sink for it?

So I have just spent quite a bit of time messing about with what I thought would be working just fine, but there seems to be an issue with the package you depend on for the durable files. Right now you depend on this package:

"Serilog.Sinks.RollingFile" Version="3.*"

https://github.com/FantasticFiasco/serilog-sinks-http/blob/master/src/Serilog.Sinks.Http/Serilog.Sinks.Http.csproj#L28

But there is a lot of chat about this not rolling the files correctly on its own GitHub page: serilog/serilog-sinks-rollingfile#26

This seems to have been fixed in v4.0.0.0 of a different (newer) sink, which is available on NuGet:

https://www.nuget.org/packages/serilog.sinks.file/

This is what the RollingFile page is recommending people use now anyway.

Just wondering if you could push out a new version of your sink, potentially adding another sink that works with this newer recommended serilog.sinks.file (with actually working rolling files based on size).

I would see this as one new sink extension method, say something like DurableHttpUsingSeriLogSizeRolled (or whatever you like). This would then use a brand new sink, which would internally use the newer recommended serilog.sinks.file.

Basically, you cannot rely on this sink doing the right thing right now, due to the dependency it has on the old Serilog.Sinks.RollingFile, which doesn't roll properly.

The trick will be to make sure you take all the parameters required to create this sink:

/// <summary>
/// Write log events to the specified file.
/// </summary>
/// <param name="path">Path to the file.</param>
/// <param name="formatter">A formatter, such as <see cref="JsonFormatter"/>, to convert the log events into
/// text for the file. If control of regular text formatting is required, use the other
/// overload of <see cref="File(LoggerSinkConfiguration, string, LogEventLevel, string, IFormatProvider, long?, LoggingLevelSwitch, bool, bool, TimeSpan?, RollingInterval, bool, int?, Encoding)"/>
/// and specify the outputTemplate parameter instead.
/// </param>
/// <param name="fileSizeLimitBytes">The approximate maximum size, in bytes, to which a log file will be allowed to grow.
/// For unrestricted growth, pass null. The default is 1 GB. To avoid writing partial events, the last event within the limit
/// will be written in full even if it exceeds the limit.</param>
/// <param name="buffered">Indicates if flushing to the output file can be buffered or not. The default
/// is false.</param>
/// <param name="retainedFileCountLimit">The maximum number of log files that will be retained,
/// including the current log file. For unlimited retention, pass null. The default is 31.</param>
/// <param name="encoding">Character encoding used to write the text file. The default is UTF-8 without BOM.</param>
/// <param name="buffered">Indicates if flushing to the output file can be buffered or not. The default
/// is false.</param>
/// <param name="shared">Allow the log file to be shared by multiple processes. The default is false.</param>
/// <param name="rollingInterval">The interval at which logging will roll over to a new file.</param>
/// <param name="rollOnFileSizeLimit">If <code>true</code>, a new file will be created when the file size limit is reached. Filenames 
/// will have a number appended in the format <code>_NNN</code>, with the first filename given no number.</param>


new RollingFileSink(path, formatter, fileSizeLimitBytes, retainedFileCountLimit, encoding, buffered, shared, rollingInterval, rollOnFileSizeLimit);

How to stop event batching when using serilog-sinks-http sink

Log events produced using serilog-sinks-http are being batched into a single request. Is there any way to restrict this, so that in Elastic I can have one log per document and get better Kibana support?

The current log looks like this:
{ "events": [ { "Timestamp": "2016-11-03T00:09:11.4899425+01:00", "Level": "Information", "MessageTemplate": "Logging {@Heartbeat} from {Computer}", "RenderedMessage": "Logging { UserName: \"Mike\", UserDomainName: \"Home\" } from \"Workstation\"", "Properties": { "Heartbeat": { "UserName": "Mike", "UserDomainName": "Home" }, "Computer": "Workstation" } }, { "Timestamp": "2016-11-03T00:09:12.4905685+01:00", "Level": "Information", "MessageTemplate": "Logging {@Heartbeat} from {Computer}", "RenderedMessage": "Logging { UserName: \"Mike\", UserDomainName: \"Home\" } from \"Workstation\"", "Properties": { "Heartbeat": { "UserName": "Mike", "UserDomainName": "Home" }, "Computer": "Workstation" } } ] }

Expected log: just the object, not the collection:
{ "event": { "Timestamp": "2016-11-03T00:09:11.4899425+01:00", "Level": "Information", "MessageTemplate": "Logging {@Heartbeat} from {Computer}", "RenderedMessage": "Logging { UserName: \"Mike\", UserDomainName: \"Home\" } from \"Workstation\"", "Properties": { "Heartbeat": { "UserName": "Mike", "UserDomainName": "Home" }, "Computer": "Workstation" } } }

Add netstandard2.0 target

Recommendation from the cross-platform targeting guidelines.

"DO include a netstandard2.0 target if you require a netstandard1.x target.

All platforms supporting .NET Standard 2.0 will use the netstandard2.0 target and benefit from having a smaller package graph while older platforms will still work and fall back to using the netstandard1.x target."

Add fluentd support

Is your feature request related to a problem? Please describe.
At the moment it's not possible to write logs to Fluentd. It's possible to enable the HTTP plugin there, but it accepts data in a different format.

Describe the solution you'd like
I'd like to be able to write

ILogger log = new LoggerConfiguration()
  .MinimumLevel.Verbose()
  .WriteTo.Http(new FluentdSink("http://my.fluent.d:9880"))
  .CreateLogger();

and have events written there.

Describe alternatives you've considered
The existing Fluentd sink package is unusable and buggy. The most urgent concern is that it escapes everything with quotes, so I'm unable to execute range and other non-text queries.

Additional context

Here is a link to fluentd http format documentation: https://docs.fluentd.org/input/http#basic-usage

Suggestion: Default durable files naming

By default durable buffer files are named "Buffer-{Date}.json" and "Buffer.bookmark"
I see two issues with that:

  1. When using multiple loggers, only one logger (the one which acquired the lock on the file) works, because of the file name collision. This is not immediately apparent, and digging into the Serilog SelfLog is needed for first-timers. Not sure if this is solvable, but at least add a warning to the readme.
  2. Consider using "Buffer-{Date}.log" and "Buffer.bookmark.log" default names instead. This would be compatible with most gitignore files.

I can change names to my liking, but it would be nice to have strong defaults.

Is it possible to set (batch) formatter in configuration file?

Hello, I tried to set the batch formatter in the config file but I get the exception
'Invalid cast from 'System.String' to 'Serilog.Sinks.Http.IBatchFormatter''

My configuration file looks like this:

{
    "Name": "DurableHttpUsingFileSizeRolledBuffers",
    "Args": {
        "requestUri": "http://localhost:31311",
        "batchFormatter": "ArrayBatchFormatter"
    }
}

Is there any way to do this, or is the only way to configure the logger in code?

Thanks
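For what it's worth, Serilog.Settings.Configuration can generally bind interface-typed arguments when given an assembly-qualified type name of a class with a default constructor, rather than a short name. A sketch (assuming the formatter's full type name):

{
    "Name": "DurableHttpUsingFileSizeRolledBuffers",
    "Args": {
        "requestUri": "http://localhost:31311",
        "batchFormatter": "Serilog.Sinks.Http.BatchFormatters.ArrayBatchFormatter, Serilog.Sinks.Http"
    }
}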

Add net461 target

Recommendation from the cross-platform targeting guidelines.

"CONSIDER adding a target for net461 when you're offering a netstandard2.0 target.

Using .NET Standard 2.0 from .NET Framework has some issues that were addressed in .NET Framework 4.7.2. You can improve the experience for developers that are still on .NET Framework 4.6.1 - 4.7.1 by offering them a binary that is built for .NET Framework 4.6.1."

Headers for (Durable)Http sinks

Is your feature request related to a problem? Please describe.
I can't find a way to add headers to a (Durable)Http sink. Is there a way that I've just missed? I need to add an Authorization header.

Describe the solution you'd like
It sounds like this might be a similar type of issue: serilog-contrib/serilog-sinks-elasticsearch#87. Better for me would be another ctor parameter like "headers" that accepts a delimited list of headers and values. Even simpler, an "authorization" parameter that accepts a string.

Describe alternatives you've considered
Use a custom HttpClient?

Durable File Sink stops triggering timer

First of all I would like to say what a fantastic library this is. We were using the durable file sink to buffer logs on disk and send them to Azure Log Analytics via the HTTP Data Collector API. It was all working great until I recently started to notice a weird behavior where the bookmark stops updating and the timer stops triggering the HTTP call that sends the log events to Azure Log Analytics. What's weird is the timer usually stops triggering at a specific time, around 12:01 AM. The file events still continue getting logged to disk, but the bookmark stops getting updated. If I delete the logs and restart the application, the timer starts to work, the bookmark gets updated, and log events continue being sent to Azure Log Analytics until around 12:01 AM, when the bookmark stops updating and the timer stops triggering again.

It's difficult to reproduce the issue locally, as this happens when we are logging events continuously throughout the day, and the timer usually stops triggering around 12:00 AM. Do you have any pointers as to what could cause this? Any help would be greatly appreciated.

Thank you!

Getting Properties on elk

Hi, I'm trying to use Serilog with the ELK stack with the help of your project.

I'm using Asp.Net Core 2.0.0-preview2-final

Here is how I use it:

ILogger log = new LoggerConfiguration()
  .Enrich.WithProperty("CorrelationId", Guid.NewGuid().ToString())
  .Enrich.FromLogContext()
  .MinimumLevel.Verbose()
  .WriteTo.DurableHttp("http://localhost:31311") // logstash uri
  .CreateLogger()
  .ForContext<httpsink.Program>();
When I try to log like below, the properties are not parsed in the Kibana UI:
log.Information("The {user} has access to {resource}", "cengiz", "some resource info");
Here is the result (screenshot).

When I log the same message directly to Elasticsearch with serilog.sinks.elasticsearch instead of Logstash, it produces the fields (screenshot).

Do you have any suggestions?
By the way, when sending batched logs, all items are sent as one message, as a string. Is there a way to prevent this?
Thanks in advance.

Best practices for sending "debug data objects" to the ELK stack

This is not strictly Serilog.Sinks.Http related, but many use this sink for writing to ELK, so this might be a good place for discussion.

Step 1: Describe your environment

ELK stack + Serilog with the Http sink

Step 2: Describe the problem

Elastic rejects documents when a property has a different structure than in an already indexed document.

Steps to reproduce:

logger.Information("MsgTest1 {@payload}", new int[] { 1, 2, 3 });
logger.Information("MsgTest2 {@payload}", new {Foo = new int[] { 1 }});

Observed results

Elasticsearch fails to save the second event document, because the already indexed Properties.Payload type differs from the second one.
Elastic error: "mapper [Properties.Payload] of different type, current_type [long], merged_type [ObjectMapper]"

This is a serious issue when using a vanilla ELK stack + Serilog.Sinks.Http across multiple developers/services.
One developer logs something using a generic parameter name, then a second developer unknowingly tries to use the same name but a different object type, and their messages are not logged (almost silently dropped).

Possible solutions, as I see them:

  1. Add this when configuring Serilog (see the sketch after this list): .Destructure.ByTransformingWhere<object>((t)=>true, (x)=>JsonConvert.SerializeObject(x, Newtonsoft.Json.Formatting.Indented))
  2. Add a Logstash filter which stringifies the whole [Properties] object, something like this: json_encode { source => "[Properties]" }
  3. Decide on a rule to always serialize complex anonymous objects yourself before passing them to Serilog.
  4. Write a custom formatter for the Http sink which would serialize all properties to strings (unstructuring them).
  5. Extend the Http sink to accept a property transformer. The TextFormatter would be left as is, but the Http sink would use the property formatter for serializing properties. This could look like Serilog's ByTransformingWhere. Doing so would allow having a custom properties format for the Http sink, but keep the defaults for other sinks and message rendering.
  6. Extend the Http sink to accept a simple bool parameter StringifyProperties. Or something more flexible: Func<object, string> ParamsStringifier.
  7. Write an Elastic dynamic template which treats everything in [Properties] as a string, like described here: http://blog.florian-hopf.de/2016/05/stringify-everything-elasticsearch.html
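As a sketch of option 1 (assuming Newtonsoft.Json is referenced; the endpoint URL is hypothetical), destructuring every complex value into a pre-serialized string could look like this:

using Newtonsoft.Json;
using Serilog;

var logger = new LoggerConfiguration()
    // Stringify every destructured object so Elastic always indexes a string property.
    .Destructure.ByTransformingWhere<object>(
        type => true,
        value => JsonConvert.SerializeObject(value, Formatting.Indented))
    .WriteTo.Http(requestUri: "http://localhost:31311", queueLimitBytes: null)
    .CreateLogger();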

Is there the "force send all logs from buffer" feature?

Hello! First of all, thank you for this library, I really like to use it.

I'm sorry if it's not the right place to ask my question.
Here it is. I have a desktop application that connects to an external API and sends logs to it.
So I want to handle a situation where I need to be sure that all logs from the buffer have been sent, and only then shut down the desktop app.
I checked the repository code but didn't find anything similar. So, is there a feature like this or should I look for a workaround?
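For reference, there is no explicit "force send" method on the sink, but disposing the logger flushes queued batches; with the static Log class that means calling Log.CloseAndFlush() on shutdown. A minimal sketch, assuming the non-durable HTTP sink:

using Serilog;

Log.Logger = new LoggerConfiguration()
    .WriteTo.Http(requestUri: "https://www.mylogs.com", queueLimitBytes: null)
    .CreateLogger();

Log.Information("Shutting down");

// Blocks until the sinks have been disposed, which flushes pending log events.
Log.CloseAndFlush();

Note that with the durable sinks, events still buffered on disk at exit are sent when the application next starts.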

Recommended for Logstash filter config (in ELK)

Hi, thank you very much for this package, we started to use it.

As you know, this package batches log events and sends them as a single log request to Logstash.
The logs are batched as a JSON array, serialized as a string and put into the "message" field of the HTTP batch log request to Logstash. Logstash then sends this request on to Elasticsearch, etc.

The problem is, Kibana/Elastic see these log events in the "message" field as just simple text.

So we tried some Logstash filters and split these batched log events into separate logs with the following Logstash configuration.

I just wanted to share our findings with anyone who may have had the same problem and an interest in splitting batched logs into separate logs in Logstash.

input {
        tcp {
                port => 5000
        }
        http {
                port => 8080
        }
}

filter {
        json {
                source => "message"
        }
        mutate {
                update => { "message" => "..." }
        }
        split {
                field => "events"
        }
}

output {
        elasticsearch {
                hosts => "elasticsearch:9200"
        }
}

Best regards.

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Error type: Cannot find preset's package (github>FantasticFiasco/renovate-config)

Exception formatting/destructuring

I see that in both NormalTextFormatter and CompactTextFormatter, logEvent.Exception.ToString() is used to provide the exception info. Currently I'm using Serilog.Exceptions, which provides detailed exception info in Serilog properties including stack traces, and therefore there are two stack traces in my log.

What about providing an IExceptionFormatter, or can we use Serilog destructuring directly to format exceptions?

Sample here:

{
  "_index": "logstash-2018.01.18",
  "_type": "logs.http",
  "_id": "AWEHGZtESwX53LwBstQt",
  "_version": 1,
  "_score": null,
  "_source": {
    "MessageTemplate": "Exception",
    "@timestamp": "2018-01-18T02:29:34.135Z",
    "RenderedMessage": "Exception",
    "@version": "1",
    "host": "192.168.0.195",
    "Level": "Warning",
    "type": "logs.http",
    "Properties": {
      "App": "Console",
      "ExceptionDetail": {
        "Type": "System.ArgumentNullException",
        "Message": "Value cannot be null.",
        "StackTrace": "   at Logging.Samples.Cases.LoggerCase.ThrowException() in D:\\Projects\\Logging\\Logging.Samples.Cases\\AllCases.cs:line 66\r\n   at Logging.Samples.Cases.LoggerCase.Test() in D:\\Projects\\Logging\\Logging.Samples.Cases\\AllCases.cs:line 55",
        "HResult": -2147467261,
        "TargetSite": "Void ThrowException()",
        "Source": "Logging.Samples.Cases",
        "ParamName": null
      },
      "ExceptionMessage": "Value cannot be null.",
      "SourceContext": "Logging.Samples.Console.Program"
    },
    "Timestamp": "2018-01-18T10:29:32.4287097+08:00",
    "Exception": "System.ArgumentNullException: Value cannot be null.\r\n   at Logging.Samples.Cases.LoggerCase.ThrowException() in D:\\Projects\\Logging\\Logging.Samples.Cases\\AllCases.cs:line 66\r\n   at Logging.Samples.Cases.LoggerCase.Test() in D:\\Projects\\Logging\\Logging.Samples.Cases\\AllCases.cs:line 55"
  },
  "fields": {
    "@timestamp": [
      1516242574135
    ]
  },
  "highlight": {
    "Properties.App": [
      "@kibana-highlighted-field@Console@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1516242574135
  ]
}

Durable file size rolled HTTP sink not sinking to API properly

I am trying to implement the durable file size rolled HTTP sink in my .NET Core API.
Below is the configuration I have done:
"Serilog": {
"MinimumLevel": "Verbose",
"WriteTo": [
{
"Name": "DurableHttpUsingFileSizeRolledBuffers",
"Args": {
"requestUri": "http://localhost:19709/api/LogEvents/log-events",
"httpClient": "Order.API.LogSink.CustomHttpSink, Order.API",
"bufferBaseFileName": "C:\Temp\serilog-configuration-.json",
"bufferFileSizeLimitBytes": null,
"retainedBufferFileCountLimit": 31,
"batchPostingLimit": 100,
"period": "00:00:00.001"
}
}
]
}
I am trying to log 100 times in my API, but it is sending only 2 logs to my request URI, and it creates buffer files in my temp folder (screenshot).

But the bookmark file only ever contains the first file's information,
575:::C:\Temp\serilog-configuration.json-20190513.json
and only that file's logs are sent.

Can someone help me out with the proper configuration so that all the information is sent to my endpoint API?

Support for fewer rolling files, and shorter time window

We're logging audit data at a high rate. The RollingFileSink in the DurableHttpSink is creating a new log file once a day, and keeping 31 logs. We'd like to move to a model where log files are created every 30 minutes, and only the last three are retained. Do you think this could be made a configuration option? Should I submit a pull request?

Configuring sink via LoggerConfiguration.ReadFrom.Configuration()

I'm probably doing something wrong here, but I cannot get the Http sink to work when configuring via the appSettings route. Maybe this is simply not supported?

Step 1: Describe your environment

.NET Core

Step 2: Describe the problem

Steps to reproduce:

  1. Configure the sink via IConfigurationRoot:
    "Serilog": {
        "MinimumLevel": "Debug",
        "Using": [ "Serilog.Sinks.Http" ],
        "WriteTo": [
            {
                "Name": "Http",
                "Args": { "requestUri": "http://127.0.0.1:38421/" }
            }
        ]
    }
  2. Feed the above config to (new LoggerConfiguration().ReadFrom.Configuration(...))
  3. Run

Observed results

  • The Http sink is not present in the sinks collection and I receive no sort of error to troubleshoot
  • Same configuration and code works with other sinks such as LiterateConsole
  • The Http sink DOES work if I configure it via .WriteTo.Http(...), but we need per-environment configuration and would like it to work like the other sinks

Expected results

  • Expected configuration above to result in a Serilog.Core.Logger instance with an Http sink instance in the _sinks property.

Possible additional functionality

Is it possible to make the library create its own instance of the IHttpClient?
You could take a string naming the concrete implementation and use reflection to find and instantiate the client.
This way, you can allow users to write their own IHttpClient implementation and use
https://github.com/serilog/serilog-settings-configuration
ReadFrom.Configuration(configuration)

As it stands, I want to write to a Debug sink in development but write to the Http sink in production, and I also need to create my own HttpClient because I need to specify authentication.
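A sketch of the reflection approach (the type name and class are hypothetical; Type.GetType needs an assembly-qualified name for types outside the current assembly):

using System;
using Serilog.Sinks.Http;

// The type name would come from configuration, e.g. appsettings.json.
string httpClientTypeName = "My.App.AuthHttpClient, My.App"; // hypothetical IHttpClient implementation

var httpClient = (IHttpClient)Activator.CreateInstance(
    Type.GetType(httpClientTypeName, throwOnError: true));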

Batching doesn't seem to work that well

Step 1: Describe your environment

  • Windows 10
  • Visual Studio 2017
  • .NET Core 2.1

Step 2: Describe the problem

The Durable Http Batching doesn't seem to respect the batch size.

Steps to reproduce:

  1. Create a .NET Core 2.1 ASP.NET Web API project, and replace the default ValuesController with this:
   [Route("api/[controller]")]
    [ApiController]
    public class EventsController : ControllerBase
    {
        static List<int> _counts = new List<int>();

        // GET api/events
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return _counts.Select(x=>x.ToString());
        }

        public IActionResult Post([FromBody] EventBatchRequestDto batch)
        {
            _counts.Add(batch.Events.Count());
            return Ok();
        }
    }

    public class EventBatchRequestDto
    {
        public IEnumerable<EventDto> Events { get; set; }
    }


    public class EventDto
    {
        public DateTime Timestamp { get; set; }
        public String Level { get; set; }
        public String MessageTemplate { get; set; }
        public String RenderedMessage { get; set; }
        public String Exception { get; set; }
        public Dictionary<String, dynamic> Properties { get; set; }
        public Dictionary<String, RenderingDto[]> Renderings { get; set; }
    }


    public class RenderingDto
    {
        public String Format { get; set; }
        public String Rendering { get; set; }
        public override Boolean Equals(Object obj)
        {
            if (!(obj is RenderingDto other))
                return false;

            return
                Format == other.Format &&
                Rendering == other.Rendering;
        }

        public override Int32 GetHashCode()
        {
            return 0;
        }
    }
  2. Create a new .NET Core 2.1 console app, and make sure you have these NuGet packages:
<PackageReference Include="Bogus" Version="24.1.0" />
<PackageReference Include="Serilog.Sinks.Console" Version="3.1.1" />
<PackageReference Include="Serilog.Sinks.Http" Version="5.0.1" />

This is the console app's Program.cs class. MAKE SURE YOU CHANGE THE HTTP ENDPOINT TO YOUR OWN ONE.

using System;
using System.Net.Http;
using System.Threading;
using Serilog.Sinks.Http.BatchFormatters;

namespace Serilog.Http.Tester
{
    class Program
    {
        static void Main(string[] args)
        {
            Random rand = new Random(5000);

            ILogger logger = new LoggerConfiguration()
                .MinimumLevel.Verbose()
                .WriteTo.DurableHttp(
                    requestUri: "http://localhost:52603/api/events",
                    batchPostingLimit:10,
                    batchFormatter: new DefaultBatchFormatter(),
                    httpClient: new SerilogHttpSinkHttpClientWrapper(new HttpClient(new HttpClientHandler
                        {
                            ClientCertificateOptions = ClientCertificateOption.Manual,
                            ServerCertificateCustomValidationCallback = (_, __, ___, ____) => true
                        }),
                        true)
                )
                .WriteTo.Console()
                .CreateLogger()
                .ForContext<Program>();

            var customerGenerator = new CustomerGenerator();
            var orderGenerator = new OrderGenerator();
            int i = 0;
            while (true)
            {
                var customer = customerGenerator.Generate();
                var order = orderGenerator.Generate();

                logger.Information("{@customer} placed {@order}", customer, order);
                i++;
                Console.WriteLine($"Sent {i} events");
                Thread.Sleep(rand.Next(0,1000));
            }

        }
    }
}
  3. Run both of these together in Visual Studio

Observed results

So leave it running for a while, then in Chrome hit the GET endpoint URI (which I am using just to see the batch size that was seen in the previous POST endpoint calls from this Serilog sink); for me this is http://localhost:52603/api/events

Expected results

I expected to see that the messages were batched according to the batchPostingLimit that I have in the console app code above, which is 10.

But instead I see output like this (screenshot), with me sending 29 log messages, randomly sent to the Serilog sink with a 0-1 s delay between them.

Is there some windowing feature at play with the batchPostingLimit?

Even if I leave it off entirely, where the default should be 1000, I get the same sort of results (screenshot).

Rolling file sink question

So this is more of a question than an issue, but we could really do with an answer to it. The question is this:

DurableHttpSink use of Rolling File Sink

So I know the DurableHttpSink uses RollingFileSink internally. I just wanted to know whether the DurableHttpSink is intelligent enough to only write log entries to the rolling file sink when there is no connectivity to the HTTP endpoint, or does it just write every log entry it has seen to the rolling file sink files no matter what the state of the HTTP endpoint is?

I mean, based on these lines in the DurableHttpSink source code, I'm pretty sure the answer is that ALL items get written to disk no matter what.

/// <summary>
/// Initializes a new instance of the <see cref="DurableHttpSink"/> class.
/// </summary>
public DurableHttpSink(
	string requestUri,
	string bufferPathFormat,
	long? bufferFileSizeLimitBytes,
	int? retainedBufferFileCountLimit,
	int batchPostingLimit,
	TimeSpan period,
	ITextFormatter textFormatter,
	IBatchFormatter batchFormatter,
	IHttpClient client)
{
	if (bufferFileSizeLimitBytes.HasValue && bufferFileSizeLimitBytes < 0)
		throw new ArgumentOutOfRangeException(nameof(bufferFileSizeLimitBytes), "Negative value provided; file size limit must be non-negative.");

	shipper = new HttpLogShipper(
		client,
		requestUri,
		bufferPathFormat,
		batchPostingLimit,
		period,
		batchFormatter);

	sink = new RollingFileSink(
		bufferPathFormat,
		textFormatter,
		bufferFileSizeLimitBytes,
		retainedBufferFileCountLimit,
		Encoding.UTF8);
}

/// <summary>
/// Emit the provided log event to the sink.
/// </summary>
public void Emit(LogEvent logEvent) =>
	sink.Emit(logEvent);

Is there no possibility to change how this works, such that if the HTTP endpoint is alive and well, nothing is written to the rolling sink buffer files? The HttpLogShipper could ultimately decide whether the endpoint is healthy, and ONLY when it isn't should it start storing stuff in the rolling sink buffer files.

I would love to know your thoughts on this

My own thoughts are that it would drastically reduce the size of the files maintained on disk.

Followup to Authorization Header question

I'm having a problem with an HTTP sink. I've created a custom authenticated HTTP client as described in the wiki. It looks like this:

public class AuthorizedHttpClient : IHttpClient
{
	private readonly HttpClient client;

	public AuthorizedHttpClient(string authCode)
	{
		client = new HttpClient();
		var authHeader = new AuthenticationHeaderValue("Basic", authCode);
		client.DefaultRequestHeaders.Authorization = authHeader;
	}

	public Task<HttpResponseMessage> PostAsync(string requestUri, HttpContent content)
	{
		return client.PostAsync(requestUri, content);
	}

	public void Dispose() => client.Dispose();

}

The logger is created like this:

		logger = new LoggerConfiguration()
			.WriteTo.DurableHttp(
				requestUri: myURL,
				textFormatter: new NormalTextFormatter(),
				httpClient: new AuthorizedHttpClient(authCode))
			.CreateLogger();

If I replace DurableHttp with Console (and an output template), it works as expected. But when I use DurableHttp, I get nothing. I checked the URL and authCode separately with a WebRequest set to POST and application/json; they are correct.

Shouldn't Serilog at some point call PostAsync? It never does. Any idea what I might be doing wrong?

Provide the ability to prevent sending properties that appear in the template

Serilog's console output template has this:
{Properties} - All event property values that don't appear elsewhere in the output.
We need this functionality in the Http sink, to prevent sending unneeded properties used only for message formatting.
This would prevent flooding recipients with redundant data.

Example usage:

var logger = new LoggerConfiguration()
  .WriteTo.Http("http://url", sendOnlyUnusedProperties: true)
  .CreateLogger();

// This would send only otherData over HTTP; id would not be sent because it is in the message itself
logger.Information("Received: {id}", id, otherData);

'Timestamp' field does not display UTC time correctly

Describe the bug
When I use this sink, and examine the json object returned by Kibana, I see that the Timestamp field does not include correct timezone information.
For example, the current value is:
"Timestamp": "2019-02-26T08:00:07.0552471+00:00",
However, I am in time zone UTC+2, so I would expect the value to be:
"Timestamp": "2019-02-26T08:00:07.0552471+02:00",
To Reproduce
Steps to reproduce the behavior:
Just examine the Timestamp value and check whether the time is incorrect.

Expected behavior
Timestamp should contain correct UTC time.
This is very important when I have a few services with different timezones writing to the same log. If I want to sort by the Timestamp, I get an incorrect order.

Any way around this limitation?

Thanks,
Gil

How to change naming conventions of default properties

Environment

Windows Server 2012

Problem

Unable to change the default property naming conventions (e.g. MessageTemplate to messageTemplate)

Steps to reproduce:

Pushing an info log into Elastic using the Http sink results in PascalCase notation for the default Serilog properties.

Observed results

"events": [
{
"MessageTemplate": "Received {@activity_log}",
"Level": "Information",
"Properties": {
"activityLog": {
"dateUtc": "0001-01-01T00:00:00.0000000",
"application": "bentleyCpv",
"_typeTag": "ActivityLogModel",
"eventDescription": null,
"ipAddress": null
}
},
"Timestamp": "2018-01-22T07:50:11.3241344-05:00"
}
]

Expected results

"events": [
{
"messageTemplate": "Received {@activity_log}",
"level": "Information",
"properties": {
"activity_log": {
"dateUtc": "0001-01-01T00:00:00.0000000",
"application": "bentleyCpv",
"_typeTag": "ActivityLogModel",
"eventDescription": null,
"ipAddress": null
}
},
"timestamp": "2018-01-22T07:50:11.3241344-05:00"
}
]

DurableHttpUsingFileSizeRolledBuffers silently fails when write no permissions on buffer file.

Describe the bug
DurableHttpUsingFileSizeRolledBuffers silently fails when the application doesn't have permission to create or write to the buffer file.

To Reproduce
Create a DurableHttpUsingFileSizeRolledBuffers with just the requestUri

Expected behavior
An exception should be thrown.

Additional context
Discovered this issue when deploying applications using "enterprise" Docker containers, where the default container user has limited permissions to the local file system.

Just a heads up. Thank you for the work in producing this.

How to get rid of the charset in the Content-Type header when posting requests to Logstash using the HTTP sink

Step 1: Describe your environment

  • Windows Server 2012

Step 2: Describe the problem

When sending a POST request to Logstash using the HTTP sink, the data is saved as a string in the message field, not as an object. This is happening because of the charset presence in the Content-Type header.

Observed results

By default, when Serilog sends a POST request to Logstash, it adds "Content-Type: application/json; charset=utf-8" to the request header. How can I send just "Content-Type: application/json" without the charset?

Expected results

Need to get rid of the charset in the Content-Type header value when posting requests to Logstash.
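One workaround (a sketch, not an official feature of the sink) is to wrap the client in a custom IHttpClient that clears the charset parameter before posting:

using System.Net.Http;
using System.Threading.Tasks;
using Serilog.Sinks.Http;

public class NoCharsetHttpClient : IHttpClient
{
    private readonly HttpClient client = new HttpClient();

    public Task<HttpResponseMessage> PostAsync(string requestUri, HttpContent content)
    {
        // Drop the charset parameter so only "Content-Type: application/json" is sent.
        content.Headers.ContentType.CharSet = null;
        return client.PostAsync(requestUri, content);
    }

    public void Dispose() => client.Dispose();
}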

Trigger batch send off of max byte count

Is your feature request related to a problem? Please describe.
Currently, the HTTP service we're hitting has a 1 MB request body limit. The formatter has an option for eventBodyLimitBytes, but hitting that leads to dropped messages. As a result, we have to make a best guess at a batchPostingLimit that is low enough to prevent dropped messages. Event sizes vary widely, so picking a magic number here isn't ideal.

Describe the solution you'd like
Ideally there would be a mechanism to specify the maximum size sent in a single POST request. At the time the limit is met, a POST is made (i.e. the message size would be a trigger similar to batchPostingLimit and period).

Describe alternatives you've considered
We have tried reducing the batch posting limit; however, to reduce it enough to ensure messages don't get dropped, we'd have to base the calculation on the largest event sent (16 KB), which would be inefficient in times when more typical events are sent (around 300 bytes).

Custom Formatting

I am using the Serilog HTTP sink (version 5.0.1) for logging to Logstash in my .NET Core project. In Startup.cs I have the following code to enable Serilog:

Log.Logger = new LoggerConfiguration()
  .Enrich.FromLogContext()
  .WriteTo.Http("http://mylogstashhost.com:5000")
  .Enrich.WithProperty("user", "xxx")
  .Enrich.WithProperty("serviceName", "yyy")
  .MinimumLevel.Warning()
  .CreateLogger();

My events are being sent in the following format:

{"events":[{"Timestamp":"2018-10-19T18:16:27.6561159+01:00","Level":"Warning","MessageTemplate":"abc","RenderedMessage":"abc","user":"xxx","serviceName":"yyy","Properties":{"ActionId":"b313b8ed-0baf-4d75-a6e2-f0dbcb941f67","ActionName":"MyProject.Controllers.HomeController.Index","RequestId":"0HLHLQMV1EBCJ:00000003","RequestPath":"/"}}]}

But it seems the receiving Logstash endpoint does not accept such a format, and I cannot see my logs in Kibana. When I send it in the following format, all is OK:

{"Timestamp":"2018-10-19T18:16:27.6561159+01:00","Level":"Warning","MessageTemplate":"abc","RenderedMessage":"abc","user":"xxx","serviceName":"yyy","Properties":{"ActionId":"b313b8ed-0baf-4d75-a6e2-f0dbcb941f67","ActionName":"MyProject.Controllers.HomeController.Index" ,"RequestId":"0HLHLQMV1EBCJ:00000003","RequestPath":"/"}}

So Logstash doesn't like the events being wrapped in "events": [...], and it also wants the "user" and "serviceName" tags outside of "Properties". I read posts about custom formatting and tried to use CompactJsonFormatter and JsonFormatter, but none of them gave me what I wanted. Is there a way to achieve this?

.NET Core 2.0

With the official release of .NET Core 2.0, release a new version of this sink.

[Question] How to compile MessageTemplate + Properties obtained from the HTTP sink?

I'm trying to develop a logging server for my company using your HTTP sink, plugged into ASP.NET Core MVC.

The sink sends the following POST:

{
  "events": [
    {
      "Timestamp": "2017-03-07T02:07:34.8287216+07:00",
      "Level": "Information",
      "MessageTemplate": "Executing ViewResult, running view at path {Path}.",
      "Properties": {
        "Path": "/Views/Home/Index.cshtml",
        "EventId": { "Id": 1 },
        "SourceContext": "Microsoft.AspNetCore.Mvc.ViewFeatures.Internal.ViewResultExecutor",
        "ActionId": "606648ce-b9fc-4811-8e5d-9d15b5e037c8",
        "ActionName": "Sample.Controllers.HomeController.Index (Sample)",
        "RequestId": "0HL34OG547AUO",
        "RequestPath": "/"
      }
    },
    {
      "Timestamp": "2017-03-07T02:07:34.8718259+07:00",
      "Level": "Information",
      "MessageTemplate": "User profile is available. Using '{FullName}' as key repository and Windows DPAPI to encrypt keys at rest.",
      "Properties": {
        "FullName": "C:\\Users\\Ryan\\AppData\\Local\\ASP.NET\\DataProtection-Keys",
        "SourceContext": "Microsoft.Extensions.DependencyInjection.DataProtectionServices",
        "ActionId": "606648ce-b9fc-4811-8e5d-9d15b5e037c8",
        "ActionName": "Sample.Controllers.HomeController.Index (Sample)",
        "RequestId": "0HL34OG547AUO",
        "RequestPath": "/"
      }
    },
    {
      "Timestamp": "2017-03-07T02:07:35.0372262+07:00",
      "Level": "Information",
      "MessageTemplate": "Executed action {ActionName} in {ElapsedMilliseconds}ms",
      "Properties": {
        "ActionName": "Sample.Controllers.HomeController.Index (Sample)",
        "ElapsedMilliseconds": 3271.4887000000003,
        "EventId": { "Id": 2 },
        "SourceContext": "Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker",
        "ActionId": "606648ce-b9fc-4811-8e5d-9d15b5e037c8",
        "RequestId": "0HL34OG547AUO",
        "RequestPath": "/"
      }
    },
    {
      "Timestamp": "2017-03-07T02:07:35.0467514+07:00",
      "Level": "Information",
      "MessageTemplate": "{HostingRequestFinished:l}",
      "Properties": {
        "ElapsedMilliseconds": 3598.5328,
        "StatusCode": 200,
        "ContentType": "text/html; charset=utf-8",
        "HostingRequestFinished": "Request finished in 3598.5328ms 200 text/html; charset=utf-8",
        "EventId": { "Id": 2 },
        "SourceContext": "Microsoft.AspNetCore.Hosting.Internal.WebHost",
        "RequestId": "0HL34OG547AUO",
        "RequestPath": "/"
      },
      "Renderings": {
        "HostingRequestFinished": [
          {
            "Format": "l",
            "Rendering": "Request finished in 3598.5328ms 200 text/html; charset=utf-8"
          }
        ]
      }
    }
  ]
}

Which is cool. However, I'm stuck on the process of converting the MessageTemplate + Properties into a compiled message.

Are there any instructions for doing so? Thank you in advance.
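For reference, Serilog's own parsing types can rebuild the rendered message on the receiving side. A rough sketch (property values are simplified to scalars here; real payloads also contain structured values and renderings):

using System.Collections.Generic;
using Serilog.Events;
using Serilog.Parsing;

var parser = new MessageTemplateParser();
var template = parser.Parse("Executed action {ActionName} in {ElapsedMilliseconds}ms");

// Values taken from the "Properties" object of the posted event.
var properties = new Dictionary<string, LogEventPropertyValue>
{
    ["ActionName"] = new ScalarValue("Sample.Controllers.HomeController.Index (Sample)"),
    ["ElapsedMilliseconds"] = new ScalarValue(3271.4887)
};

// Renders the template with the supplied property values.
string rendered = template.Render(properties);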

Configurable requestUri in HttpLogShipper

What I want is a configurable/changeable requestUri field; currently it is defined once and not configurable after the application starts. Consider the case where the URL is configured and synced from a database or remote config center, or where different events need to be sent to different Logstash forwarders.

What I have been doing recently is creating my own HttpClientWrapper, which requires a Func<string> _urlProvider and ignores the requestUri from the invoker, but it is not quite reasonable to define the URL in the IHttpClient, and it doesn't meet the second scenario above.

Please consider providing a Func<LogEvent, string> URI getter in HttpLogShipper.

A sample of my HttpClientWrapper:

        internal class HttpRequester : IHttpClient
        {
            private readonly Func<string> _urlProvider;
            private readonly HttpClient _client;

            public HttpRequester(Func<string> urlProvider)
            {
                _urlProvider = urlProvider;
                _client = new HttpClient();
            }

            public Task<HttpResponseMessage> PostAsync(string requestUri, HttpContent content)
            {
                return _client.PostAsync(_urlProvider.Invoke(), content);
            }

            public void Dispose()
            {
                _client.Dispose();
            }
        }

DurableHttpSink sometimes splits log events in wrong place

It looks like DurableHttpSink has a multithreading issue.
We have a netcoreapp2.0 service under quite a big load, and it is using DurableHttpSink for sending logs to Logstash.

Sometimes events are split somewhere in the middle, and Logstash is unable to deserialize them as JSON.

The Serilog sink is configured using a configuration file as:

      {
        "Name": "DurableHttp",
        "Args": {
          "requestUri": "http://logstash:30000/DEV/Service",
          "bufferPathFormat": "C:/Logs/Service/Buffer-{Hour}.json"
        }
      }

Service is running under Windows Server behind IIS.
Sometimes we have logged events which end in the middle and are continued in the next Logstash event.
For example, one event ends like:

{"events": [{"Timestamp": "2018-09-10T05:01:32.9541105+02:00","Level": "Warning","MessageTemplate": "Using old values for service {serviceName}","RenderedMessage": "Using old values for service \"service\"","Properties": {"serviceName": "service","SourceContext": "ServiceMonitor","RequestId": "0HLGECVV4D9FE:00003261","RequestHeaders": {"Machine]}

and next event:

{"events": [Name": "srv01", "ThreadId": 47 }]}

As you can see, the JSON in both is invalid, and the log event is simply split in two.
We get 2 or 3 invalid events per day.

To me it looks like reading the buffer file before sending does not check whether the event has been fully written; the file read happens at the same time as a write, and it somehow manages to read half an event.

Using: Serilog.Sinks.Http Version=4.2.1

Safe default for HttpClient

Is your feature request related to a problem? Please describe.
As stated in the Microsoft docs:

HttpClient is intended to be instantiated once and re-used throughout the life of an application. Instantiating an HttpClient class for every request will exhaust the number of sockets available under heavy loads. This will result in SocketException errors. Below is an example using HttpClient correctly.

Describe the solution you'd like
Use IHttpClientFactory to implement the default HttpClient, which is only available through DI using services.AddHttpClient (for .NET Core)

Describe alternatives you've considered
Using a lighter version of HttpClientFactory that does not require DI

Additional context
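A sketch of what this could look like from the consuming side today, wrapping a factory-created client in the sink's IHttpClient abstraction (the class and client name are hypothetical):

using System.Net.Http;
using System.Threading.Tasks;
using Serilog.Sinks.Http;

public class FactoryBackedHttpClient : IHttpClient
{
    private readonly IHttpClientFactory factory;

    public FactoryBackedHttpClient(IHttpClientFactory factory) => this.factory = factory;

    public Task<HttpResponseMessage> PostAsync(string requestUri, HttpContent content) =>
        // CreateClient hands out clients over pooled handlers, avoiding socket exhaustion.
        factory.CreateClient("serilog").PostAsync(requestUri, content);

    public void Dispose()
    {
        // Handler lifetime is managed by the factory; nothing to dispose here.
    }
}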

Logging without batching

I'd like to be able to send HTTP events without batching them in an array. I set batchPostingLimit: 1, but that just gave me an array of one event. I created my own IBatchFormatter by cloning DefaultBatchFormatter and removing the "events" array. LogEvent doesn't appear to override ToString, and I can't find an extension method, so output.Write(logEvent) just writes the name of the class into the stream.

What can I do to resolve this?

HttpLogShipper starts hammering the target after a certain number of errors

If sending log messages fails for prolonged periods of time (e.g. due to a wrong configuration or the target system being unreachable), the log shipper will start hammering the target every 2 seconds (in the default configuration).

This is due to an overflow in ExponentialBackoffConnectionSchedule#NextInterval, where the cast in line 61 will overflow. In the default configuration with a period of 2 seconds, backoffPeriod will be 50000000 ticks (as per the minimum backoff period of 5 seconds). When the number of errors reaches 39, the backoffFactor becomes 2^38. The expression var backedOff = (long)(backoffPeriod * backoffFactor) becomes (long)(2^38 * 50000000), which is greater than (long)(2^38 * 2^25) (50000000 being > 2^25), i.e. greater than 2^63. The cast to long overflows and gives -9223372036854775808. Following through, line 67 results in the actual backoff being the base period (2 seconds in the default configuration). From that point on, this happens every time NextInterval.get is called.

To Reproduce

  1. Configure the HTTP sink to log to an invalid target (e.g. a computer without a server running).
  2. Emit a log message and wait for 39 retries (roughly 6 hours, could be shortened by changing the MaximumBackoffInterval to something shorter, e.g. a few seconds).
  3. See HttpLogShipper retrying every 2 seconds (in the default configuration).

Expected behavior
The retransmission timeout should stay capped to 10 minutes, even if the messages cannot be sent for prolonged periods of time (e.g. by capping failuresSinceSuccessfulConnection to something reasonably small, so that the exponential function does not produce excessively large values).
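The overflow is easy to reproduce in isolation; a sketch with the default-configuration values (the behavior of the out-of-range cast is runtime-dependent: it wraps to long.MinValue on .NET Framework, as observed above, and saturates on newer runtimes):

using System;

double backoffPeriod = 50_000_000;       // minimum backoff of 5 seconds, in ticks
double backoffFactor = Math.Pow(2, 38);  // 39 failures since the last successful connection

// 2^38 * 5e7 is larger than 2^63, so the cast overflows,
// collapsing the backoff to the base period.
long backedOff = (long)(backoffPeriod * backoffFactor);
Console.WriteLine(backedOff);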

Add queueLimit parameter

Step 1: Describe your environment

  • Any

Step 2: Describe the problem

HttpSink inherits from PeriodicBatchingSink but doesn't support the queueLimit parameter. It would be great to have the ability to set an upper bound on the queue, to cap maximum memory consumption.

Steps to reproduce:

Observed results

  • HttpSink doesn't support queueLimit parameter.

Expected results

  • HttpSink supports queueLimit parameter.
