
serilog-contrib / serilog-sinks-elasticsearch


A Serilog sink that writes events to Elasticsearch

License: Apache License 2.0

C# 100.00%
serilog-sink elasticsearch serilog looking-for-maintainer

serilog-sinks-elasticsearch's People

Contributors

amirsasson, blachniet, caringdev, dependabot[bot], kanadaj, klettier, konste, leh2, matthewddennis, matthewrwilton, mattlaver, maximrouiller, merbla, mikkelbu, minhnguyendev, mivano, mookid8000, mpdreamz, nblumhardt, nenadvicentic, orjan, oscarmorasu, piaste, rajmondburgaj, saarfroind, sirseven, sstorie, tresoldigiorgio, uvw, vhatsura


serilog-sinks-elasticsearch's Issues

ASP.NET Web Api stops emitting to Elasticsearch

We have an ASP.NET Web API application that uses Serilog with the Elasticsearch sink. It is configured for durability; the buffer file continues to fill, but the push to Elasticsearch freezes shortly after the application starts. The application runs under IIS, and restarting the AppPool flushes the buffer into Elasticsearch. A review of the sink's source indicates that the ElasticSearchLogShipper OnTick() handler handles both persistence to the durable log file and transport to Elasticsearch. It must be firing, but we've been unable to determine why events are not forwarded to Elasticsearch.

Issues with durable sink after upgrade to current stable version

Hi,
I've just upgraded from an older version (of Serilog and the Elasticsearch sink), and whenever I try to use the durable sink I get a "System.MissingMethodException" in "Serilog.Sinks.Elasticsearch.dll":

Method not found: "Void Serilog.Sinks.RollingFile.RollingFileSink..ctor(System.String, Serilog.Formatting.ITextFormatter, System.Nullable`1<Int64>, System.Nullable`1, System.Text.Encoding, Boolean)".

As soon as I remove the BufferBaseFilename from the elasticsearchOptions I no longer get an exception - but of course I'm losing the durability :(

var elasticSearchOptions = new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
{
    AutoRegisterTemplate = true,
    IndexFormat = "some-app-{0:yyyy.MM.dd}",
    BufferBaseFilename = @"e:\some-app-buffer.log",
    BatchPostingLimit = 10000,
    MinimumLogEventLevel = LogEventLevel.Information
};
Logger = new LoggerConfiguration().WriteTo.Elasticsearch(elasticSearchOptions).CreateLogger();

Regards,
Jan

Connection error logs

I'm having trouble getting this to connect to my Elasticsearch instance over HTTPS. The certificate is installed correctly; however, I'm wondering if there is a certificate chain error.

How can I view the output of this sink's failed connection attempts?
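
The sink reports failed connection attempts through Serilog's SelfLog rather than throwing, so enabling SelfLog is the usual way to see them. A minimal sketch:

using System;
using Serilog.Debugging;

// Route the sink's internal diagnostics (failed connections, dropped
// batches, certificate errors) to stderr; any TextWriter works here.
SelfLog.Enable(Console.Error);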

support for Serilog 2

Is it on the cards, and if so, what's the plan for Serilog 2.0 support? We're migrating to the ELK stack, already have a code base on Serilog 2, and would like to stay on that release.

Thanks in advance

How to configure a field to not be analyzed?

Currently, some string fields I'm logging show up in Kibana as analyzed. Example code of how I have it set up:

var loggerConfig = new LoggerConfiguration()
                       .MinimumLevel.Debug()
                       .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://logs:9200"))
                       {
                            AutoRegisterTemplate = true,
                            IndexFormat = "mytool-{0:yyyy.MM.dd}"
                       });
serilog = loggerConfig.CreateLogger();

string value1 = "hello";
string value2 = "world";

serilog.Information("{value1} - {value2}", value1, value2);

How do I make it so that {value2} for example is not analyzed?

Thanks!
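
One approach that works independently of the sink is to register your own index template before logging, mapping the field as not_analyzed. Below is a minimal sketch of an ES 2.x-era template body (registered with PUT /_template/mytool; the template name and index pattern are placeholders), assuming the sink's default layout that nests event properties under fields:

{
  "template": "mytool-*",
  "mappings": {
    "_default_": {
      "properties": {
        "fields": {
          "properties": {
            "value2": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}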

Exceptions as an array breaks Kibana usage

A recent commit, 010af25, changes the behaviour of writing exceptions to ES: instead of an object in the "exception" field, it now writes an array of exceptions in "exceptions".

This breaks Kibana dashboards based on the ES logging, since Kibana doesn't play well with arrays. For instance we can't put exception.message on a dashboard.

While I understand that the reason for the change is probably to avoid deep nesting in the case of inner exceptions, I think it would be beneficial to keep a single exception object in the "exception" field.

I'd love to provide a pull request for this, if we can agree on what behaviour it should have. Perhaps make it configurable, so users can choose between the old and new behaviour?

Question - Possible to use a CustomFormatter *and* app settings?

I was wondering if it's possible to specify a CustomFormatter and use app settings configuration for the Elasticsearch sink.

All of the other options I'm after are available via appSettings, but the CustomFormatter doesn't appear to be. I'm just wondering if there is any way to use a custom formatter while keeping the config within app settings.

Thanks in advance.
-Erik

App/Web Config documentation

Currently there is very little documentation of which properties a user can set in a config file, which makes it frustrating for newcomers to set up the configuration. The documentation should be improved, either by pointing to where that information can be found or by adding the information itself.

ExceptionAsObjectJsonFormatter wrong documentation

Hi guys,

Just a quick one.

This snippet in the documentation is not correct:

new ElasticsearchSink(new ElasticsearchSinkOptions(url)
    {
      CustomFormatter = new ExceptionAsJsonObjectFormatter(renderMessage:true)
    });
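
The formatter that ships with the sink is named ExceptionAsObjectJsonFormatter, so the corrected snippet would be:

new ElasticsearchSink(new ElasticsearchSinkOptions(url)
    {
      CustomFormatter = new ExceptionAsObjectJsonFormatter(renderMessage: true)
    });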

Thanks

How do you guys handle different data types for the same field when logging?

Hi, we have evaluated Elasticsearch a number of times but never really made the jump. I keep running into issues with the field types that are automatically registered when using Elasticsearch for logging.

Field types get registered the first time Serilog sends a field to Elasticsearch, and if you later send that field with another type, the event is not indexed.

We would run into this quite often, I fear, as logging (at least the way we do it) is quite dynamic, and we do not impose strict rules or review on the types of the fields we choose to include in log events.

So I wanted to ask: how are others overcoming this when using Serilog and Elasticsearch?
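
There is no complete fix on the Elasticsearch side, since a field can only have one mapping per index. One partial mitigation is the index.mapping.ignore_malformed setting, sketched below in an index template (the index pattern is a placeholder): documents whose value cannot be parsed into the mapped type are still indexed, with only the offending field dropped. Note this helps for strings sent to numeric or date fields, not for object-versus-scalar conflicts.

{
  "template": "logstash-*",
  "settings": {
    "index.mapping.ignore_malformed": true
  }
}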

Missing {"@ in body

Hi,

I have been using Serilog for some time now and have noticed that, once in a while, we are missing some logs. When I looked into what might have caused this, I found the following:

First I created a test API that received a call and then logged it to a buffer file (which never misses a log, by the way).
I then created a program that sent 5000 requests to the API. I could see 5000 logs in the buffer file, but only 4999 logs when I looked in Kibana.

I then looked at the network traffic and found the following:
[screenshot]

If we take a look at the server running Elasticsearch, it gives this error:
[screenshot]

So, from what I can see, Serilog sometimes cuts {"@ away from the timestamp. I've tried to run it multiple times, but it does not always happen (I often need to run it 4-10 times before I see the error again). It is, however, always the second log that fails.
The buffer file does not cut away {"@, as shown below:
[screenshot]

I'm not sure why this happens, and especially why it only happens sometimes.

I'm currently running Elasticsearch 5.2.1 and Kibana 5.2.1.
The NuGet package I use is Serilog.Sinks.Elasticsearch version 5.0.0 (which depends on Serilog 2.2.1).

Serilog elasticsearch sink only works from a controller

A couple of weeks ago I installed ELK and everything seemed to work fine. I have a web application with Serilog and the Elasticsearch sink, and everything works fine when I'm logging from a controller.

I'm using the latest packages.

However, when I tried to log from a console application it didn't work. Then I tried to log from a unit test in my web project, and it didn't work either.

How can this be?

I looked at the web requests in Fiddler and found that when I log from the unit test, it sends an "empty" request to Elasticsearch at this URL: /_template/serilog-events-template

This also happens when I log from the controller in the web project, but the web project also sends another request right afterwards, to /bulk, which contains the error message.

Here is an example:

var url = ConfigurationManager.AppSettings["Logger.Url"];
var obj = new Test() { Name = "MyName", Phone = "123456789" };
var messageTemplate = "This is a test {@obj}";
var exception = new Exception("Exception test");

var loggerConfig = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri(url))
    {
        AutoRegisterTemplate = true,
    });

var logger = loggerConfig.CreateLogger();
logger.Error(exception, messageTemplate, obj);

Any suggestions?
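
One likely explanation, offered as an assumption: the sink posts events in periodic batches, so a short-lived console app or unit test can exit before the first batch is sent, while a long-running web process keeps flushing in the background. Disposing the logger before the process exits forces the final batch out, continuing the snippet above:

var logger = loggerConfig.CreateLogger();
try
{
    logger.Error(exception, messageTemplate, obj);
}
finally
{
    // Dispose() flushes any batched events still waiting to be posted;
    // with the static Log class, Log.CloseAndFlush() does the same.
    logger.Dispose();
}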

Support for elasticsearch 5.x

Hi.
Are there any ongoing plans to support version 5.x of Elasticsearch?
If not, is there any workaround I can use so this sink can work with elasticsearch-net 5.0?

IndexDecider - AppSettings

Hi there.

Just wondering if there is a way to set the index decider from the app settings. Previously I had the configuration in app settings, which I have since had to move to startup code to get this functionality.

This is my current code:

var ES = new ElasticsearchSinkOptions(new Uri(ConfigurationManager.AppSettings["SeriLog.ElasticSearch.nodeUri"]))
{
    MinimumLogEventLevel = (LogEventLevel)Enum.Parse(typeof(LogEventLevel), ConfigurationManager.AppSettings["SeriLog.ElasticSearch.MinimumLogLevel"]),
    Period = TimeSpan.FromSeconds(int.Parse(ConfigurationManager.AppSettings["SeriLog.ElasticSearch.PostingPeriodSeconds"])),
    BatchPostingLimit = int.Parse(ConfigurationManager.AppSettings["SeriLog.ElasticSearch.BatchPostingLimit"]),
    TemplateName = "test-template",
    IndexFormat = "test-{0:yyyy.MM.dd}",
    IndexDecider = (@event, offset) => $"test-{@event.Level.ToString()}-{@event.Timestamp:yyyy.MM.dd}".ToLower()
};

The problem line is the last one, IndexDecider = (@event, offset) => $"test-{@event.Level.ToString()}-{@event.Timestamp:yyyy.MM.dd}".ToLower(). Is there a way to put this into app settings? I prefer having my setup done there rather than in code.

When using Buffer-files the callback to IndexDecider is fired with an empty LogEvent.

new ElasticsearchSinkOptions(new Uri(elasticSearchUri))
{
    BufferBaseFilename = elasticSearchBufferFolder,
    // Build an index name from the event's level and the date.
    IndexDecider = (ev, offset) =>
    {
        return string.Format("{0}-{1}-{2:yyyy.MM.dd}", "logs", ev.Level, offset);
    }
};

Line 163 of ElasticSearchLogShipper passes in null:

var indexName = _state.GetIndexForEvent(null, date);

What we're trying to accomplish is reading the log level of each event and creating indices based on it, so that we can have different retention policies for fatal/error and info/debug. Maybe there is another way to accomplish that?
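
Until the shipper passes a real event, one workaround is to guard against the null LogEvent and fall back to a generic index, so durable shipping keeps working while live events still get level-based indices. A sketch, assuming the lambda signature shown above:

IndexDecider = (ev, offset) =>
{
    // ev is null when events are replayed from buffer files (see the
    // shipper line quoted above), so fall back to a fixed level segment.
    var level = ev != null ? ev.Level.ToString().ToLowerInvariant() : "unknown";
    return string.Format("logs-{0}-{1:yyyy.MM.dd}", level, offset);
};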

Not writing to all sinks with multiple elasticsearch sinks (one or the other)

I found some issues when multiple Elasticsearch nodes are defined for the sink:

<add key="serilog:write-to:Elasticsearch.nodeUris" value="http://localhost:9200;http://remoteurl:9200" />

It does write all the events, but some events are only written to one server or the other, so events are not duplicated on both servers.

ES5 "org.elasticsearch.index.mapper.MapperParsingException: failed to parse [fields.0]" on receiving end when using anonymous destructuring.

Hello,

Given an empty ES installation and the following console app code:

class Program
{
    static void Main(string[] args)
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
            {
                AutoRegisterTemplate = true
            })
            .CreateLogger();

        while (true)
        {
            Log.Information("Here is some information {@0}", new { NowIs = DateTime.UtcNow, SomeStuff = "something" });
            Thread.Sleep(300);
        }
    }
}

The logs don't enter Elasticsearch 5; the receiving end fails with the following exception:

org.elasticsearch.index.mapper.MapperParsingException: failed to parse [fields.0]
	at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:297) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:438) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:475) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:371) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:361) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:435) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:475) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.innerParseObject(DocumentParser.java:371) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrNested(DocumentParser.java:361) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.internalParseDocument(DocumentParser.java:93) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:66) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:274) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:529) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:506) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:174) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:179) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:351) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.index(TransportShardBulkAction.java:158) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.handleItem(TransportShardBulkAction.java:137) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.onPrimaryShard(TransportShardBulkAction.java:123) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.bulk.TransportShardBulkAction.onPrimaryShard(TransportShardBulkAction.java:74) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnPrimary(TransportWriteAction.java:78) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnPrimary(TransportWriteAction.java:50) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:902) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:872) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:319) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:254) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:838) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:835) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:142) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1651) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:847) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:271) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:250) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:242) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.transport.TransportService$6.doRun(TransportService.java:548) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:504) [elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.0.0.jar:5.0.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_111]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_111]
	at java.lang.Thread.run(Unknown Source) [?:1.8.0_111]
Caused by: java.lang.IllegalStateException: Can't get text on a START_OBJECT at 1:231
	at org.elasticsearch.common.xcontent.json.JsonXContentParser.text(JsonXContentParser.java:85) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.common.xcontent.support.AbstractXContentParser.textOrNull(AbstractXContentParser.java:199) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.TextFieldMapper.parseCreateField(TextFieldMapper.java:379) ~[elasticsearch-5.0.0.jar:5.0.0]
	at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:286) ~[elasticsearch-5.0.0.jar:5.0.0]
	... 43 more

Changing "{@0}" to "{@Info}" makes it work, however this is not the solution I am looking for, because of existing codebase, and other active sinks.

Any help appreciated.

P.S.
There are also a lot of warnings of the form:
The [string] field is deprecated, please use [text] or [keyword] instead on *

Regards,

Docs not up-to-date?

Hi,

So I see that there's a 5.0 on NuGet for this sink, but I can't find any information here on what is in 5.0, nor what the breaking changes are.

Am I just not looking in the right place? Having the releases section kept up to date would be very helpful as well.

How to prevent string fields from being analysed?

Whilst logging events to Elasticsearch through Serilog, all the string fields are getting mapped as analysed. Is there a way to prevent this or to take control of the mapping?

The app.config settings are as follows:

  <appSettings>
    <add key="serilog:using" value="Serilog.Sinks.Elasticsearch" />
    <add key="serilog:minimum-level" value="Debug" />
    <add key="serilog:write-to:Elasticsearch.nodeUris" value="http://localhost:9201" />
    <add key="serilog:write-to:Elasticsearch.indexFormat" value="taf-301-test-{0:yyyy.MM}" />
    <add key="serilog:write-to:Elasticsearch.autoRegisterTemplate" value="true" />
    <add key="serilog:write-to:Elasticsearch.BufferBaseFilename" value="c:\logs" />
  </appSettings>

Thanks

specify Authorization header in configuration

I am using Serilog.Configuration to configure my Serilog, and I am writing my logs to Elasticsearch.
My Elasticsearch instance requires a Bearer token in the Authorization header.
At the moment there is no way to specify the Authorization header in the web/app.config file;
it can only be done programmatically, by specifying ModifyConnectionSettings (see the sketch below).
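
For reference, the programmatic route looks roughly like this; a sketch assuming an Elasticsearch.Net version that exposes GlobalHeaders, with "<token>" as a placeholder:

using System;
using System.Collections.Specialized;
using Serilog;
using Serilog.Sinks.Elasticsearch;

Log.Logger = new LoggerConfiguration()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("https://localhost:9200"))
    {
        // Attach the bearer token to every request the sink makes.
        ModifyConnectionSettings = c => c.GlobalHeaders(new NameValueCollection
        {
            { "Authorization", "Bearer <token>" }
        })
    })
    .CreateLogger();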

BufferBaseFilename not been used with appSettings

I have the following code, which when executed does not result in any Log.Bookmark file when I switch off my local Elasticsearch server.

var log = new LoggerConfiguration()
            .ReadFrom.AppSettings()
            .CreateLogger();
for (int i = 0; i < 100000; i++)
{
    log.Debug("hello world");
}

log.Dispose();
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <appSettings>
    <add key="serilog:using" value="Serilog.Sinks.Elasticsearch" />
    <add key="serilog:write-to:Elasticsearch.NodeUris" value="http://localhost:9200;http://remotehost:9200" />
    <add key="serilog:write-to:Elasticsearch.IndexFormat" value="custom-index-{0:yyyy.MM}" />
    <add key="serilog:write-to:Elasticsearch.TemplateName" value="myCustomTemplate" />
    <add key="serilog:write-to:Elasticsearch.TypeName" value="myCustomLogEventType" />
    <add key="serilog:write-to:Elasticsearch.BatchPostingLimit" value="50" />
    <add key="serilog:write-to:Elasticsearch.Period" value="2" />
    <add key="serilog:write-to:Elasticsearch.InlineFields" value="true" />
    <add key="serilog:write-to:Elasticsearch.MinimumLogEventLevel" value="Verbose" />
    <add key="serilog:write-to:Elasticsearch.BufferBaseFilename" value="C:\Temp\Logs" />
    <add key="serilog:write-to:Elasticsearch.BufferLogShippingInterval" value="5000" />
  </appSettings>
</configuration>

However, when I change the code to the following, I do start seeing the Logs.bookmark file, although it is empty.

var log = new LoggerConfiguration()
            .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
            {
                MinimumLogEventLevel = LogEventLevel.Verbose,
                AutoRegisterTemplate = true,
                BufferBaseFilename = @"C:\Temp\Logs"
            })
            .CreateLogger();
for (int i = 0; i < 100000; i++)
{
    log.Debug("hello world");
}

log.Dispose();

Inconsistent naming

Hi guys!
Please rename ElasticsearchSinkOptions to ElasticSearchSinkOptions, ElasticsearchJsonFormatter to ElasticSearchJsonFormatter, and ElasticsearchSinkState to ElasticSearchSinkState. Or I can create a PR for this; it's your decision.
Thank you!

Appsettings; Minimum level from Serilog is not being respected

This sink defaults to a minimum level of Information if it is not specifically overridden. This only occurs when initialising from AppSettings.

The code pasted below will allow you to reproduce the issue.

<appSettings>
  <add key="serilog:minimum-level" value="Debug" />
  <add key="serilog:using:RollingFile" value="Serilog.Sinks.RollingFile" />
  <add key="serilog:write-to:RollingFile.buffered" value="false" />
  <add key="serilog:write-to:RollingFile.pathFormat" value="C:\Logs\SerilogTest\Log-{Date}.txt" />
  <add key="serilog:write-to:RollingFile.retainedFileCountLimit" value="30" />

  <add key="serilog:using" value="Serilog.Sinks.Elasticsearch" />
  <add key="serilog:write-to:Elasticsearch.nodeUris" value="http://localhost:8888" />
  <add key="serilog:write-to:Elasticsearch.indexFormat" value="test-{0:yyyy.MM}" />
  <add key="serilog:write-to:Elasticsearch.templateName" value="test-template" />
  <add key="serilog:write-to:Elasticsearch.batchPostingLimit" value="1" />
  <!--<add key="serilog:write-to:Elasticsearch.minimumLogEventLevel" value="Debug"/>-->
</appSettings>
Log.Logger = new LoggerConfiguration()
    .ReadFrom.AppSettings()
    //.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:8888"))
    //{
    //    AutoRegisterTemplate = true,
    //    TemplateName = "test-template",
    //    IndexFormat = "test-{0:yyyy.MM}",
    //    BatchPostingLimit = 1
    //})
    .CreateLogger();

var file = File.CreateText("C:\\Logs\\SeriLogErrors.txt");
Serilog.Debugging.SelfLog.Enable(TextWriter.Synchronized(file));

Log.Logger.Debug("I am a debug message");

Add support for pipelines

I'd like to be able to specify a pipeline when ingesting events into ES. From what I can see, this could be done quite easily by specifying the BulkRequestParameters.Pipeline option in the requestParameters configurator parameter of the Bulk method.

Can we create a TypeDecider just like the IndexDecider?

Hi -- We are using the IndexDecider configuration and it works great. We would also like to be able to determine the TypeName from the content of the LogEvent in a similar way. We can fork the code and create a PR with the new options, but we thought it worth asking here first whether this would be considered, or whether some other approach would be better.

Custom template name when using GetTemplateContent option?

@blachniet, #51 added the GetTemplateContent option, which I was able to get working when OverwriteTemplate = true. However, the name of the template is serilog-events-template. I have several tools and services that will be logging with Serilog, and many of them need custom templates.

Is there a way, when using GetTemplateContent, to specify the name of the template?
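
One combination worth trying, sketched under the assumption that TemplateName (which appears in other configuration examples in these issues) is also honoured when GetTemplateContent supplies the body:

var options = new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
{
    AutoRegisterTemplate = true,
    OverwriteTemplate = true,
    TemplateName = "my-service-template",   // assumed to replace serilog-events-template
    GetTemplateContent = () => new
    {
        // Placeholder template body; supply your real mappings here.
        template = "my-service-*",
        mappings = new { }
    }
};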

Add static method to flush orphaned buffers files

My use of this sink leaves me with a lot of orphaned buffer files (an ASP.NET web application with an app pool created and destroyed for each version). As far as I know, there is no way at the moment to handle this explicitly(?).

It would be nice to have a method that I can call to force the processing of buffer files (I mean buffer files not actively being used by a logger instance).

I guess a good place would be a static method on ElasticsearchSink, something like this:

public static void FlushBuffer(ElasticsearchSinkOptions options);

The method would throw when an error occurs (when not intercepted by the Serilog SelfLog).

Does this look OK as a new feature?
Would a PR be considered if I decide to start implementing it? (It would probably imply some refactoring to isolate the shipping logic.)

serilog-sinks-elasticsearch does not support serilog 2.0.0

I get the following error when adding the latest NuGet package for Serilog (i.e. 2.0.0) and then adding Serilog.Sinks.Elasticsearch:

'Serilog 2.0.0' is not compatible with 'Serilog.Sinks.Elasticsearch 3.0.134 constraint: Serilog (>= 1.5.14 && < 2.0.0)'

Log events don't arrive in Elasticsearch with {@Property} in the message template

I use the Elasticsearch sink (DurableElasticsearchSink) in an IIS-hosted WCF service.

Question 1: Is this a supported scenario?

Question 2: Some of my log messages aren't sent to Elasticsearch when the message template includes an {@MyClassValues} property, yet in the BufferBaseFile bookmark file the bookmark is set to the last line. (The shipper "thinks" it has already sent the log event, but I don't see it in Elasticsearch/Kibana.)
Interestingly, some other log messages with @ destructuring operators work as expected.

The object I want to destructure is an instance of the following class:

public class MyClass
{
    public Guid Identifier { get; set; }

    public int MyInt1 { get; set; }

    public int MyInt2 { get; set; }
}

Are there any known issues?

Data not being saved to my elastic index

Hi there,

I can't quite figure out my issue here. Perhaps I'm using Serilog and this sink in ways I am not meant to. My issue is that I can see this sink bookmarking the durable file successfully, but the data does not appear in my index!

I have a single public static IConnectionPool SharedElasticConnection { get; } property that I use to instantiate all my Elasticsearch clients (including the usages of this sink).

I instantiate multiple Serilog loggers (one per type of object I want to save to Elasticsearch), each with the Elasticsearch sink configured in pretty much the same way. They are held by my generic class public class ElasticPersistorService<TDocument>:

var elasticOptions = new ElasticsearchSinkOptions(_settings.ConnectionPool)
{
    IndexFormat = "MyIndex-{0:yyyy.MM.dd}",
    TypeName = typeof(TDocument).Name,
    AutoRegisterTemplate = true,
    // we do this to make renderMessage: false (the message is a textual representation of our data object)
    CustomFormatter = new ElasticsearchJsonFormatter(inlineFields: true, closingDelimiter: string.Empty),
    CustomDurableFormatter = new ElasticsearchJsonFormatter(inlineFields: true, closingDelimiter: string.Empty),
    BufferBaseFilename = $"{thisLogFolder}\\{typeof(TDocument).Name}-buffer"
};
_seriLogger = new LoggerConfiguration()
    .WriteTo.Elasticsearch(elasticOptions)
    .CreateLogger()
    .ForContext<TDocument>();

I use this _seriLogger property in one place:

public void SendToElastic(TDocument d)
{
    _seriLogger.Information("{@" + typeof(TDocument).Name + "}", d);
}

I want these loggers to place documents, formatted by Serilog and this sink, into a single index (but each with a different type name), but the documents aren't appearing...

For example, today, using the above code, I have MyType-buffer-20160309.json sized at 74,832 KB, with the latest MyType-buffer.bookmark containing:

76647204:::C:\Serilog\MyType-buffer-20160309.json

It is my understanding that this means the sink has recorded a successful index operation for the majority of the file. However, I see only 1 or 2 documents for each type in my Elasticsearch instance. I am not using Logstash - just straight to Elasticsearch via HTTP.

The strange thing is that when I create a new index, say MyIndexNew-{0:yyyy.MM.dd}, it all works nicely for a while and then stops again.

Any ideas?

Help, is there any sample project to use this sink?

I am pretty sure I am doing something wrong.

I have created a new project added Serilog and Serilog.Sinks.Elasticsearch nuget packages.

When I run it, I face the following error:

"An unhandled exception of type 'System.IO.FileNotFoundException' occurred in LogConsole.exe

Additional information: Could not load file or assembly 'Serilog.FullNetFx, Version=1.5.0.0, Culture=neutral, PublicKeyToken=24c2f752a8e58a10' or one of its dependencies. The system cannot find the file specified."

I have .NET Core RC2 installed on my machine; I suspect it could be related to that, but I'm not sure.

var loggerConfig = new LoggerConfiguration()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:80"))
    {
        AutoRegisterTemplate = true,
    });
Log.Logger = loggerConfig.CreateLogger();
LogMessage message = new LogMessage();
Log.Information("{@message}", message);
Console.ReadKey();

Example of index decider with date in the index name?

I was just trying to figure out how to use the IndexDecider configuration function. I've got a piece of data in the LogEvent (say a property called "product") that varies between calling methods. I'd just like the indexes to be {product}-{0:yyyy.MM.dd}, like the existing index name format, just with the product part being variable.

I wouldn't call this an "issue" -- I just couldn't figure it out from the available documentation.
Thanks for the help.
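
A sketch of one way to do it, assuming "product" is attached to each event (for example via ForContext("product", ...)); note that ScalarValue.ToString() renders strings with quotes, hence the Trim:

var options = new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
{
    IndexDecider = (logEvent, offset) =>
    {
        // Fall back to a fixed name when the event lacks the property.
        Serilog.Events.LogEventPropertyValue product;
        var name = logEvent.Properties.TryGetValue("product", out product)
            ? product.ToString().Trim('"')
            : "default";
        return string.Format("{0}-{1:yyyy.MM.dd}", name.ToLowerInvariant(), offset);
    }
};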

Dependency on Elasticsearch.Net is not needed

Being faced with the problem of updating dependencies in a production system, I ran into a conflict between this minor logging plugin and some core-level code, both of which use Elasticsearch.Net. Right now the problem is isolated with some tricks, but I think it would be great to get rid of the Elasticsearch.Net package in the sink's code. I've discovered 3 key usages:

  • Connectivity management (mostly the StaticConnectionPool usage);
  • Registering the template (essentially an HTTP PUT request);
  • Bulk indexing (an HTTP POST request).

None of this is something that could not be built into the code directly (either written from scratch or based on the original Elastic client code).

Thus, we would achieve a nicely decoupled solution that does not interfere with the main project's dependencies.

It would be good to open the discussion before starting such massive refactoring.

Tag releases and include AssemblyInformationalVersion attribute with commit hash code

Tagging releases makes it possible to find, from the git side, the commit from which a specific version of the library was built. An AssemblyInformationalVersion attribute containing the commit hash helps with that too. It's better to include both options, for more flexibility. The attribute looks like this:

[assembly: AssemblyInformationalVersion("1.3.1.74-g23246c2")]

Is there a way to preserve the order of log events?

At the moment, when we log events that occur very close together, Serilog sometimes assigns the same timestamp to several events. Is there a way to preserve the order of the logs even when the timestamps are identical, so that we can retrieve the events in order through Kibana or the NEST APIs?

See related question asked at the serilog forum serilog/serilog#893
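
The sink itself offers no ordering guarantee. One illustrative workaround is a process-local enricher that stamps every event with a monotonically increasing sequence number, which can then be used as a tie-breaker when sorting by timestamp in Kibana or NEST; a sketch:

using System.Threading;
using Serilog.Core;
using Serilog.Events;

public class SequenceNumberEnricher : ILogEventEnricher
{
    private static long _sequence;

    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        // Monotonic within this process only; a tie-breaker, not a global order.
        logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty(
            "Sequence", Interlocked.Increment(ref _sequence)));
    }
}

Register it with .Enrich.With(new SequenceNumberEnricher()) and sort on Sequence after the timestamp.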

Transformations

Hi,

This is more of a question than an issue, and I'm new to ELK so may be somewhat naive. I've inherited a codebase that uses Serilog to log large arbitrary objects when writing log statements. Usually these represent the state the method is in (local variables, etc.). Each log is different, and the names of variables are not standardized across statements. Sometimes large, complex object graphs are logged.

  1. I guess the first question is: is Serilog meant to be used this way? I see the benefit of having a field structured in the index when that field is standardized and can be aggregated or queried, but not so much when I just want the values logged to the 'message' field (which is analyzed in ES anyway).
  2. Is there an issue with having a few thousand fields in an ES index? And should I be changing these log messages so that they don't write arbitrarily deep structures?
  3. Should I manually create the index and allow Serilog to push whatever documents it likes, but only index the fields I am interested in standardizing and querying outside of the message field?

Thanks very much

Better exception viewing in Kibana?

I'm guessing this isn't a problem with this sink, but I'm really not sure. Exceptions, mainly stack traces, are a disaster to view in Kibana. Right now I have to copy/paste into Sublime Text and find/replace \n with a line break. If that's how it has to be done then it's no problem, but I wanted to check whether there's a better way.


Elasticsearch seems to ignore MinimumLogEventLevel

I have a Serilog logger with two sinks, Elasticsearch and a custom sink (RichTextBox).

The minimum logging level for the logger is set to Verbose, and each sink has its own level set (see below).

  1. Elasticsearch - set via the MinimumLogEventLevel property of ElasticsearchSinkOptions.
  2. Custom sink - using restrictedToMinimumLevel.

In the following configuration, the Elasticsearch sink always logs Verbose and higher, while the custom sink always correctly logs Information and higher. How can I get the Elasticsearch sink to log only Information and higher?

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Verbose()
    .WriteTo.Sink(new ElasticsearchSink(new ElasticsearchSinkOptions(new Uri(elasticsearchUrl))
    {
        AutoRegisterTemplate = true,
        IndexFormat = "anduin-station-{0:yyyy.MM.dd}",
        MinimumLogEventLevel = Serilog.Events.LogEventLevel.Information
    }))
    .WriteTo.TextboxLogger(control: txtOutput,
                           restrictedToMinimumLevel: Serilog.Events.LogEventLevel.Information)
    .Enrich.FromLogContext()
    .Enrich.With(new StationNameEnricher())
    .Enrich.With(new UsernameEnricher())
    .CreateLogger();

Am I missing the point, or is this a bug?
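
A plausible cause, stated as an assumption rather than confirmed behaviour: the options' MinimumLogEventLevel may only be applied by the WriteTo.Elasticsearch(...) extension, while WriteTo.Sink(...) wraps the sink with no threshold unless one is passed explicitly. A sketch of the workaround:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Verbose()
    .WriteTo.Sink(
        new ElasticsearchSink(new ElasticsearchSinkOptions(new Uri(elasticsearchUrl))
        {
            AutoRegisterTemplate = true,
            IndexFormat = "anduin-station-{0:yyyy.MM.dd}"
        }),
        // Filter at the pipeline level instead of inside the sink options.
        restrictedToMinimumLevel: Serilog.Events.LogEventLevel.Information)
    .CreateLogger();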
