
Comments (4)

tobiasbrunner avatar tobiasbrunner commented on June 12, 2024

My personal perception is that this format is not very parser-friendly, although we managed to create a regex that fits the majority of messages.

Not sure what issue you had with parsing these messages with a regex, as it seems pretty straightforward to me (even with the IKE name/ID being optional, it seems quite easy to parse).
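
To illustrate the point, here is a hedged sketch of such a regex in Python. It assumes a charon filelog line shaped like `Apr 25 06:14:35 06[IKE] <az|3> received DELETE ...` (timestamp, thread number, log group, optional `<name|uniqueid>` when `ike_name` is enabled, then the message); the exact layout depends on the logger configuration, so treat the pattern as an example, not the canonical format.

```python
import re

# Assumed line layout: "<timestamp> <thread>[GROUP] <name|id> message"
# where the "<name|id>" part is optional, matching the comment above.
LOG_RE = re.compile(
    r"(?P<time>\w{3} +\d+ \d{2}:\d{2}:\d{2}) "      # timestamp (format is configurable)
    r"(?P<thread>\d+)\[(?P<group>[A-Z0-9]+)\] "     # thread number and log group
    r"(?:<(?P<name>[^|>]+)\|(?P<uniqueid>\d+)> )?"  # optional IKE_SA name and unique ID
    r"(?P<msg>.*)"                                  # the message itself
)

line = "Apr 25 06:14:35 06[IKE] <az|3> received DELETE for ESP CHILD_SA with SPI 6ec3571d"
m = LOG_RE.match(line)
```

Because the name/ID group is wrapped in `(?:...)?`, lines without an IKE_SA context still match, with those capture groups set to `None`.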

Note that, depending on your use case, structured log messages are already available via the vici protocol. And charon-systemd's journal logger also logs these elements separately.

What's your use case anyway?

Would it be possible to add a more verbose jsonFormat output, configurable from the Logger Configuration?

I guess it would be possible to write the messages to files in a more structured way. Note that your example isn't JSON at all, though. It would look more like this (I changed some of the properties to match what we use in the vici logger):

{"time": "2024-04-25T06:14:35", "thread": 6, "group": "IKE", "level": 1, "ikesa-name": "az", "ikesa-uniqueid": 3, "msg": "received DELETE for ESP CHILD_SA with SPI 6ec3571d"}

However, there are some potential issues. One is that JSON doesn't support multi-line strings, so newlines in log messages would either have to be escaped as \n, or used as a separator to export the lines as an array (i.e. something like "msg": [ "line1", "line2" ]). Similarly, " would have to be escaped as \".
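
The escaping described above is what any standard JSON encoder already does. A small Python sketch (illustrative only, not strongSwan code) shows that newlines, double quotes, and backslashes all become two-character escapes and round-trip losslessly:

```python
import json

# A message containing a newline, double quotes, and a backslash --
# the three characters discussed above that need escaping in JSON.
msg = 'parsed "subject" line one\nline two with a back\\slash'
encoded = json.dumps({"level": 1, "msg": msg})
# encoded now contains \n, \" and \\ as escape sequences
decoded = json.loads(encoded)
```

A JSON logger therefore only has to build one object per message and hand it to the encoder; no ad-hoc escaping logic is needed.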

Another possible issue is that logging each log message as a JSON object wouldn't result in a valid JSON file. We could theoretically start the file with [ and add a , between the objects, but that would also never be valid (the array is never closed, unless that's done manually before processing the file) and it would cause problems when appending messages to existing files. So what we'd basically produce is a "stream" of JSON objects and it would depend on the parser that processes it whether that's acceptable or not. For instance, jq accepts such input just fine (it actually won't accept it if the objects are comma-separated or as an incomplete array), and with the --slurp option it would even automatically convert the object "stream" into an array of objects in order to apply filters on it.
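
Consuming such a "stream" of JSON objects is simple for any parser that reads one object at a time; the sketch below (an assumption about how a downstream consumer might work, mirroring jq's behavior with object streams) parses newline-delimited objects without ever needing a closing array bracket:

```python
import json

# A JSON logger would append one object per line; the file as a whole is
# never a single valid JSON document, but each line is.
log_stream = (
    '{"level": 1, "msg": "first"}\n'
    '{"level": 2, "msg": "second"}\n'
)
records = [json.loads(line) for line in log_stream.splitlines() if line.strip()]
```

This is essentially the "JSON Lines" convention, and it also sidesteps the append problem: new messages can be written to an existing file without rewriting anything.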

from strongswan.

SaschaJohn avatar SaschaJohn commented on June 12, 2024

Hello @tobiasbrunner thanks for answering and also starting to think about this feature request.
We're initiating a VPN connection from a Kubernetes container.

This means that we have nothing running except charon-systemd and sleep to keep the container running.
Journal logger or vici consumer would be additional workload within the container.

Therefore, logging to stderr is a perfect fit for ingesting all log messages into our central Grafana Loki instance.

In Loki we then split the message into labels using a regex. And as you said, this was pretty straightforward.
We then use the parts to create a Grafana log view that shows them as table columns, with each log line represented as a row.

The only drawback is that the regex syntax to split the message into labels ends up as "beautiful" in Loki as this:

| regexp "(?P<method>\\w+) (?P<path>[\\w|/]+) \\((?P<status>\\d+?)\\) (?P<duration>.*)"

You might get the point ;) Whereas JSON would be split into labels automatically. We might now see this as an argument to reconsider our logging environment.

I also know the ELK stack a bit for the same purpose; there, JSON is also a better fit for Kibana visualization.

And the system can automatically derive labels from the JSON keys. The same applies to Loki.

I don't know how often you currently use " and multi-line strings in logging, but I agree this is a point to take into consideration as well. It might complicate the implementation.

The idea is not to produce a valid JSON file as output, but to log each line/message as a JSON object.


tobiasbrunner avatar tobiasbrunner commented on June 12, 2024

This means, that we have nothing running except charon-systemd

What about systemd? Or are you running that daemon without systemd?

Journal logger or vici consumer would be additional workload within the container.

Maybe you could bind mount the journal socket (e.g. /run/systemd/journal/socket) from the host into the container, so the daemon could log to the journald instance running on the host. Promtail seems to be able to get logs from there into Loki.

I don't know how likely you currently use " and multiline strings in logging

" is used in some outputs (e.g. subject DNs of certificates) but we more commonly use ' for such stuff as the former has to be escaped in C strings as well. Multi-line strings are only used on higher log levels e.g. to dump message data (see the not directly related #2066 for examples). We'd also have to escape \, by the way, as that can be part of the ASCII view of dumped binary data.


tobiasbrunner avatar tobiasbrunner commented on June 12, 2024

By the way, I pushed a patch to the 2222-json-filelog branch that adds an option to log messages as JSON objects.

