
fortinet-2-elasticsearch's Introduction

FortiDragon (fortinet-2-elasticsearch) 🐉


Engage

Join our community on Discord 🤓. Feel free to ask about anything on the channel.

You are already saving a lot of money by using Fortinet+Elastic, so consider making a contribution to the project. 💰💰💰 (It is NOT a must for asking for help)

FortiDragon 🆚 Filebeat

So you want to take your Fortinet logs to Elasticsearch??? You have come to the right place!!! 👍

But wait! Doesn't Elastic provide a Filebeat module for Fortinet??? Why should you go through all the Logstash hassle?? 🤷

Well, the Filebeat module and FortiDragon are like cousins 👪. The logic of the Filebeat module for Fortigate was based on FortiDragon; we collaborated with Elastic when they built it.

The main difference is that FortiDragon is a full analytics platform for threat hunting with Fortinet data sources; we do not restrict it to just the "ingestion" of logs.

We actually use FortiDragon in our day-to-day threat hunting operations, so we understand all the pain points of a security analyst. That is why we created it in the first place: after 10+ years of experience with Fortinet we could not find a solution that could extract all the juice out of Fortinet logs. We tried several SIEMs along the way and found out that firewall logs are just a checkbox on their datasheet. Full parsing and performance for such a volume of logs was not carefully considered by any SIEM vendor. Finally we decided we needed to build it ourselves and chose Elastic because of its flexibility, performance and cost. FortiDragon is by far the best out there.

Some notable differences with Filebeat:

Category         FortiDragon                                  Filebeat
Dashboards       We got super cool dashboards!!!              Just one for now 😢
Other platforms  FortiEDR, FortiClient, FortiMail, FortiWeb   Just Fortigate
Updates          Much more often                              Dependent on Elastic releases
Installation     Harder                                       Easier

If you can handle the hassle of the Logstash installation, it is worth the effort.

Installation

Let's get this party on!!! 🤩

On Fortigate

  1. Configure syslog
  config log syslogd setting
      set status enable
      set server "logstash_IP"
      set port 5140
  end

Or, if you run FortiOS v7, you can use syslog in RFC 5424 format. RECOMMENDED

  config log syslogd setting
      set status enable
      set server "logstash_IP"
      set port 5141
      set format rfc5424
  end

You have to be very careful with your firewall name when using the syslog RFC 5424 format:

MY_FIREWALL_SITEA will not work

MY-FIREWALL-SITEA will work
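
If your firewall name currently contains underscores, it can be renamed from the CLI. A minimal sketch (the hostname value is just an example):

  config system global
      set hostname "MY-FIREWALL-SITEA"
  end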

  1. Extended logging on webfilter OPTIONAL
  config webfilter profile
      edit "test-webfilter"
          set extended-log enable
          set web-extended-all-action-log enable
      next
  end

You may get a warning that you need to change to reliable syslogd. Remember that "The full rawdata field of 20KB is only sent to reliable Syslog servers. Other logging devices, such as disk, FortiAnalyzer, and UDP Syslog servers, receive the information, but only keep a maximum of 2KB total log length, including the rawdata field, and discard the rest of the extended log information."

  1. If you would also like to have metrics about your SDWAN Performance SLAs and view them in the SDWAN dashboard, you need to set both sla-fail-log-period and sla-pass-log-period on your health checks.
  config health-check
      edit "Google"
          set server "8.8.8.8" "8.8.4.4"
          set sla-fail-log-period 10
          set sla-pass-log-period 30
          set members 0
          config sla
              edit 1
                  set latency-threshold 100
                  set jitter-threshold 10
                  set packetloss-threshold 5
              next
          end
      next
  end
  1. You can also pump your own fields into Fortigate's syslog OPTIONAL
  config log custom-field
      edit "3"
          set name "org"
          set value "some_organization_name"
      next
  end
  
  config log setting
      set custom-log-fields "3"
  end

On Kibana (Dev Tools)

  1. Load ingest pipeline OPTIONAL. If you decide not to use it, remember to comment out the corresponding pipeline reference in the output pipeline (see the sketch after the code block below).
PUT _ingest/pipeline/add_event_ingested
{
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
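
For reference, the elasticsearch output in the output pipeline points to this ingest pipeline with an option along these lines. This is a hedged sketch, not the exact content of the conf.d files; host and credentials are placeholders:

output {
  elasticsearch {
    hosts => ["https://your-elasticsearch:9200"]
    data_stream => true
    # comment out the next line if you skipped loading add_event_ingested
    pipeline => "add_event_ingested"
  }
}
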
  1. Create ILM policies according to your needs. You can use these examples. Make sure you name them according to your index strategy. For our case, that would be:
  • logs-fortinet.fortigate.traffic

  • logs-fortinet.fortigate.utm

  • logs-fortinet.fortigate.event

    In our experience, type=traffic generates lots of logs, while type=event very few. So it makes sense to have different lifecycles for different types of logs. Other slicing ideas can be found below.
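
As a starting point, a minimal hot/delete policy for the traffic data stream could look like this. The rollover sizes and retention are placeholders, adjust them to your needs:

PUT _ilm/policy/logs-fortinet.fortigate.traffic
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}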

  1. Load the component templates, both the Elastic ECS ones and the FortiDragon-specific ones. Do it manually, one by one:
PUT _component_template/ecs-base
{
  "_meta": {
    "documentation": "https://www.elastic.co/guide/en/ecs/current/ecs-base.html",
    "ecs_version": "8.3.1"
  },
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "labels": {
          "type": "object"
        },
        "message": {
          "type": "match_only_text"
        },
        "tags": {
          "ignore_above": 1024,
          "type": "keyword"
        }
      }
    }
  }
}
  1. Load index templates
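
The actual templates are in the repository; as a rough reference, an index template that glues the component templates to a data stream looks like this (the composed_of list here is abbreviated, use the full list from the repo files):

PUT _index_template/logs-fortinet.fortigate.traffic
{
  "priority": 200,
  "index_patterns": ["logs-fortinet.fortigate.traffic*"],
  "data_stream": {},
  "composed_of": [
    "ecs-base",
    "ecs-network",
    "logs-fortinet.fortigate.traffic@ilm"
  ]
}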

  2. Load Dashboards: Go to Management --> Stack Management --> Saved Objects --> Import

  3. Make sure dashboard controls are enabled: Go to Management --> Kibana Advanced Settings --> Presentation Labs --> Enable dashboard controls

On Logstash

  1. Install Logstash

  2. A good idea would be to set up your Elasticsearch password as a secret
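
One way to do that is the Logstash keystore; a minimal sketch (ES_PWD is just an example key name, reference it later as ${ES_PWD} in the elasticsearch output):

    cd /usr/share/logstash
    sudo bin/logstash-keystore create
    sudo bin/logstash-keystore add ES_PWD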

  3. Logstash Hostname OPTIONAL

Add HOSTNAME="myhostname" to /etc/default/logstash when running logstash as a service

echo HOSTNAME=\""$HOSTNAME"\" | sudo tee  -a /etc/default/logstash

It is very useful if you run several logstash instances.

  1. Install the tld filter plugin (you should also do this every time you upgrade your Logstash version)
    cd /usr/share/logstash
    sudo bin/logstash-plugin install logstash-filter-tld
  1. Copy pipelines.yml to your logstash folder.
  2. Copy conf.d content to your conf.d folder.
  3. Start logstash
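
Assuming a standard package install with configuration under /etc/logstash (the paths are assumptions, adjust to your layout), the last three steps could look like:

    sudo cp pipelines.yml /etc/logstash/
    sudo cp conf.d/*.conf /etc/logstash/conf.d/
    sudo systemctl start logstash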

Hopefully you should be dancing with your logs by now. 🕺💃

Pipelines sequence

The overall pipeline flow is as follows:

graph LR;
    fortimail-input-kv-->fortimail_2_ecs;
    fortiweb-input-kv-->fortiweb_2_ecs;
    fortigate-input-kv-->fortigate_2_ecs;
    forticlient-input-kv-->forticlient_2_ecs;
    fortisandbox-input-kv-->fortisandbox_2_ecs;
    fortimail_2_ecs-->common_ecs-output;
    fortiweb_2_ecs-->common_ecs-output;
    fortigate_2_ecs-->common_ecs-output;
    forticlient_2_ecs-->common_ecs-output;
    fortisandbox_2_ecs-->common_ecs-output;

Input Syslog / KV

Receives syslog logs and populates the data_stream fields depending on the UDP port. You can also uncomment the FortiAnalyzer tags if you are using it for syslog forwarding. FortiAnalyzer stamps its own date format onto the log, so it needs to be treated differently.

Splits the original log into key-value pairs and sets the timestamp. The timezone is also obtained from the log itself on FortiOS v6.2+.
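
A stripped-down sketch of what this stage does (not the actual pipeline shipped in conf.d, just the idea; the port and data_stream values are examples):

input {
  udp {
    port => 5140
    add_field => {
      "[data_stream][type]"      => "logs"
      "[data_stream][dataset]"   => "fortinet.fortigate"
      "[data_stream][namespace]" => "default"
    }
  }
}
filter {
  # split the raw Fortigate syslog line into key=value pairs
  kv {
    source => "message"
    value_split => "="
  }
}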

Fortigate 2 ECS

  • Validates nulls on IP fields. Fortinet loves to fill null fields with "N/A", which turns into ingestion errors if your field has an ip mapping. We could also do it with grok; a minimal sketch of the idea follows this list.
  • Renames Fortigate fields that overlap with ECS. In the future this will be done at the kv filter stage, to be more ECS compliant.
  • Translates Fortigate fields to ECS by type of log.
  • Introduces network.protocol_category, used by dashboard controls. Mega useful!!!
  • Populates other ECS fields based on ECS recommendations, like event.kind, event.category, event.type.
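
A minimal sketch of that null clean-up idea (the field names are just examples, the real pipeline covers many more fields):

filter {
  # Fortigate fills some IP-typed fields with "N/A"; remove them so they
  # do not break the ip mapping at index time
  if [srcip] == "N/A" {
    mutate { remove_field => [ "srcip" ] }
  }
  if [dstip] == "N/A" {
    mutate { remove_field => [ "dstip" ] }
  }
}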

Common ECS / Output

Populates several ECS fields based on other present fields.

  • *.locality for private, loopback, link-local, multicast and public addresses. These are not official ECS fields. (A minimal sketch of the idea follows this list.)
  • Geolocation of IPs. 🌎
  • related.ip and related.user.
  • network.bytes and network.packets.
  • event.duration. ⌛
  • event.hour_of_day and event.day_of_week. These fields are not ECS official fields.
  • Calculates network.community_id, just for tcp/udp.
  • Registered domain.
  • Url parsing.
  • user_agent.*.
  • network.transport.
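
As an illustration of the locality idea, a cidr-based sketch for the source side (the actual pipeline also handles loopback, link-local, multicast and the destination side):

filter {
  # tag RFC1918 addresses as private
  cidr {
    address => [ "%{[source][ip]}" ]
    network => [ "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
    add_field => { "[source][locality]" => "private" }
  }
  # anything with an IP that was not matched above is treated as public
  if [source][ip] and ![source][locality] {
    mutate { add_field => { "[source][locality]" => "public" } }
  }
}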

Output is based on the index strategy, which is crucial for large ingestion cases 🤯. On Fortigate, data streams are split by type.

In our experience, type=traffic generates lots of logs, while type=event very few. Even inside type=traffic, most of your logs may have action=denied, so you may want to split them even further. Splitting into several data streams allows you to assign different ILM policies and also makes searching faster.
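
Conceptually, the split is as simple as deriving data_stream.dataset from Fortigate's type field, roughly like this (a sketch of the idea, not necessarily the exact code in the repo):

filter {
  # type is "traffic", "utm" or "event" on Fortigate logs
  if [type] {
    mutate {
      replace => { "[data_stream][dataset]" => "fortinet.fortigate.%{type}" }
    }
  }
}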

Elasticsearch has a "static" approach to data stream definition, because we have to somehow map our strategy onto the data stream fields. If we wanted to add a couple of fields to our splitting decision, like action and source.locality, we would need to insert those fields into data_stream.dataset, which might impact the index template and ILM. Surely, if we want to benefit from faster searching on those fields, we would need to change their mapping to constant_keyword. We don't know in advance how our data is distributed, and even if we knew, that might change in the future. On the other hand, Grafana Loki provides a more "flexible" approach: we just index the fields that should "split" our data, and they are treated as labels. This looks really cool and we will be exploring Grafana Loki in the near future.

Dashboards

We have tried to follow Fortigate's Logs & Report section. The main objective of these dashboards is to do threat hunting by checking some critical KPIs in order to spot anomalies in them.

We have migrated everything to Lens now, which has helped a lot with performance, but it is still highly recommended to fine-tune the dashboards to the info relevant to your needs. There are a lot of visualizations on each dashboard, so keep in mind that performance can be impacted (slow loading times or even failures when loading).

Structure

All dashboards are connected via their header structure, making it easy to navigate through them.

header

Dashboards follow a (max) 3 layer structure, going from more general to more specific.

  1. The top level references Fortinet's type field: traffic, utm or event. UTM is already disaggregated so it is easier to go to a specific UTM type, just like in Fortigate's Logs & Report section.

  2. The second level dives into traffic direction (where possible). For example, on the traffic dashboard we have Outbound | Inbound | LAN 2 LAN. It makes total sense to analyze them separately.

  3. The third level refers to which metric we are using for exploring the dataset: we only use sessions and bytes.

  • sessions: we consider each log as a unique session. Be careful if you have connections that go through several firewalls or VDOMs.

  • bytes: we analyze source.bytes and destination.bytes by both sum and average.

    logid=20 introduces duplicate data when doing aggregations (sum of bytes for a particular source.ip). That is why it is filtered out on all dashboards. It is recommended not to drop these logs, as they might be useful for troubleshooting or forensic analysis.

  1. Controls, above the header structure, let you quickly filter your data.

Visualizations

Dashboards have 2 sections:

  1. The upper visualizations show specific fields for the dataset being analyzed. We split the most meaningful fields by action, because that is why you bought a firewall in the first place: to allow/block stuff. upper
  2. The lower visualizations are entity specific: the first row always shows source.ip, destination.ip and network.protocol, which are fields present on all logs. The second row has entities that might be useful in the analysis of that specific dashboard. lower

Datasets and ECS Translations

Datasets

We need your help getting the datasets for versions 6.2 and later. Currently we only have this. Fortinet fields are always "evolving" (changing without any notice or logic), and not all changes get documented in Log Reference releases. Any help taking a Log Reference file and transforming it into a CSV will be welcome. 🆘

Translations Sheets

Once we got the Log Reference guides turned into spreadsheets we could process the data. We had to denormalize data, merge fields, verify field mappings (data types), look for fields that overlap with ECS fields, translate fields to ECS, and generate mapping and pipeline configs.

We plan to consolidate datasets per major version in a single spreadsheet.

Fortigate

FortiOS_6.2.X_Log_Reference - Public

Fortigate logs are an ugly beast, mainly because of their lack of (good) documentation. Although it has been improving, it is still far from coherent. For example, starting from 6.2.1, type "utm" was documented, although it had existed long before.

On top of that, GTP events cause some field mapping mismatches, like:

  • checksum: string | uint32

  • from: ip | string

  • to: ip | string

  • version: string | uint32

As far as we are concerned, GTP is only part of Fortigate Carrier, which is a different product (¿?). How can Fortigate manage a field that has 2 different data types in its internal relational database? How does FortiAnalyzer do it? We have no idea, because we have never seen GTP events in a real scenario. In order to avoid any data type mismatch, GTP events are not considered, and unless you use Fortigate Carrier, you are not going to see them either.

The spreadsheet goes like:

  1. Data 6.2.X is the denormalized data set obtained from the Log Reference Guide of version 6.2.X. This is the raw dataset of every version.

  2. Data has all the datasets combined from the Data 6.2.X sheets. You can look at it as the denormalized version of all datasets of major release 6.2.

  3. On Overlap ECS - Summary of fields, we look for any field in the Fortigate dataset that could overlap with ECS fields. First we consolidate all fields with a dynamic table, and then look them up against root fields of ECS 1.X. For example, the Fortigate dataset has the field agent, which overlaps with the field agent in ECS. If we find an overlap, we propose a rename for the Fortigate field: fortios.agent.

  4. We have decided to slice the full dataset by type, resulting in 3 datasets: traffic, utm and event. Each of them has its own translation. So, on sheets Summary of "traffic type" fields, Summary of "utm type" fields and Summary of "event type" fields we consolidate the fields of each dataset independently.

  5. On ECS Translation of "XXX type" fields is where the magic happens. Based on our criteria, we translate to ECS the fields we consider can fit.

  6. On logstash - XXX we consolidate the translated fields of previous sheets and generate logstash code.

  7. On fortigate mapping we filter all Fortigate fields that are not strings and, based on their type, generate the mapping. The template we use considers keyword as the default mapping, which is why we only explicitly define non-keyword fields.

Translation is where we need more help from the community!!! Any suggestions are welcome.

Fortisandbox

FortiSandbox - Log Reference v3.1.2 - Public

Not updated in a while 😕

Fortiweb

FortiWeb_6.2.0_Log_Reference - Public

Not updated in a while 😕

Limitations

We have not tested it yet on FortiOS v7+

Roadmap 🛣️

We can divide the whole project into these phases:

Ingestion

  • Parsing (DONE ✅)
  • ECS normalization (DONE ✅)
  • Common enrichements (GeoIP, network.community_id, etc.) (DONE ✅)
  • Fortigate v7 support, especially the syslog RFC 5424 format. (WIP 🏗)
  • Palo Alto support (WIP 🏗)
  • Asset Enrichment: Fortigate can map user identity inside the logs, but that is not enough. We need to map network functionality, asset risk and group. For example, we might have a critical application with several web servers, several backends and a distributed DB; we need to know in our logs that all those IPs belong to the same application, which is a critical asset for the business. If an alert gets generated, we can treat it with more priority. 🧠
  • IoC enrichment: IoC enrichment in general has 2 sides: enriching a log that is currently being ingested, and enriching a log that has already been ingested. Both approaches are needed, and both have very different challenges. For the first one, we can tweak the geoip filter for that purpose, like these guys do for assets.
  • Explore other ingest options: Kafka and Rsyslog

Ingestion should be about fortifying raw logs as much as possible, so we can have more inputs for doing analysis.

Platform

One of the benefits of FortiDragon is that we are not limited to Elastic; we can use any tool we would like. Although we love ELK, there are some other tools that can be used in specific areas, such as Jupyter Notebooks or Google Colab for ML analytics, or Neo4j for graph analysis.

In the near future, we would like to integrate Loki/Grafana. Logstash already has a Loki output plugin, so it should not be very difficult to start testing it. We want to explore other visualization and alerting options.

graph LR;
    Fortigate-->Logstash;
    Palo_Alto-->Logstash;
    Logstash-->Elasticsearch;
    Logstash-->Loki;
    Elasticsearch-->Kibana;
    Loki-->Grafana;

Analytics 🧠🧠🧠

We got our super enriched logs 🦾, now what 😕?? Well, firewall logs are just firewall logs: lots of them, very little value in each of them. We need to pivot from just logs to datapoints, by defining entities and features about the behavior of those entities.

Traditionally we have used the 5-tuple concept to identify connections, meaning the components of the tuple are the important entities of any given connection. However, with next generation application firewalls we now have more relevant information, like application or user. We can define a "new" 5-tuple, composed of source.ip, source.user, destination.ip, service (destination port + protocol) and application. These are the entities in the NGFW world that are present on all connections, and we should analyze their KPIs and the interaction between them. 💡💡💡

For example, let's say that in a period of 1 hour, we see an IP that has had connections to a thousand different DNS servers. That is really weird, right?

What have we just done? We have defined source.ip as our entity, and we have defined unique destination.ip on UDP/53 over 1 hour as a feature of that entity, transforming all those 1k logs into a single document. We can define many features that are relevant to take a look at. Those features are what we call Key Security Indicators (KSIs), and they will be the foundation for making sense out of network logs. Once we have KSIs for our entities, we can profile them just by checking how these KSIs evolve over time, or in comparison to other entities in our infrastructure. We should use Elasticsearch Transforms and ML for this purpose.
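
A hedged sketch of that DNS fan-out feature as an Elasticsearch transform (the transform id, index names, thresholds and intervals are placeholders, not something shipped with FortiDragon):

PUT _transform/ksi-dns-fanout
{
  "source": {
    "index": "logs-fortinet.fortigate.traffic*",
    "query": { "term": { "destination.port": 53 } }
  },
  "dest": { "index": "ksi-dns-fanout" },
  "pivot": {
    "group_by": {
      "source.ip": { "terms": { "field": "source.ip" } },
      "hour": { "date_histogram": { "field": "@timestamp", "calendar_interval": "1h" } }
    },
    "aggregations": {
      "unique_dns_servers": { "cardinality": { "field": "destination.ip" } }
    }
  },
  "frequency": "10m",
  "sync": { "time": { "field": "@timestamp", "delay": "60s" } }
}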

Another particular topic that has always made me wonder is P(A-->B), meaning the probability of A talking to B. We already have all the connections that are running on the network, so obtaining that probability should be trivial. More complex to calculate would be P(A-->B-->C), the probability of A talking to C through B. What we are trying to get at is the relations that the different assets of our network have. If we mix that with the KSIs of every individual asset, we can have a very powerful analysis. This seems particularly useful for lateral movement and beaconing. For such analytics we need a graph database like Neo4j.

Visualization

  • More dashboards: SD-WAN, traffic shaping, consolidated risk score, etc.
  • Vega visualizations.
  • Canvas for reports and C-level presentations. 🖌
  • Grafana

Authors

Logstash pipelines and Elasticsearch config @hoat23 and @enotspe 🐉

Dataset analysis and Kibana @enotspe 🐉

Current maintenance @enotspe 🐉

fortinet-2-elasticsearch's People

Contributors

cyb3rsn0rlax, enotspe, hoat23


fortinet-2-elasticsearch's Issues

Filebeat integration

Hello,

Is it possible to skip Logstash and use the new Filebeat Fortinet module?

Regards

Question: FortiAnalyzers?

Can this deployment pull from FortiAnalyzer syslog feeds or is this only possible from a FortiGate?

We have multiple FortiGates all connected to a FortiAnalyzer, so we'd like to run a single feed for all gateways.

Issue with Creating Transforms

We are seeing an issue with the ELK Stack and Creating the Transforms,

Using the Put Command, I have tried to load the first Transform via Dev console and it yields the following:

{
"error": {
"root_cause": [
{
"type": "validation_exception",
"reason": "Validation Failed: 1: Failed to test query, received status: BAD_REQUEST;"
}
],
"type": "validation_exception",
"reason": "Validation Failed: 1: Failed to test query, received status: BAD_REQUEST;",
"caused_by": {
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": ".ds-logs-fortinet.fortigate.traffic-default-2023.11.10-000034",
"node": "tGJaL1oXRjepu80fEqDmsQ",
"reason": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on [fgt.srchwvendor] in [.ds-logs-fortinet.fortigate.traffic-default-2023.11.10-000034]. Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [fgt.srchwvendor] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
}
}
],
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on [fgt.srchwvendor] in [.ds-logs-fortinet.fortigate.traffic-default-2023.11.10-000034]. Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [fgt.srchwvendor] in order to load field data by uninverting the inverted index. Note that this can use significant memory.",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on [fgt.srchwvendor] in [.ds-logs-fortinet.fortigate.traffic-default-2023.11.10-000034]. Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [fgt.srchwvendor] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
}
}
}
},
"status": 400
}

kibana index patterns

When I first imported the Kibana index patterns they looked like this

image

I think this might be some misconfiguration, either on my behalf or in the files provided...

PS: this is regarding the ecs-fortigate-*

Can someone give me a hint of what could've gone wrong?

[LogStash] Failed to parse field [dns.resolved_ip] of type [ip]

Hello,

I am using the plugin filter mutate {copy =>{ "[ipaddr]"=> "[dns][resolved_ip]" } } and ipaddr contains multiple values ("127.0.0.1, 192.168.0.3, 192.168.0.4") and when I try to ingest this into ElasticSearch the field dns.resolved_ip is an IP so the error I am receiving is:
"status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [dns.resolved_ip] of type [ip] in document with id 'w3ZAWnEBlAHVcZpD_2dx'. Preview of field's value: '127.0.0.1, 192.168.0.3, 192.168.0.4'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"'127.0.0.1, 192.168.0.3, 192.168.0.4' is not an IP string literal."

I think the ipaddr value needs to be parsed to break out the values into an array so the end result looks like this:
"dns": {
"resolved_ip": [
"127.0.0.1",
"192.168.0.3",
"192.168.0.4"
],

Instead of
"dns": {
"resolved_ip": [
"127.0.0.1, 192.168.0.3, 192.168.0.4"
],

Which is not an IP address but just a string of text.

Let me know if you need any clarifications. I am working on a solution to parse the ipaddr data but if you already have one, please provide!

Thanks!

Questions - Fortigate - SysLog - CEF?

Hi,

First of all, this looks really good and appreciate all the efforts you have gone through to make it this far! I am very interested to ingest Fortinet logs into ElasticSearch.

1st Question: Do you have a slack, gitter, or other project communication channel to ask questions like these and help with the project?

2nd Question: Does the input for the SysLog need to be the regular format or in CEF? I have started down this path with FileBeat and CEF to ingest all the CEF fields to something but wasn't sure how this project was going about that.

3rd Question: Is there any other documentation to get this setup? I can fumble through and pick different pieces and parts but I wanted to make sure there wasn't anything available that made this process easier.

Thanks and keep up the good work!

[Question] Pipeline.yml example

Hello,

Can you add a pipeline.yml example on how to use different modules in logstash ?
For now all my files are on the same folder and I use the following pipeline :

  • pipeline.id: fortigate
    path.config: "/etc/logstash/conf.d/fortigate/*.conf"

But I don't understand how you use pipeline address ?

Logstash Error on field data

Is it really necessary to change the mappings for them to be "Field Data: true"??
Why aren't the dashboards using the keywords?
image

[Fortinet] Could not index event to Elasticsearch. [sentdelta] value out of range

I have noticed that once in a while an event won't get ingested because the [sentdelta] field contains a value such as 18446744073706429550, which is larger than the long type allows in Elastic. Is there a way to handle big integers in Elastic? Otherwise I might just make this a text field or change the mapping settings to let it index anyway. How has anyone else managed this?

Can't import dashboards

I've downloaded the raw NDJSON files, and when going into the UI - Stack Management - Kibana - Saved Objects --> Import, I get "Sorry, there was an error. The file could not be processed". I've tried it with all 4 of the NDJSON files on the github repository. I'm coming from Palo Alto to Fortinet, so I'd love to have pre-filled in dashboards that I can look at and tear apart to see how it works.

Any suggestions?

Thank you

:edit: for clarification - I'm on 7.9.0 for the whole ELK stack.

Logstash Shard error

Hello,

I receive data and it is ingested and processed fine. But when a new day starts, my ingestion gives this error:

Validation Failed: 1: this action would add [2] shards, but this cluster currently has [999]/[1000] maximum normal shards open

Any idea?


I can't discover Fortigate log

Hi, I'm a newbie to the ELK Stack. I'm working now with FortiManager & FortiAnalyzer, and I'm curious to try FortiDragon :). I've started with the Bitnami ELK stack distribution. I've followed your guide step by step, I stopped and uninstalled Filebeat, I see the syslog traffic incoming on the UDP port but I don't discover any Fortinet log data stream... Any idea??

Thanks for your help... If you need to see some configuration files or logs I can show you, it's a lab environment.

[Fortigate] UTM Virus url.path should be url.full

Fortios.url contains the full URL when the type is utm with the subtype of virus.

It seems that this is the only subtype of UTM that does this instead of just putting in the path.

With the current way the pipelines are laid out in the project, I don't think we can inject an if / else if into the copy statements.

I re-engineered the pipelines to use if [field] {mutate {copy ...}} for all of them for more granular control.

This was my solution to ensure that the UTM Virus logs would put the full url into url.full:

	if [subtype] =="virus" {
            mutate { copy =>{ "[fortios][url]"=> "[url][full]" }}
        }
        else if [fortios][url] {
            mutate { copy =>{ "[fortios][url]"=> "[url][path]" }}
        }

The full section would look like this, but not sure if you want a PR to make this large of a change if you have a better way of handling this logic.

    if [type] == "traffic" {
            if [app] {mutate { copy => { "[app]"=> "[network][application]" }}}
            if [collectedemail] {mutate { copy => { "[collectedemail]"=> "[source][user][email]" }}}
            if [comment] {mutate { copy => { "[comment]"=> "[rule][description]" }}}
            if [dstcollectedemail] {mutate { copy => { "[dstcollectedemail]"=> "[destination][user][email]" }}}
            if [dstintf] {mutate { copy => { "[dstintf]"=> "[observer][egress][interface][name]" }}}
            if [dstintfrole] {mutate { copy => { "[dstintfrole]"=> "[observer][egress][interface][role]" }}}
            if [dstip] {mutate { copy => { "[dstip]"=> "[destination][ip]" }}}
            if [dstmac] {mutate { copy => { "[dstmac]"=> "[destination][mac]" }}}
            if [dstname] {mutate { copy => { "[dstname]"=> "[destination][address]" }}}
            if [dstport] {mutate { copy => { "[dstport]"=> "[destination][port]" }}}
            if [duration] {mutate { copy => { "[duration]"=> "[event][duration]" }}}
            if [group] {mutate { copy => { "[group]"=> "[source][user][group][name]" }}}
            if [msg] {mutate { copy => { "[msg]"=> "[message]" }}}
            if [policyid] {mutate { copy => { "[policyid]"=> "[rule][id]" }}}
            if [policyname] {mutate { copy => { "[policyname]"=> "[rule][name]" }}}
            if [policytype] {mutate { copy => { "[policytype]"=> "[rule][ruleset]" }}}
            if [poluuid] {mutate { copy => { "[poluuid]"=> "[rule][uuid]" }}}
            if [proto] {mutate { copy => { "[proto]"=> "[network][iana_number]" }}}
            if [rcvdbyte] {mutate { copy => { "[rcvdbyte]"=> "[destination][bytes]" }}}
            if [rcvdpkt] {mutate { copy => { "[rcvdpkt]"=> "[destination][packets]" }}}
            if [sentbyte] {mutate { copy => { "[sentbyte]"=> "[source][bytes]" }}}
            if [sentpkt] {mutate { copy => { "[sentpkt]"=> "[source][packets]" }}}
            if [fortios][service] {mutate { copy => { "[fortios][service]"=> "[network][protocol]" }}}
            if [sessionid] {mutate { copy => { "[sessionid]"=> "[network][session_id]" }}}
            if [srcdomain] {mutate { copy => { "[srcdomain]"=> "[source][domain]" }}}
            if [srcintf] {mutate { copy => { "[srcintf]"=> "[observer][ingress][interface][name]" }}}
            if [srcintfrole] {mutate { copy => { "[srcintfrole]"=> "[observer][ingress][interface][role]" }}}
            if [srcip] {mutate { copy => { "[srcip]"=> "[source][ip]" }}}
            if [srcmac] {mutate { copy => { "[srcmac]"=> "[source][mac]" }}}
            if [srcport] {mutate { copy => { "[srcport]"=> "[source][port]" }}}
            if [tranip] {mutate { copy => { "[tranip]"=> "[destination][nat][ip]" }}}
            if [tranport] {mutate { copy => { "[tranport]"=> "[destination][nat][port]" }}}
            if [transip] {mutate { copy => { "[transip]"=> "[source][nat][ip]" }}}
            if [transport] {mutate { copy => { "[transport]"=> "[source][nat][port]" }}}
            if [unauthuser] {mutate { copy => { "[unauthuser]"=> "[source][user][name]" }}}
            if [fortios][url] {mutate { copy => { "[fortios][url]"=> "[url][path]" }}}

            if [dstunauthuser] {mutate { copy => { "[dstunauthuser]"=> "[destination][user][name]" }}}
            if [fortios][user] {mutate { copy => { "[fortios][user]"=> "[source][user][name]" }}}

    # ECS categorization fields
        mutate {
            add_field => { "[event][kind]" => "event" }
            add_field => { "[event][category]" => "network" }
            add_field => { "[event][type]" => "connection" }
        }
        if [action] == "deny" or [utmaction] == "block" {
            mutate { add_field => { "[event][type]" => "denied" } }
        }
        else {
            mutate { add_field => { "[event][type]" => "allowed" } }
        }
        if [action] == "start" {
            mutate { add_field => { "[event][type]" => "start" } }
        }
        else {
            mutate { add_field => { "[event][type]" => "end" } }
        }
        if [action] in [ "dns" , "ip-conn" ] {
            mutate { add_field => { "[event][type]" => "error" } }
        }
        if [network][application] {
            mutate { add_field => { "[event][type]" => "protocol" } }
        }
    }

    # type=dns for version 6.0 and below. On 6.2, dns is subtype of utm

    else if [type] == "utm" or [type] == "dns" {
		if [fortios] {mutate { copy =>{ "[fortios][agent]"=> "[user_agent][original]" }}}
		if [app] {mutate { copy =>{ "[app]"=> "[network][application]" }}}
		if [appcat] {mutate { copy =>{ "[appcat]"=> "[rule][category]" }}}
		if [applist] {mutate { copy =>{ "[applist]"=> "[rule][ruleset]" }}}
		
		if [dir] {mutate { copy =>{ "[dir]"=> "[network][direction]" }}}
		
		if [dst_int] {mutate { copy =>{ "[dst_int]"=> "[observer][egress][interface][name]" }}}
		if [dst_port] {mutate { copy =>{ "[dst_port]"=> "[destination][port]" }}}
		
		if [dstintfrole] {mutate { copy =>{ "[dstintfrole]"=> "[observer][egress][interface][role]" }}}
		if [dstip] {mutate { copy =>{ "[dstip]"=> "[destination][ip]" }}}
		
		if [duration] {mutate { copy =>{ "[duration]"=> "[event][duration]" }}}
		if [fortios][error] {mutate { copy =>{ "[fortios][error]"=> "[error][message]" }}}
		if [errorcode] {mutate { copy =>{ "[errorcode]"=> "[error][code]" }}}
		if [event_id] {mutate { copy =>{ "[event_id]"=> "[event][id]" }}}
		
		if [eventtype] {mutate { copy =>{ "[eventtype]"=> "[event][action]" }}}
		if [filehash] {mutate { copy =>{ "[filehash]"=> "[file][hash][crc32]" }}}
		if [filename] {mutate { copy =>{ "[filename]"=> "[file][name]" }}}
		if [filesize] {mutate { copy =>{ "[filesize]"=> "[file][size]" }}}
		if [filetype] {mutate { copy =>{ "[filetype]"=> "[file][extension]" }}}
		if [fortios][group] {mutate { copy =>{ "[fortios][group]"=> "[source][user][group][name]" }}}
		if [ipaddr]{mutate {split => { "ipaddr" => ", " }}}
		if [ipaddr] {mutate { copy =>{ "[ipaddr]"=> "[dns][resolved_ip]" }}}
		if [msg] {mutate { copy =>{ "[msg]"=> "[message]" }}}
		if [policy_id] {mutate { copy =>{ "[policy_id]"=> "[rule][id]" }}}
		
		if [profile] {mutate { copy =>{ "[profile]"=> "[rule][ruleset]" }}}
		if [proto] {mutate { copy =>{ "[proto]"=> "[network][iana_number]" }}}
		if [qclass] {mutate { copy =>{ "[qclass]"=> "[dns][question][class]" }}}
		if [qname] {mutate { copy =>{ "[qname]"=> "[dns][question][name]" }}}
		if [qtype] {mutate { copy =>{ "[qtype]"=> "[dns][question][type]" }}}
		if [rcvdbyte] {mutate { copy =>{ "[rcvdbyte]"=> "[destination][bytes]" }}}
		if [reason] {mutate { copy =>{ "[reason]"=> "[event][reason]" }}}
		
		if [sentbyte] {mutate { copy =>{ "[sentbyte]"=> "[source][bytes]" }}}
		if [fortios][service] {mutate { copy =>{ "[fortios][service]"=> "[network][protocol]" }}}
		if [session_id] {mutate { copy =>{ "[session_id]"=> "[network][session_id]" }}}
		
		if [src_int] {mutate { copy =>{ "[src_int]"=> "[observer][ingress][interface][name]" }}}
		if [src_port] {mutate { copy =>{ "[src_port]"=> "[source][port]" }}}
		if [srcdomain] {mutate { copy =>{ "[srcdomain]"=> "[source][domain]" }}}
		
		if [srcintfrole] {mutate { copy =>{ "[srcintfrole]"=> "[observer][ingress][interface][role]" }}}
		if [srcip] {mutate { copy =>{ "[srcip]"=> "[source][ip]" }}}
		if [srcmac] {mutate { copy =>{ "[srcmac]"=> "[source][mac]" }}}
		
		if [unauthuser] {mutate { copy =>{ "[unauthuser]"=> "[source][user][name]" }}}

#Inconsistencies in the UTM logging forces us to place the UTM virus URL path into url.full since it contains everything, not just the path.
		if [subtype] =="virus" {
            mutate { copy =>{ "[fortios][url]"=> "[url][full]" }}
        }
        else if [fortios][url] {
            mutate { copy =>{ "[fortios][url]"=> "[url][path]" }}
        }
		
		if [vrf] {mutate { copy =>{ "[vrf]"=> "[network][vrf]" }}}
		if [xid] {mutate { copy =>{ "[xid]"=> "[dns][id]" }}}
		if [hostname] {mutate { copy =>{ "[hostname]"=> "[url][domain]" }}}


		if [catdesc] {mutate { copy =>{ "[catdesc]"=> "[rule][category]" }}}
		if [direction] {mutate { copy =>{ "[direction]"=> "[network][direction]" }}}
		if [dstintf] {mutate { copy =>{ "[dstintf]"=> "[observer][egress][interface][name]" }}}
		if [eventid] {mutate { copy =>{ "[eventid]"=> "[event][id]" }}}
		if [locip] {mutate { copy =>{ "[locip]"=> "[source][ip]" }}}
		if [locport] {mutate { copy =>{ "[locport]"=> "[source][port]" }}}
		if [policyid] {mutate { copy =>{ "[policyid]"=> "[rule][id]" }}}
		if [sessionid] {mutate { copy =>{ "[sessionid]"=> "[network][session_id]" }}}
		if [srcintf] {mutate { copy =>{ "[srcintf]"=> "[observer][ingress][interface][name]" }}}
		if [fortios][user] {mutate { copy =>{ "[fortios][user]"=> "[source][user][name]" }}}
		if [remip] {mutate { copy =>{ "[remip]"=> "[destination][ip]" }}}
		if [remport] {mutate { copy =>{ "[remport]"=> "[destination][port]" }}}


		if [dstport] {mutate { copy =>{ "[dstport]" => "[destination][port]" }}}
		if [srcport] {mutate { copy =>{ "[srcport]" => "[source][port]" }}}

    }
    else if [type] == "event" {

		if [fortios][agent] {mutate { copy =>{ "[fortios][agent]"=> "[user_agent][original]" }}}
		if [daddr] {mutate { copy =>{ "[daddr]"=> "[destination][address]" }}}

		if [direction] {mutate { copy =>{ "[direction]"=> "[network][direction]" }}}
		if [dstip] {mutate { copy =>{ "[dstip]"=> "[destination][ip]" }}}
		if [dstport] {mutate { copy =>{ "[dstport]"=> "[destination][port]" }}}
		if [duration] {mutate { copy =>{ "[duration]"=> "[event][duration]" }}}
		if [fortios][error] {mutate { copy =>{ "[fortios][error]"=> "[error][message]" }}}
		if [error_num] {mutate { copy =>{ "[error_num]"=> "[error][code]" }}}

		if [fortios][file] {mutate { copy =>{ "[fortios][file]"=> "[file][name]" }}}
		if [filesize] {mutate { copy =>{ "[filesize]"=> "[file][size]" }}}
		if [fortios][group] {mutate { copy =>{ "[fortios][group]"=> "[user][group][name]" }}}
		if [hostname] {mutate { copy =>{ "[hostname]"=> "[url][domain]" }}}

		if [msg] {mutate { copy =>{ "[msg]"=> "[message]" }}}
		if [policyid] {mutate { copy =>{ "[policyid]"=> "[rule][id]" }}}
		if [proto] {mutate { copy =>{ "[proto]"=> "[network][iana_number]" }}}
		if [rcvdbyte] {mutate { copy =>{ "[rcvdbyte]"=> "[destination][bytes]" }}}
		if [saddr] {mutate { copy =>{ "[saddr]"=> "[source][address]" }}}
		if [sentbyte] {mutate { copy =>{ "[sentbyte]"=> "[source][bytes]" }}}
		if [fortios][service] {mutate { copy =>{ "[fortios][service]"=> "[network][protocol]" }}}
		if [sess_duration] {mutate { copy =>{ "[sess_duration]"=> "[event][duration]" }}}
		if [source_mac] {mutate { copy =>{ "[source_mac]"=> "[source][mac]" }}}

		if [fortios][user] {mutate { copy =>{ "[fortios][user]"=> "[user][name]" }}}
		if [fortios][url] {mutate { copy =>{ "[fortios][url]"=> "[url][path]" }}}

		if [dst_host] {mutate { copy =>{ "[dst_host]"=> "[destination][address]" }}}

		if [srcmac] {mutate { copy =>{ "[srcmac]"=> "[source][mac]" }}}
		if [srcport] {mutate { copy =>{ "[srcport]"=> "[source][port]" }}}

		if [srcip] {mutate { copy =>{ "[srcip]"=> "[source][ip]" }}}

    }

Ruby exception occurred: can't convert Array into an exact number

Hi @enotspe

I got error in ruby filter when using config

  1. syslog-fortinet-fortigate-input-kv.conf or syslog-fortinet-fortigate-input5424-kv.conf
  2. syslog-fortinet-fortigate_2_ecsv2.conf
  3. syslog-fortinet-common_ecs-output.conf

I'm not change any config, only change port input. and output ES.

[ERROR][logstash.filters.ruby ][main][a87af76ae105d59b87fe27c4e7659d1c6cc7ec07a265cd75c57200456445fbc9] Ruby exception occurred: can't convert Array into an exact number {:class=>"TypeError", :backtrace=>["org/jruby/RubyTime.java:510:in localtime'", "(ruby filter code):5:in block in filter_method'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-ruby-3.1.8/lib/logstash/filters/ruby.rb:96:in inline_script'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-ruby-3.1.8/lib/logstash/filters/ruby.rb:89:in filter'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:159:in do_filter'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:178:in block in multi_filter'", "org/jruby/RubyArray.java:1821:in each'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:175:in multi_filter'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:134:in multi_filter'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:300:in block in start_workers'"]}

can you reproduce for this case?

Thank you

checking installation

Hello
How to check that everything right with installation?
I import all dashboards to kibana and
make put requests to import templates for ecs-* and -fortigate-
configure syslog-ng to transfer data to elastic with names that are expected from dashboards
this is part of json message that generates syslog-ng

{"url":{"full":""},"source":{"user":{"email":""},"port":"62945","packets":"1","nat":{"port":"62945","ip":"185.183.174.41"},"mac":"64:31:50:37:fc:59","ip":"10.9.0.206","domain":"","bytes":"52"},"rule":{"uuid":"540e8a82-28a4-51ea-4ed6-a35f5b0063df","ruleset":"traffic","name":"","id":"23","category":"forward"},"observer":{"serial_number":"FG100D3G12801312","name":"FORTI_111","ingress":{"interface":{"role":"lan","name":"9 VLAN"}},"egress":{"interface":{"role":"wan","name":"wan2"}}},"network":{"iana_number":"6","application":""},"message":"","log":{"level":"notice"},"host":{"vendor":"HP","type":"","os":{"version":"7","name":"Windows","family":""},"name":"","mac":"64:31:50:37:fc:59","ip":"10.9.0.206"},"fortios":{"service":"tcp/29622"}}

but there is nothing in the dashboards, why?
Screenshot_2021-02-11_17-16-10

deployment to my ELK stack

Hi all,
I'm having some trouble understanding how to deploy your configurations to my ELK.
There are features here that I haven't used and I'm kinda losing my way around all the conf files.
any chance someone could provide an explanation as to how to put all the confs, pipelines and mibs on my logstash?
thanks

[Parsing] Error when parsing url on long value

Just detected some parsing issue with this log

<185>date=2020-03-28 time=21:37:11 devname="MASTER_CALLEUNO" devid="FG5H1E5818909999" logid="0419016384" type="utm" subtype="ips" eventtype="signature" level="alert" vd="root" eventtime=1585449431 severity="high" srcip=51.81.126.39 srccountry="United States" dstip=192.168.253.169 srcintf="port1" srcintfrole="wan" dstintf="port2" dstintfrole="lan" sessionid=2060097095 action="dropped" proto=6 service="HTTP" policyid=13 attack="HTTP.URI.SQL.Injection" srcport=58637 dstport=80 hostname="somehostname.com" url="/Miercoles/Portal/MME/descargar.aspx?archivo=A1A44AFA-694A-4264-8F8B-14BA4595D993.PDF AND 1=1 UNION ALL SELECT 1,NULL,'<script>alert("XSS")</script>',table_name FROM information_schema.tables WHERE 2>1--/**/; EXEC xp_cmdshell('cat ../../../etc/passwd')" direction="outgoing" attackid=15621 profile="all_default" ref="http://www.fortinet.com/ids/VID15621" incidentserialno=1846760869 msg="web_misc: HTTP.URI.SQL.Injection," crscore=30 crlevel="high"

The issue is on url="/Miercoles/Portal/MME/descargar.aspx?archivo=A1A44AFA-694A-4264-8F8B-14BA4595D993.PDF AND 1=1 UNION ALL SELECT 1,NULL,'<script>alert("XSS")</script>',table_name FROM information_schema.tables WHERE 2>1--/**/; EXEC xp_cmdshell('cat ../../../etc/passwd')"

it gets parsed like

fortios.url= "/Miercoles/Portal/MME/descargar.aspx?archivo=A1A44AFA-694A-4264-8F8B-14BA4595D993.PDF

I am missing half of the value. I will do some troubleshooting

creating ILM

I'm a newbie in ELK. I don't know how to create an ILM policy, and I also don't know how to import your JSON files.

Remove additional useless fields

Hello,

I have found some additional fields in our Fortinet logs which seem useless and can be removed, like the N/A ones in 40-fortigate_2_ecs:

if [srccountry]=="Reserved" { mutate { remove_field => ["srccountry"] } }
if [dstcountry]=="Reserved" { mutate { remove_field => ["dstcountry"] } }
if [dstdevcategory]=="None" { mutate { remove_field => ["dstdevcategory"] } }
I don't think the Reserved value tells us anything meaningful, so do you think those fields should be removed too?

Best regards.

Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable

Running into an issue with my setup. Some background:

  • Running Logstash 8.10
  • Elasticsearch 8.10
  • Removed logstash-input-twitter
  • Installed logstash-filter-tld --version 3.1.3

Installed everything as per instructions but getting this error:

Nov 14 13:32:17 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:17,417][INFO ][logstash.javapipeline ] Pipeline syslog-fortinet-common_ecs-output is configured with pipeline.ecs_compatibility: v8 setting. All plugins in this pipeline will default to ecs_compatibility => v8 unless explicitly configured otherwise.
Nov 14 13:32:17 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:17,515][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate-input5424-kv][6aab6188921cec0832a0712bc324ef942bf88174229dcfed0e9b06c29785d59a] Attempted to send event to 'syslog-fortinet-fortigate_2_ecsv2' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Nov 14 13:32:17 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:17,555][INFO ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://172.17.7.134:9200"]}
Nov 14 13:32:17 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:17,573][WARN ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] You have enabled encryption but DISABLED certificate verification, to make sure your data is secure set ssl_verification_mode => full
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,125][INFO ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_internal:[email protected]:9200/]}}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,514][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate_2_ecsv2] Pipeline Java execution initialization time {"seconds"=>2.38}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,516][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate-input5424-kv][6aab6188921cec0832a0712bc324ef942bf88174229dcfed0e9b06c29785d59a] Attempted to send event to 'syslog-fortinet-fortigate_2_ecsv2' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,524][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate_2_ecsv2] Pipeline started {"pipeline.id"=>"syslog-fortinet-fortigate_2_ecsv2"}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,825][WARN ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] Restored connection to ES instance {:url=>"https://logstash_internal:[email protected]:9200/"}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,827][INFO ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] Elasticsearch version determined (8.10.4) {:es_version=>8}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,828][WARN ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>8}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,843][WARN ][logstash.filters.grok ][syslog-fortinet-common_ecs-output] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
Nov 14 13:32:19 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:19,807][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][29a6aa27ca7002ac905931a3f66296c9a559f80ec562f0a6bc6cce6e7d356a3a] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Nov 14 13:32:20 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:20,031][INFO ][logstash.filters.geoip.downloadmanager] new database version detected? true
Nov 14 13:32:20 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:20,821][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][29a6aa27ca7002ac905931a3f66296c9a559f80ec562f0a6bc6cce6e7d356a3a] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Nov 14 13:32:21 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:21,822][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][29a6aa27ca7002ac905931a3f66296c9a559f80ec562f0a6bc6cce6e7d356a3a] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.

[Question] Is remove_field useful ?

Hello,

I'm new to logstash, so forgive me if my question is stupid.
But what is the purpose of this line (file 40-fortigate_2_ecs, line 49) :
remove_field => [ "agent", "error", "file", "group", "hash", "host", "interface", "log", "process", "server", "service", "url", "user" ]
The previous rename is replacing the select fields anyway ? No ?

Best regards,

[SUGGESTION] Adding a BLACKLIST pipeline

A small suggestion, if it is aligned with your vision of the project, is to enable people to add bad IPs to their events and modify event.kind to alert once a bad IP is detected, in order to raise it in the SIEM app.
This is especially beneficial when you have multiple FortiXX instances or many other solutions: you can centralize your blacklist and enrich your logs even further in a nice and easy way. I can make a PR if you want.

image

yml files

Hi,

could you publish a sample of the yml files used? (2 entries per file would be great)

Rsyslog Support

Hi, as Logstash is very slow and resource-intensive compared to syslog (which is in the nature of Java vs C), what about supporting Rsyslog with the omelastic method?

regards
tuxinator

[Question] Fortigate Pipelines Mutate Copy

I wanted to bring this up to see if anyone has experienced the issue of some logs not getting ingested.

So I did a packet capture pre logstash and made sure I was getting the syslog data and I could validate that 100%.

What is the expected behavior of the filter pipeline when using mutate copy => and the data doesn't exist to copy? For example if I had:
mutate{
copy =>{ "[src_port]"=> "[source][port]" }
}
and src_port does not exist as a field in my data, does it just carry on without that data copied to the new [source][port] field? I think I am missing logs because I have a lot of mutate copies and sometimes the data doesn't exist in the log, therefore the doc doesn't ever make it to the output. Is that likely?

After I added an if [src_port] { ... } check around the mutate copy as it was before, I finally got the log I was looking for. I still have many other missing logs, but maybe I need to check for every single field to see if it exists prior to the copy to ensure the document can make it to Elastic.

I did not see any errors with docs getting dropped, but to be fair I was using log level info. Any thoughts would be greatly appreciated!

I went ahead and added checks to most of the mutates to see if this increases the log volume and it appeared so. More to come on this.

Rework pipelines to start with numbers

Hi,

I think it is hard to know which pipeline will run and when. I suggest the following architecture for logstash:

10-input.conf
20-enhancement.conf
30-filter.conf
50-dootherstuff.conf
90-output.conf

then it is easier to quickly understand how those pipelines work together.

Fortigate logs to Elastic SIEM App

Hello,
ECS version: 1.5
Elastic stack 7.6.2

I don't know why I didn't find this repo until today, such great work.
I haven't tried this mapping yet since I created my own, but I noticed that if you want this to work with the Elastic SIEM app, to have more complete and centralized visibility together with other firewall/endpoint logs, you would need to change/add a few things:

  • The Timeline Event Renderer won't give you this view :

image

Instead you would have flat events. This is because event.category is set to network; even though that is what is recommended in the documentation, I had to change it to network_traffic to get the event renderer working for my fortigate logs.

More info in this reddit thread here

  • If you want to see top source and destination countries on the Network panel of your Elastic SIEM app, you would need to use the GeoIP processor of an ingest node and not Logstash, since the one Logstash uses is not ECS friendly: it creates country_code1, country_code2... etc. instead of country_iso_code, which is the one inspected by Elastic SIEM. From the Kibana developer panel, create an ingest pipeline:
PUT _ingest/pipeline/geoip-info-fortinet
{
    "description": "Add geoip info",
    "processors": [
        {
            "geoip": {
                "field": "srcip",
                "target_field": "source.geo"
            }
        },
        {
            "geoip": {
                "field": "dstip",
                "target_field": "destination.geo"
            }
        }
    ]
}

Make sure that the node you are creating this on has node.ingest: true

Hope it helps someone and thanks for the great work.

documentation

Hello enotspe,
Is there a documentation? I m not an ELK knowledge master ^^
thanks.
Regards.

Pipeline from is down

Hi could you help to resolve this issue.
I have copied all the templates, dashboard JSONs, pipelines.yml and all the .conf files to my VM, which already has a fresh install of Elasticsearch and Kibana. There is an issue when starting the Logstash service, there is a warning:

logstash[8167]: [2022-11-17T13:20:39,686][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecs][320c3995cf79ebc0724f34bd360b3e9193d7d44220d69c92749327fb9930cde9] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.

I've already modified the output {} and pointed it to my Elasticsearch server, gave it the user and password, and also enabled SSL and pointed the certificate to Elasticsearch's http_ca.crt as well.

Field and Document level security Integration

Hello

I'd like to ask if there are any future plans on the roadmap to integrate Field and Document level security for the data ingested by Fortinet-2-elasticsearch. We have a UTM with many different customer integrations, the only differentiator being the Security Profile names.

I have seen that with some of the dashboards it is possible to filter by these security profiles, or even by the subnets allocated to a customer, but I would like to expand on that and create a "Space" for that customer granting read-only access to the needed indices. I asked the Elasticsearch team how one can prevent a user from seeing data that is not relevant to them, and was shown a screenshot of the role editor's "Grant access to specific fields" option (screenshot omitted here).

I have, however, noticed that this option (Grant access to specific fields) does not exist in our ELK stack. I'd like to clarify whether this is due to the way the data is ingested or due to our subscription level (currently free and open Basic)?

General Info:
Fortigate Version: v7.0.11
ELK Stack Version: 8.12.1
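For reference, field- and document-level security is configured per role, not per ingest pipeline, so it is independent of how FortiDragon ingests the data; it is also a paid-subscription (Platinum-tier) feature, which is the likely reason the option is missing on a free Basic license. A sketch of what the Kibana option maps to in the role API (role name, index pattern, field list and query are made up for illustration):

PUT _security/role/customer_a_readonly
{
  "indices": [
    {
      "names": [ "logs-fortinet.fortigate.*" ],
      "privileges": [ "read", "view_index_metadata" ],
      "field_security": { "grant": [ "@timestamp", "source.*", "destination.*", "rule.*" ] },
      "query": { "term": { "rule.ruleset": "customer-A-profile" } }
    }
  ]
}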

Installation problem: Getting "illegal_argument_exception" error in logstash

Hello,

After following your implementation guide, I am getting the following error in my logstash logs:

{"create"=>{"_index"=>"logs-fortinet.fortigate.traffic,traffic,traffic-default", "_id"=>nil, "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"data_stream [logs-fortinet.fortigate.traffic,traffic,traffic-default] must not contain the following characters ['\\','/','*','?','\"','<','>','|',' ',',']"}}

I am running FortiOS v7.2.4 and a fresh installation of Logstash on Ubuntu.

Am I reading the above error correctly, i.e. Logstash is trying to write to a data stream named "logs-fortinet.fortigate.traffic,traffic,traffic-default"? Any ideas on what I can do to troubleshoot?
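A hedged guess at what is going on (not verified against this exact setup): with data_stream enabled, the elasticsearch output builds the backing data stream name as <type>-<dataset>-<namespace>, and with auto-routing it takes those parts from the event's data_stream.* fields. If [data_stream][dataset] accidentally ends up as an array (here apparently ["fortinet.fortigate.traffic", "traffic", "traffic"], e.g. from the field being set more than once), the values are joined with commas and produce exactly this invalid name. A minimal sketch of a defensive check plus the relevant output settings (hosts are placeholders):

filter {
  # keep only the first value if the dataset field accidentally became an array (sketch)
  ruby {
    code => 'v = event.get("[data_stream][dataset]"); event.set("[data_stream][dataset]", v.first) if v.is_a?(Array)'
  }
}
output {
  elasticsearch {
    hosts                 => ["https://your-es-host:9200"]   # placeholder
    data_stream           => "true"
    data_stream_type      => "logs"
    data_stream_namespace => "default"
  }
}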

Missing config file

In /logstash/pipelines.yml there is a reference to a config file which is not found in /logstash/conf.d:

- pipeline.id: syslog-fortinet-fortigate_2_ecs
  path.config: "/etc/logstash/conf.d/syslog-fortinet-fortigate_2_ecs.conf"

It comes from f7203d4

Could it be this one?

Help with Issues

Hello, first of all congrats on the solution, it's amazing!

I'm trying to deploy it but I get some errors:

[ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:syslog-fortinet-common_ecs-output, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "input", "filter", "output" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:234:in initialize'", "org/logstash/execution/AbstractPipelineExt.java:168:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in initialize'", "org/jruby/RubyClass.java:911:in new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}

[WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][3f0de7dc0df9b79955e21fe0954f4615326ef0e002822839720b6337d266eb85] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.

Can you help me with these errors?

Thanks!
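A hedged note on the first error: "Expected one of [ \t\r\n] ... at line 1, column 1" generally means the file being compiled is empty or is not actually Logstash config syntax (a common cause is saving the GitHub HTML page instead of the raw file). Two quick checks (paths assume a standard package install):

head -n 3 /etc/logstash/conf.d/syslog-fortinet-common_ecs-output.conf   # should start with input/filter/output, not <!DOCTYPE html>
/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

The second warning is then just a consequence: because syslog-fortinet-common_ecs-output never starts, its pipeline-to-pipeline address is unavailable.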

invalid composite mappings for [logs-fortinet.fortigate.event]

When I run

PUT _index_template/logs-fortinet.fortigate.event
{
  "priority": 200,
  "index_patterns": [
    "logs-fortinet.fortigate.event*"
  ],
  "data_stream": {
    "hidden": false,
    "allow_custom_routing": false
  },
  "composed_of": [
    "ecs-base",
    "ecs-user",
    "ecs-user_agent",
    "ecs-observer",
    "ecs-destination",
    "ecs-source",
    "ecs-network",
    "ecs-error",
    "ecs-url",
    "ecs-rule",
    "ecs-data_stream",
    "ecs-organization",
    "ecs-ecs",
    "ecs-host",
    "logs-fortinet.fortigate.event@ilm",
    "strings_as_keyword@mappings",
    "auto_expand_replicas@settings",
    "refresh_interval@settings",
    "logs-fortinet.fortigate@mappings",
    "ecs-log-modified",
    "ecs-event-modified",
    "ecs-file-modified",
    "synthetic_source@mappings"
  ]
}

I get this error:

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "composable template [logs-fortinet.fortigate.event] template after composition with component templates [ecs-base, ecs-user, ecs-user_agent, ecs-observer, ecs-destination, ecs-source, ecs-network, ecs-error, ecs-url, ecs-rule, ecs-data_stream, ecs-organization, ecs-ecs, ecs-host, logs-fortinet.fortigate.event@ilm, strings_as_keyword@mappings, auto_expand_replicas@settings, refresh_interval@settings, logs-fortinet.fortigate@mappings, ecs-log-modified, ecs-event-modified, ecs-file-modified, synthetic_source@mappings] is invalid"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "composable template [logs-fortinet.fortigate.event] template after composition with component templates [ecs-base, ecs-user, ecs-user_agent, ecs-observer, ecs-destination, ecs-source, ecs-network, ecs-error, ecs-url, ecs-rule, ecs-data_stream, ecs-organization, ecs-ecs, ecs-host, logs-fortinet.fortigate.event@ilm, strings_as_keyword@mappings, auto_expand_replicas@settings, refresh_interval@settings, logs-fortinet.fortigate@mappings, ecs-log-modified, ecs-event-modified, ecs-file-modified, synthetic_source@mappings] is invalid",
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "invalid composite mappings for [logs-fortinet.fortigate.event]",
      "caused_by": {
        "type": "illegal_argument_exception",
        "reason": "field [error.stack_trace] of type [wildcard] doesn't support synthetic source"
      }
    }
  },
  "status": 400
}

After I remove

"synthetic_source@mappings"

it succeeds.

My question

Why does the "synthetic_source@mappings" component template give the above error? I checked, and I have already added this component template.

Thank you
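For what it's worth, the root cause in the error message is that error.stack_trace is mapped as a wildcard field, and on the Elasticsearch version in use the wildcard type does not support synthetic _source (newer releases have relaxed this). Besides dropping synthetic_source@mappings, another workaround is to override just that field so the composed mapping stays compatible; a sketch with a made-up template name, to be listed in composed_of after ecs-error so it takes precedence:

PUT _component_template/fortinet-error-stack_trace-override
{
  "template": {
    "mappings": {
      "properties": {
        "error": {
          "properties": {
            "stack_trace": { "type": "keyword", "ignore_above": 1024 }
          }
        }
      }
    }
  }
}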

Output pipeline not working properly

Sorry if I'm mistaken, but from what I'm seeing, the line

output { pipeline { send_to => "drop" } }

isn't working properly. I forwarded a log from a host that isn't in the "host_org.yml" dictionary, and supposedly the order should be

10-input_syslog.conf -> 20-observer_enrichment.conf -> 70-drop.conf -> 80-output.conf

But according to the line
[2020-04-06T12:41:41,278][DEBUG][logstash.util.decorators ][main] filters/LogStash::Filters::Mutate: adding value to field {"field"=>"[ecs][version]", "value"=>["1.2.0"]}
and the debugging I did, the tag is added in the file 21-snmp_cpu_fortigate_2_ecs.conf.

So, from the debugging I've done, instead of going through the pipelines, Logstash is going through the files in file-name order.

DEBUG LOGS: (screenshot omitted)

PS: For the purpose of posting the logs here, I changed the host field to another IP in the 10-input_syslog.conf file with the following line:
add_field => {"host" => "10.0.1.254"}
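A hedged observation on why this can happen: pipeline { send_to => ... } and pipeline { address => ... } only take effect when each .conf file runs as its own pipeline declared in pipelines.yml. If Logstash is started with a single pipeline whose path.config globs the whole conf.d directory (or with -f), all the files are concatenated into one pipeline and every event flows through every filter in file-name order, which matches the behaviour described above. A minimal sketch of the intended pairing:

# 20-observer_enrichment.conf (sketch)
output { pipeline { send_to => ["drop"] } }

# 70-drop.conf (sketch)
input  { pipeline { address => "drop" } }
filter { drop { } }                      # discard everything routed to this address
output { stdout { codec => dots } }      # placeholder output; never reached once events are dropped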

Integration

@enotspe Excellent project. We are reviewing it for an academic project. I am trying to run tests with it; based on the information offered in the installation issue, the architecture corresponds to a local Logstash collector plus Elastic + Kibana Cloud.

I have followed the recommendations, but I still cannot get the collector working with the pipelines (apparently I am missing plugins). Could you please help me with the plugins that must be installed on the collector and the locations of the files (heartbeats, index, ingest, MIBs)?

I really appreciate your time and help.
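As a generic illustration (the exact plugin list depends on the repo's current pipelines, so treat the plugin name below as an example only), missing Logstash plugins are installed with the bundled plugin manager:

/usr/share/logstash/bin/logstash-plugin install logstash-filter-tld
/usr/share/logstash/bin/logstash-plugin list --verbose   # confirm what is already installed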
