falcosecurity / falcosidekick
Connect Falco to your ecosystem
License: Apache License 2.0
Describe the bug
I'm using falcosidekick to send host OS events and Kubernetes audit events to Grafana Loki. If a host OS rule is triggered, the event is sent to Loki as expected. If I trigger a Kubernetes audit event, the push to Loki fails with 400 - Header missing.
Working example for Host Rule:
The example described in the documentation (https://falco.org/docs/event-sources/kubernetes-audit/#example) fails:
2021/02/06 19:26:16 [DEBUG] : Falco's payload : {"output":"20:25:53.706802944: Notice K8s ConfigMap Deleted (user=system:serviceaccount:cattle-system:kontainer-engine configmap=my-config ns=stage resp=200 decision=allow reason=RBAC: allowed by ClusterRoleBinding \"globaladmin-user-r62pf\" of ClusterRole \"cluster-admin\" to User \"user-r62pf\")","priority":"Notice","rule":"K8s ConfigMap Deleted","time":"2021-02-06T19:25:53.706802944Z","output_fields":{"jevt.time":"20:25:53.706802944","ka.auth.decision":"allow","ka.auth.reason":"RBAC: allowed by ClusterRoleBinding \"globaladmin-user-r62pf\" of ClusterRole \"cluster-admin\" to User \"user-r62pf\"","ka.response.code":"200","ka.target.name":"my-config","ka.target.namespace":"stage","ka.user.name":"system:serviceaccount:cattle-system:kontainer-engine","source":"falco"}}
2021/02/06 19:26:16 [DEBUG] : Loki payload : {"streams":[{"labels":"{katargetname=\"my-config\",katargetnamespace=\"stage\",kausername=\"system:serviceaccount:cattle-system:kontainer-engine\",source=\"falco\",jevttime=\"20:25:53.706802944\",kaauthdecision=\"allow\",kaauthreason=\"RBAC: allowed by ClusterRoleBinding \"globaladmin-user-r62pf\" of ClusterRole \"cluster-admin\" to User \"user-r62pf\"\",karesponsecode=\"200\",rule=\"K8s ConfigMap Deleted\",priority=\"Notice\"}","entries":[{"ts":"2021-02-06T19:25:53Z","line":"20:25:53.706802944: Notice K8s ConfigMap Deleted (user=system:serviceaccount:cattle-system:kontainer-engine configmap=my-config ns=stage resp=200 decision=allow reason=RBAC: allowed by ClusterRoleBinding \"globaladmin-user-r62pf\" of ClusterRole \"cluster-admin\" to User \"user-r62pf\")"}]}]}
2021/02/06 19:26:16 [ERROR] : Loki - Header missing (400)
2021/02/06 19:26:16 [ERROR] : Loki - Header missing
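The Loki payload above embeds raw double quotes inside label values (see kaauthreason), which is a plausible cause of the 400: Loki parses the labels field as a stream selector, and unescaped quotes break it. A minimal sketch of one possible workaround, sanitizing label values before building the selector (the helper names are hypothetical, not falcosidekick's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeLokiLabelValue is a hypothetical helper: the audit-event labels
// above embed double quotes (ka.auth.reason), which break the "labels"
// selector string Loki parses. Stripping quotes/backslashes keeps it valid.
func sanitizeLokiLabelValue(v string) string {
	return strings.NewReplacer(`"`, "", `\`, "", "\n", " ").Replace(v)
}

// buildLabels rebuilds the stream selector from sanitized key/value pairs.
func buildLabels(kv map[string]string) string {
	pairs := make([]string, 0, len(kv))
	for k, v := range kv {
		pairs = append(pairs, fmt.Sprintf("%s=%q", k, sanitizeLokiLabelValue(v)))
	}
	return "{" + strings.Join(pairs, ",") + "}"
}

func main() {
	fmt.Println(buildLabels(map[string]string{
		"kaauthreason": `RBAC: allowed by ClusterRoleBinding "globaladmin-user-r62pf"`,
	}))
}
```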
How to reproduce it
Expected behaviour
Sends Kubernetes Audit Events successfully to Loki
Screenshots
See the issue description
Environment
Falco 0.27.0
Driver version: 5c0b863ddade7a45568c0ac97d037422c9efb750
{
"machine": "x86_64",
"nodename": "dev-node-3",
"release": "5.11.0-051100rc6-generic",
"sysname": "Linux",
"version": "#202101312230 SMP Sun Jan 31 22:33:58 UTC 2021"
}
Motivation
Falcosidekick
is used more and more, and even though we review every PR, we have faced some regressions in the code base. To avoid impacting users, we need to implement unit tests for every output, not only the most used ones.
Feature
A good test framework
Unit tests for all outputs
Alternatives
N/A
Additional context
N/A
https://docs.aws.amazon.com/fr_fr/securityhub/1.0/APIReference/Welcome.html
Still in Preview at AWS
Motivation
Google Chat (formerly Google Hangouts Chat) adoption is increasing these days.
Feature
Send notifications, like the Slack and Mattermost outputs do.
Alternatives
Additional context
May I add this feature?
I would like to contribute.
What to document
The OWNERS responsible for approving and reviewing the code changes in falcosidekick.
For sure the first one is @Issif
Would you suggest other maintainers?
I can volunteer for keeping an eye on it when I have some spare time, btw.
Motivation
Falco's latest releases (since 0.18.0) provide a new gRPC API for gathering events.
Feature
Use this gRPC API as source of events.
Alternatives
N/A
Additional context
N/A
What to document
As falcosidekick is getting bigger and bigger, the documentation can't live solely in one long README. I would like to create a dedicated section in the falco.org documentation that will be easier to read. The README here will then only contain a description and links to the corresponding sections of the official documentation.
Motivation
PagerDuty's API documentation states that the Events v2 API should be used by monitoring tools instead of the Rest API.
The REST API provides a way for third parties to connect to a PagerDuty account and access or manipulate configuration data on that account. It is not for connecting your monitoring tools to send events to PagerDuty; for that, use the Events API.
Feature
Replace the existing PagerdutyCreateIncident call with one that sends a POST request to the https://events.pagerduty.com/v2/enqueue endpoint. This would remove the APIKey, Asignee, EscalationPolicy, and Service configuration options, and consolidate alert routing/authentication via a provided routing key (see the API syntax here).
Alternatives
Additional context
I've spiked out an initial changeset. If the proposed change makes sense, I'd be happy to submit a PR.
Motivation
Add an Authorization header for the Webhook output.
Feature
This header is mandatory for some services.
Alternatives
N/A
Additional context
Motivation
In any PaaS/SaaS running Falco, as a cluster-admin I want to see all Falco events, but as a tenant-admin (a client with admin permissions over specific resources) I only want to see Falco events related to my tenant (i.e. events in my k8s namespaces, if we are talking about k8s).
This can be achieved using software like Prometheus, Alertmanager, OpsGenie... but it would be great if Falco didn't need any of these to achieve the same goal.
Feature
Some routing logic could be added to config.yaml so that an alert is sent to a specific output depending on the values of some of its fields. For example:
slack:
webhookurl: "" # Slack WebhookURL (ex: https://hooks.slack.com/services/XXXX/YYYY/ZZZZ), if not empty, Slack output is enabled
outputformat: "all" # all (default), text, fields
minimumpriority: "debug"
messageformat: "Alert : rule *{{ .Rule }}* triggered by user *{{ index .OutputFields \"user.name\" }}*"
filterexpr: '{{ .k8s_ns_name }} == "mytenant" and {{ .Rule }} == "Launch Privileged Container"'
This would trigger the Slack webhook only if filterexpr evaluates to true.
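Independent of the exact expression syntax, the proposed routing can be sketched as matching an event's fields against a per-output filter (all names below are hypothetical, not an existing falcosidekick API):

```go
package main

import "fmt"

// event and matches are hypothetical: an output fires only when every
// key/value pair in its filter matches the event's fields.
type event struct {
	Rule   string
	Fields map[string]string
}

func matches(e event, filter map[string]string) bool {
	for k, want := range filter {
		if e.Fields[k] != want {
			return false
		}
	}
	return true
}

func main() {
	e := event{
		Rule:   "Launch Privileged Container",
		Fields: map[string]string{"k8s.ns.name": "mytenant", "rule": "Launch Privileged Container"},
	}
	fmt.Println(matches(e, map[string]string{"k8s.ns.name": "mytenant", "rule": "Launch Privileged Container"}))
}
```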
Alternatives
You can get the same behaviour with specific monitoring tools like Prometheus and AlertManager.
Additional context
Motivation
I would like to use falcosidekick to expose metrics to Prometheus in the same manner as falco-exporter, which would allow me to use Grafana for visualization as well as create Alertmanager rules based on the Prometheus data (e.g. statistical analysis of event counts), as opposed to relaying Falco event payloads directly to Alertmanager.
The reason I am looking at falcosidekick for this is that it seems to be designed as a sidecar, and it also does not (currently) enforce mTLS to communicate with the Falco gRPC server to acquire the event stream.
Feature
falcosidekick implements a /metrics endpoint with a Prometheus Counter type for events, ideally labelled with the Falco rule, priority, and pod hostname. Summary or Histogram types may be appropriate for different fleet sizes or analyses.
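As a rough illustration of what such an endpoint could serve, here is a stdlib-only sketch of the Prometheus text exposition format with rule/priority labels (a real implementation would more likely use github.com/prometheus/client_golang; the metric name is illustrative):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// renderCounters renders per-(rule, priority) event counts in the Prometheus
// text exposition format; the falco_events_total name is an assumption.
func renderCounters(counts map[[2]string]int) string {
	var b strings.Builder
	b.WriteString("# TYPE falco_events_total counter\n")
	keys := make([][2]string, 0, len(counts))
	for k := range counts {
		keys = append(keys, k)
	}
	// Sort for deterministic output.
	sort.Slice(keys, func(i, j int) bool { return keys[i][0] < keys[j][0] })
	for _, k := range keys {
		fmt.Fprintf(&b, "falco_events_total{rule=%q,priority=%q} %d\n", k[0], k[1], counts[k])
	}
	return b.String()
}

func main() {
	fmt.Print(renderCounters(map[[2]string]int{
		{"Terminal shell in container", "Notice"}: 3,
	}))
}
```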
Alternatives
n/a
Additional context
https://github.com/falcosecurity/falco-exporter/blob/master/pkg/exporter/exporter.go#L37-L47
Describe the bug
Falcosidekick does not send notifications to Slack webhooks when it is installed using Helm.
How to reproduce it
Create a K3s cluster
Install Falco and Falcosidekick using Helm: helm install falco falcosecurity/falco -n falco --set falco.jsonOutput=true --set falco.jsonIncludeOutputProperty=true --set falco.httpOut.enabled=true --set falco.httpOutput.url=http://falcosidekick:2801/ --set falco.programOutput.program="jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/T01k9A05X55/B01KM16MAG5/KCYBsvMPDeHyJ3KhmSeN93nY" --set falcosidekick.enabled=true --set config.slack.webhookurl="https://hooks.slack.com/services/T01k9A05X55/B01KM16MAG5/KCYBsvMPDeHyJ3KhmSeN93nY" --set config.slack.minimumpriority="debug" --set config.debug=true
Spawn a falco pod. Falcosidekick does not send a notification to the Slack workspace.
Expected behaviour
Environment
Additional context
Motivation
Using another logging library like logrus would give us features such as configurable log levels, and we would no longer need to write the log type by hand as we do today.
Since this can be a breaking change for people who are parsing the logs (the format will change a bit), we can make it a major release.
/milestone 3.0.0
Motivation
It would make sense to release the chart so it is available in the Helm Hub. This would allow the project to be used without needing the GitHub project or having to copy the deployment Helm files from this repo.
Feature
Chart available in Helm Hub.
Falcosidekick is now a part of the falcosecurity organization, so we need to migrate the Docker images too. Currently they are on my own account: https://hub.docker.com/r/issif/falcosidekick/.
What would you like to be added:
native Microsoft Teams integration to allow for pushing alerts to the chat platform
Why is this needed:
Currently the program_output example isn't accepted by MS Teams:
[plundering-grasshopper-falco-jfjp9] Invalid webhook request - Empty Payload18:16:22.370929225: Notice A shell was spawned in a container with an attached terminal (user=<NA> k8s.ns=default k8s.pod=edgy-iguana-mariadb-0 container=791fa4a27927 shell=bash parent=docker-runc cmdline=bash terminal=34817 container_id=791fa4a27927 image=bitnami/mariadb) [plundering-grasshopper-falco-jfjp9] parse error: Expected string key before ':' at line 1, column 3
As an office 365 user, I'd like to be able to configure falcosidekick to send emails on my behalf for notifications
Currently, there are no documented config options for enabling TLS or STARTTLS for the SMTP output, which is required for using Office 365's SMTP servers.
I'd like to add an option to enable tls for smtp output
It would be useful to add a new output that uses the alertmanager API (https://prometheus.io/docs/alerting/clients/) to send alerts.
Add influxdb as an available output.
Describe the bug
As documented the message formatting works as expected:
messageformat: "Alert : rule *{{ .Rule }}* triggered by user *{{ index .OutputFields \"user.name\" }}*"
In an attempt to be more dynamic it appears that interpolation completely fails:
messageformat: "*{{ index.OutputFields \"priority\"}}* : rule *{{ .Rule }}* triggered by user *{{ index .OutputFields \"user.name\" }}*"
How to reproduce it
Pass in the above Slack message format and test. Using that format removes all formatted text, and the logging does not indicate a failure.
Expected behaviour
Since we can normally pull selected fields into the message format, I expected that instead of hard-coding "Alert" we could get Warning/Critical/Alert, etc.
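One thing worth checking: in the Falco payload, priority is a top-level field rather than a key inside output_fields (see the debug log of the Loki issue above), so {{ index .OutputFields "priority" }} resolves to nothing; {{ .Priority }} is likely the intended accessor. Note also the missing space between index and .OutputFields in the failing format. A self-contained sketch (the payload struct is a stand-in, not falcosidekick's actual type):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// payload is a minimal stand-in for falcosidekick's event type.
type payload struct {
	Rule         string
	Priority     string
	OutputFields map[string]interface{}
}

// render executes a messageformat template against a payload.
func render(format string, p payload) (string, error) {
	tmpl, err := template.New("msg").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	p := payload{
		Rule:         "K8s ConfigMap Deleted",
		Priority:     "Notice",
		OutputFields: map[string]interface{}{"user.name": "admin"},
	}
	// Priority comes from the top-level field, not from OutputFields:
	out, err := render(`*{{ .Priority }}* : rule *{{ .Rule }}* triggered by user *{{ index .OutputFields "user.name" }}*`, p)
	if err != nil {
		fmt.Println("template error:", err)
		return
	}
	fmt.Println(out)
}
```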
Environment
{
"machine": "x86_64",
"nodename": "qa-kare-falco-xblfq",
"release": "4.14.146-119.123.amzn2.x86_64",
"sysname": "Linux",
"version": "#1 SMP Mon Sep 23 16:58:43 UTC 2019"
}
PRETTY_NAME="Debian GNU/Linux bullseye/sid"
NAME="Debian GNU/Linux"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Linux falco-xblfq 4.14.146-119.123.amzn2.x86_64 #1 SMP Mon Sep 23 16:58:43 UTC 2019 x86_64 GNU/Linux
Falco: Kubernetes Upstream Helm Chart
Sidekick: Kubernetes Upstream Helm Chart
Additional context
Motivation
I'm using Falco to capture events where some of the field values are numbers (network ports), and I would like that information in the Alertmanager alert fields.
Feature
Currently anything that is not a string is ignored; at least for integer values it should be simple enough to convert them to strings and add them to the alert labels.
Assuming this is accepted I'm happy to make a PR for it.
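A hedged sketch of that conversion (the helper name is hypothetical); since Falco's JSON numbers decode to float64 in Go, handling that case covers ports:

```go
package main

import (
	"fmt"
	"strconv"
)

// toLabelValue is a hypothetical helper: Alertmanager labels must be strings,
// so numeric output fields (e.g. network ports) are stringified instead of
// being dropped.
func toLabelValue(v interface{}) (string, bool) {
	switch x := v.(type) {
	case string:
		return x, true
	case int:
		return strconv.Itoa(x), true
	case float64: // JSON numbers decode to float64 in Go
		return strconv.FormatFloat(x, 'f', -1, 64), true
	default:
		return "", false // still skip values we can't represent
	}
}

func main() {
	s, ok := toLabelValue(float64(8080))
	fmt.Println(s, ok)
}
```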
Describe the bug
The Helm chart does not contain a pod security policy, a ClusterRole, or the corresponding binding.
How to reproduce it
Try to deploy it in k8s with PSPs enabled.
Expected behaviour
The Helm chart deploys without errors and the pod starts.
Screenshots
Environment
Falco version:
0.18.0
System info:
Cloud provider or hardware configuration:
AWS
OS:
kops based k8s with falcosidekick container
Kernel:
Installation method:
Additional context
What would you like to be added:
Add NATS as available output.
Why is this needed:
Falcosidekick is now an official way to handle events triggered by Falco; it may be relevant to replace the component used by https://github.com/falcosecurity/kubernetes-response-engine with it.
Can we please have an open-ended call to start planning development on Falcosidekick?
Describe the bug
I'm trying to forward Falco alerts to an Alertmanager that runs an SSL proxy sidecar. This sidecar only allows communication via HTTPS. A custom CA certificate is generated and used by Alertmanager.
When falcosidekick tries to forward an alert to Alertmanager, it throws the following error:
2020/07/29 08:18:40 [ERROR] : AlertManager - Post "https://alertmanager-main.openshift-monitoring.svc:9092/api/v1/alerts": x509: certificate signed by unknown authority
How to reproduce it
Use an Alertmanager with SSL enabled generated via custom CA
Expected behaviour
Falco alert reaches the Alertmanager successfully over HTTPS
Screenshots
Environment
Falco version: 0.24.0
Driver version: 85c88952b018fdbce2464222c3303229f5bfcfad
{
"machine": "x86_64",
"nodename": "ip-10-0-196-186",
"release": "4.18.0-193.12.1.el8_2.x86_64",
"sysname": "Linux",
"version": "#1 SMP Thu Jul 2 15:48:14 UTC 2020"
}
Linux ip-10-0-196-186 4.18.0-193.12.1.el8_2.x86_64 #1 SMP Thu Jul 2 15:48:14 UTC 2020 x86_64 GNU/Linux
Openshift/Kubernetes
Additional context
Currently, if your Falco rule output is fairly long, each Slack alert from falcosidekick spits out a really ugly JSON blob (even though that information is already captured in the Slack event right below).
I think it may make sense to change it to just the rule name, or make it configurable via environment variables.
Falco provides native HTTP output; for now, falcosidekick uses program output + curl. @mfdii proposed that I benchmark both to choose the best option.
Add cloud queues as available outputs.
Examples:
SQS (AWS)
Pub/Sub (Google)
Add email as an available output.
The daemon will not send emails directly but will use the net/smtp package to talk to a remote SMTP server. As with other outputs, configuration will be via env vars.
Motivation
Refactoring proposal for v3.x
Feature
Using the resolver pattern to manage the creation of different kinds of outputs in one place.
Using a generic interface for output clients to handle all outputs the same way.
Alternatives
Additional context
example implementation https://github.com/fjogeleit/policy-reporter/blob/main/pkg/config/resolver.go
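A compact sketch of the idea, assuming hypothetical names (Output, Resolver, and falcoPayload are illustrative, not the project's actual types):

```go
package main

import "fmt"

// falcoPayload is a stand-in for the event type shared by all outputs.
type falcoPayload struct {
	Rule, Priority string
}

// Output is the single generic interface every output client implements.
type Output interface {
	Name() string
	Send(p falcoPayload) error
}

type slackOutput struct{ webhookURL string }

func (s slackOutput) Name() string { return "Slack" }

func (s slackOutput) Send(p falcoPayload) error {
	// A real client would POST to s.webhookURL here.
	return nil
}

// Resolver creates and holds all enabled outputs in one place.
type Resolver struct{ outputs []Output }

func (r *Resolver) Register(o Output) { r.outputs = append(r.outputs, o) }

// Dispatch fans a payload out to every registered output uniformly.
func (r *Resolver) Dispatch(p falcoPayload) {
	for _, o := range r.outputs {
		if err := o.Send(p); err != nil {
			fmt.Printf("[ERROR] : %s - %v\n", o.Name(), err)
		}
	}
}

func main() {
	r := &Resolver{}
	r.Register(slackOutput{webhookURL: "https://hooks.slack.com/services/XXXX"})
	r.Dispatch(falcoPayload{Rule: "Test", Priority: "Notice"})
	fmt.Println(len(r.outputs))
}
```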
Hi !
Thank you all for bringing falco to the community.
I opened a PR on the project and saw that the PR template references a CONTRIBUTING.md that doesn't exist in the repository.
Could it be copied from this : https://github.com/falcosecurity/.github/blob/master/CONTRIBUTING.md ?
Describe the bug
Alerts are not sent, and there are not enough log details about the actual bug. This also suggests that we should export Prometheus metrics about received/sent events and health state, as suggested in #60.
2020/09/03 09:45:14 [ERROR] : AlertManager - Header missing (400)
2020/09/03 09:45:19 [ERROR] : AlertManager - Header missing (400)
2020/09/03 09:47:33 [INFO] : AlertManager - Post OK (200)
2020/09/03 09:47:47 [ERROR] : AlertManager - Header missing (400)
2020/09/03 09:48:50 [ERROR] : AlertManager - Header missing (400)
2020/09/03 09:49:19 [ERROR] : AlertManager - Header missing (400)
Caused by:
level=error ts=2020-09-03T13:47:41.979Z caller=api.go:781 component=api version=v1 msg="API error" err="bad_data: \"proc_aname[2]\" is not a valid label name"
level=error ts=2020-09-03T13:48:14.139Z caller=api.go:781 component=api version=v1 msg="API error" err="bad_data: \"proc_aname[2]\" is not a valid label name"
Will provide MR with fix.
Motivation
In my company, we use PagerDuty as a pager system.
If there is a high-priority detection, I want to get a call.
Feature
Page the PagerDuty team if the priority of the Falco event is higher than the configured minimumpriority.
Alternatives
Additional context
Hi @Issif . Could you please push the 2.10.0 image to docker hub when you have some time?
Thanks.
It is really helpful to have indices that group data on a daily, monthly, or annual basis. Here are some examples:
Daily index: sample-2019.06.24
Monthly index: sample-2019.06
Annual index: sample-2019
Describe the bug
Several outputs are not configured to update metrics for statsd/dogstatsd:
How to reproduce it
Expected behaviour
All outputs should export their metrics
Screenshots
Environment
All releases of falcosidekick are affected.
Additional context
Motivation
Slack attachments are now the legacy way to post rich messages to Slack; the next generation is Block Kit.
Besides being legacy, attachments have the limitation that emojis are displayed literally, which looks ugly (see the :true: emoji below):
We can disable this and show plain text if we use the latest-generation Block Kit.
Feature
Use Slack Block Kit for posting rich messages.
Alternatives
Additional context
If it looks fine to the maintainers, I would be happy to work on this.
Before integrating new outputs, I'm planning some changes (and possible breaking changes) for a v2.0.0. Here's what and why.
Until now, configuration has only been possible through environment variables. I would like to add a config file, as the number of options keeps increasing as I add new outputs. My idea is to use https://github.com/spf13/viper and its ability to handle different configuration methods with the hierarchy "env vars > yaml config file > default values".
For that, I need to change some methods; some outputs have their service endpoints hard-coded. For adding tests, I will add a function to create a Client object with as many modifiable parameters as needed.
Automatic code coverage test will run on https://coveralls.io/.
See #18.
/checkpayload is not useful anymore; the DEBUG config option will print inputs and outputs to stdout.
See #18.
We currently have:
2018/10/11 08:53:25 [INFO] : Outputs configuration : Slack=enabled, Datadog=disabled, Alertmanager=disabled
That will be :
2018/10/11 08:53:25 [INFO] : Enabled Outputs : Slack
2018/10/11 08:53:25 [INFO] : Disabled Outputs : Datadog, Alertmanager
This syntax is more concise.
Add a /stats handler that will return JSON with classic metrics (number of goroutines, heap, etc.) from the expvar package, plus custom ones:
See #18.
refer #15
refer #16
All in the title !
I can do it soon
Motivation
I am a heavy user of GCP and also serverless products(Cloud Functions & Cloud Run).
Many documents use Cloud Pub/Sub + Cloud Functions (Playbook). But we must treat the event messages with care, because Cloud Pub/Sub delivery semantics are at-least-once and there might be duplicated messages.
Therefore, the backend Cloud Functions or the Cloud Run has to handle the duplication and it's not that easy. In many cases, it requires a database to share context between the instances of Cloud Functions or Cloud Run services.
I want Falcosidekick to send an HTTP request with an OAuth token directly to the Cloud Function or Cloud Run service, to make the backend easier to implement.
Feature
Support Cloud Function and Cloud Run. It will use the service account key for authorization.
I want to add the config like below
gcp:
credentials: "" # The base64-encoded JSON key file for the GCP service account
+ functions:
+ webhookurl: "" # The URL of the function
+ # minimumpriority: "debug" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default)
pubsub:
projectid: "" # The GCP Project ID containing the Pub/Sub Topic
topic: "" # The name of the Pub/Sub topic
# minimumpriority: "debug" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default)
+ run:
+ webhookurl: "" # The URL of the Cloud Run service
+ # minimumpriority: "debug" # minimum priority of event for using this output, order is emergency|alert|critical|error|warning|notice|informational|debug or "" (default)
Alternatives
Additional context
The webhook client can't be used because the OAuth token expires and can't be hardcoded.
Motivation
The current library that we are using for Kafka (https://github.com/confluentinc/confluent-kafka-go) introduces a dependency on a C library for all Go code that uses the package, and makes the build a bit complex.
Since we don't use advanced Kafka features and we "just" post a message, maybe we can switch to another lib like
https://github.com/segmentio/kafka-go
The last release added support for StatsD as an output, using this package from Datadog.
It works well with DogStatsD but is not fully functional with more classic implementations of the StatsD protocol. The glitch affects all metrics with tags: tags are a custom feature only available in DogStatsD; classic StatsD doesn't accept them.
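To illustrate the incompatibility, here is a sketch of the two wire formats (metric and tag names are illustrative): DogStatsD appends |#key:value tags, which classic StatsD servers reject, so embedding the tag in the metric name is one portable fallback.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// dogstatsdLine builds a counter line with DogStatsD's tag extension.
func dogstatsdLine(metric string, value int, tags map[string]string) string {
	line := fmt.Sprintf("%s:%d|c", metric, value)
	if len(tags) > 0 {
		parts := make([]string, 0, len(tags))
		for k, v := range tags {
			parts = append(parts, k+":"+v)
		}
		sort.Strings(parts) // deterministic order for the sketch
		line += "|#" + strings.Join(parts, ",")
	}
	return line
}

// statsdLine embeds the tag in the metric name instead, which any classic
// StatsD implementation accepts.
func statsdLine(metric, tag string, value int) string {
	return fmt.Sprintf("%s.%s:%d|c", metric, tag, value)
}

func main() {
	tags := map[string]string{"destination": "slack"}
	fmt.Println(dogstatsdLine("falcosidekick.outputs", 1, tags))
	fmt.Println(statsdLine("falcosidekick.outputs", "slack", 1))
}
```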
cc @actgardner
Add an endpoint /test to check communication with the enabled outputs.