
falco-trace's Introduction

Falco Trace

A container image for running Falco with ptrace(2) for kernel events.

This repository is designed to bootstrap running Falco with pdig.

Given the way this project uses Falco, we are able to run Falco with:

  • NO Linux Kernel headers required
  • NO Compiling / Downloading a kernel module
  • NO BPF probe
  • Falco running as a daemon with logs going to STDOUT
  • Falco running against a process

Running bash with Falco and pdig in a container

docker run -it -p 443:443 krisnova/falco-trace:latest /bin/bash
falco -u --pidfile /var/run/falco.pid --daemon
pdig -a /bin/bash
  # Do nasty things here
  cat /etc/shadow
  touch /usr/bin/scary
  exit
cat /var/log/falco.log
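
Note: depending on your Docker version, the default seccomp profile may block ptrace(2) inside the container. If pdig fails with a ptrace permission error, a hedged workaround is to grant the capability explicitly (SYS_PTRACE is the same capability the Fargate task definition below adds):

docker run -it --cap-add SYS_PTRACE -p 443:443 krisnova/falco-trace:latest /bin/bash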

Kubernetes

You can run Falco in Kubernetes without needing to escalate privileges or manage a kernel module.

kubectl run falco --image krisnova/falco-trace:latest
kubectl logs falco -f
kubectl delete po falco

You can also run a vulnerable server in Kubernetes to show Falco working:

kubectl run vs --image krisnova/falco-trace-vulnerableserver:latest --expose --port 443
sudo kubectl port-forward svc/vs 443:443
nc -nv 127.0.0.1 443
cat /etc/shadow
exit
kubectl logs falco -f

SSH

You can run the SSH image for easy backend access to a container via SSH.

docker run -p 1313:22 krisnova/falco-trace-ssh:latest

Then from another shell

ssh <user>@127.0.0.1 -p 1313
password: falco

In Fargate, just use the following container image:

registry.hub.docker.com/krisnova/falco-trace-ssh:latest

Vulnerable Server Application

You can run the vulnerable server image to start a deliberately vulnerable web server that hands out a remote shell, letting you simulate an attacker.

docker run -p 443:443 krisnova/falco-trace-vulnerableserver:latest 

In another shell you can "hack" into the server using the following command:

ncat -nv 127.0.0.1 443

In Fargate, use the following container image and connect to the task's public IP:

registry.hub.docker.com/krisnova/falco-trace-vulnerableserver:latest

Building the container image

docker build -t yourorg/falco-trace:latest .
docker push yourorg/falco-trace:latest
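
If you plan to run your image on ECS/Fargate (as in the walkthrough below), you will likely want it in ECR rather than Docker Hub. A rough sketch, assuming AWS CLI v2 and placeholder account ID, region, and repository name:

# Create the ECR repository once (name is an assumption)
aws ecr create-repository --repository-name falco-trace

# Authenticate Docker against ECR, then tag and push
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com
docker tag yourorg/falco-trace:latest <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/falco-trace:latest
docker push <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/falco-trace:latest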

Example Application

There is an example-apps/SkeletonApplication example with more documentation; you can clone that directory to get started with Falco and pdig.

FROM krisnova/falco-trace:latest
CMD ["pdig", "-a", "./init.sh"]

AWS ECS/Fargate

This has been tested and works in AWS Fargate. Set up a Fargate cluster; this part was easy, and the docs were helpful for me.

Below is a tutorial on running the falco-trace-vulnerableserver image in AWS ECS/Fargate and exploiting the image to have Falco alert you in CloudWatch.

Create a Task Definition

I used what I thought were sane defaults, and when given a choice I always went as small as possible on resources.

Create Container

You need to create a container associated with your task. This is like how a Kubernetes Pod relates to a Deployment.

Here we define the workload at the container level.

Container name: falco-trace-vulnerableserver

Image: registry.hub.docker.com/krisnova/falco-trace-vulnerableserver:latest

Port Mappings: 443 tcp

Command: leave this blank for this example (you can override the falco-trace container's command here if needed)

Logs: check Auto-configure CloudWatch logs if you want Falco logs shipped to CloudWatch. Note: you can set up Splunk or other logging backends if you want.

Leave everything else blank, then save and exit back to the Task Definition screen.

Configure JSON

Note: This is required for Falco to work!

Scroll down and find the button below Volumes labelled Configure via JSON

Paste the following linuxParameters block into your JSON

            "linuxParameters": {
                "capabilities": {
                    "add": [
                        "SYS_PTRACE"
                    ],
                    "drop": null
                },

The full output of mine looks like this

{
    "ipcMode": null,
    "executionRoleArn": "arn:aws:iam::059797578166:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "dnsSearchDomains": null,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/nova-hacks",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            },
            "entryPoint": null,
            "portMappings": [
                {
                    "hostPort": 443,
                    "protocol": "tcp",
                    "containerPort": 443
                }
            ],
            "command": null,
            "linuxParameters": {
                "capabilities": {
                    "add": [
                        "SYS_PTRACE"
                    ],
                    "drop": null
                },
                "sharedMemorySize": null,
                "tmpfs": null,
                "devices": null,
                "maxSwap": null,
                "swappiness": null,
                "initProcessEnabled": null
            },
            "cpu": 0,
            "environment": null,
            "resourceRequirements": null,
            "ulimits": null,
            "dnsServers": null,
            "mountPoints": null,
            "workingDirectory": null,
            "secrets": null,
            "dockerSecurityOptions": null,
            "memory": null,
            "memoryReservation": null,
            "volumesFrom": null,
            "stopTimeout": null,
            "image": "registry.hub.docker.com/krisnova/falco-trace-vulnerableserver:latest",
            "startTimeout": null,
            "firelensConfiguration": null,
            "dependsOn": null,
            "disableNetworking": null,
            "interactive": null,
            "healthCheck": null,
            "essential": true,
            "links": null,
            "hostname": null,
            "extraHosts": null,
            "pseudoTerminal": null,
            "user": null,
            "readonlyRootFilesystem": null,
            "dockerLabels": null,
            "systemControls": null,
            "privileged": null,
            "name": "falco-trace-vulnerableserver",
            "repositoryCredentials": {
                "credentialsParameter": ""
            }
        }
    ],
    "memory": "4096",
    "taskRoleArn": "arn:aws:iam::059797578166:role/ecsTaskExecutionRole",
    "family": "falco-trace-vulnerablewebserver",
    "pidMode": null,
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "cpu": "1024",
    "inferenceAccelerators": [],
    "proxyConfiguration": null,
    "volumes": [],
    "tags": []
}

Then click create.
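
If you prefer the CLI over the console, the same JSON can be registered as a task definition with the AWS CLI (a sketch; it assumes the JSON above is saved locally as task-def.json, and you may need to strip null-valued fields first):

aws ecs register-task-definition --cli-input-json file://task-def.json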

Create Service

Click the Actions drop-down and Create Service.

Note: This is important to get right or Falco will not work!

First select your cluster (the third field); otherwise the UI will reset if you try to do this later.

Launch Type: Fargate

Platform: 1.4.0 or greater

Service Name: falco-trace-vulnerableserver

Number of tasks: 1

Leave everything else alone and click Next Step

SCROLL UP! You have to scroll up to the top of the page now.

Cluster VPC: Just pick one you would like to use

Subnets: Whatever you want, we will be poking a hole in the Security Group later

Security Group: please practice good administrative discipline here. Open TCP 443 only to the CIDR X.X.X.X/32, where X.X.X.X is your public IP (curl ifconfig.me); a CLI sketch follows below.

LoadBalancer: None

Leave everything else blank and click Next Step
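
The security group rule mentioned above can also be added from the CLI (a sketch; the security group ID is a placeholder for your own group):

aws ec2 authorize-security-group-ingress \
  --group-id <SECURITY_GROUP_ID> \
  --protocol tcp --port 443 \
  --cidr "$(curl -s ifconfig.me)/32"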

I do not use autoscaling. Click Next Step.

Create Service
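
The console flow above can also be approximated from the CLI (a sketch, not the exact console behavior; the cluster, subnet, and security group values are placeholders, while the service and task definition names match this walkthrough):

aws ecs create-service \
  --cluster <YOUR_CLUSTER> \
  --service-name falco-trace-vulnerableserver \
  --task-definition falco-trace-vulnerablewebserver \
  --desired-count 1 \
  --launch-type FARGATE \
  --platform-version 1.4.0 \
  --network-configuration "awsvpcConfiguration={subnets=[<SUBNET_ID>],securityGroups=[<SECURITY_GROUP_ID>],assignPublicIp=ENABLED}"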

Hacking into Fargate

Now you can simulate a hack by connecting to your known vulnerable web server.

Click on your running task to find its public IP address.

You can now "hack" into your application using the following command:

ncat -nv <PUBLIC_IP> 443

You should now have a remote shell in ECS and from here you can get up to plenty of mischief.

Here are some handy commands you can issue that will trigger Falco alerts and warnings, so you can make sure everything is working.

# Touching files in known executable directories
touch /usr/bin/1
touch /usr/bin/2
touch /usr/bin/3

# Execute a READ on /etc/shadow
cat /etc/shadow > /dev/null 2>&1

# Creating files in /etc/
touch /etc/1
touch /etc/2
touch /etc/3

Feel free to play around and see what Falco has to say about it.

Falco logs in CloudWatch

On the same page where you found the public IP of your task, you can view the logs.

If you have logging enabled (such as CloudWatch), you can continue exploring there.
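
If you prefer the terminal, the same log group can be tailed with the AWS CLI (assumes AWS CLI v2 and the awslogs-group configured in the task definition above):

aws logs tail /ecs/nova-hacks --follow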


falco-trace's Issues

GitHub username

Hi Kris!

This is an interesting project. I am looking forward to trying it out and possibly helping contribute.

I noticed your GitHub username kris-nova is different from the one referenced in the Falco documentation (and in the falco-trace documentation itself), which is spelled without the hyphen:

https://falco.org/docs/event-sources/drivers

Some other places I found using GitHub search:

https://github.com/search?q=org%3Afalcosecurity+krisnova&type=code

I was hoping to have this corrected since other people in the community may get a little bit lost.

Thank you for your open source contributions and I hope you're staying safe!

How to use the kris-nova/falco-trace image for EKS Fargate; can it be used as a sidecar?

I would like to know the exact steps to use the falco-trace image on EKS Fargate.

The steps I tried are as follows:
In a Fargate pod, I deployed two containers, one of which is a sidecar built as a wrapper around the falco-trace image, using the sample from the example-apps dir.

  1. init.sh contains an entry to tail /var/log/falco.log
  2. The Dockerfile contains a CMD to invoke pdig with the process running from the other container that needs to be examined for alerts. [pdig -a ]

This leaves the Falco sidecar container in a CrashLoopBackOff state, and the logs show the following entry:
ptrace(PTRACE_TRACEME, 0, NULL, NULL) failed at /falco-trace/pdig/pdig.cc:393 with -1 (errno Operation not permitted)

This is because the SYS_PTRACE capability cannot be added to EKS Fargate pod containers.

I would love to know the detailed steps to get Falco on EKS (not ECS) Fargate, so it can be deployed as a runtime violation detection tool.

Any help is much appreciated.

Thanks,
Madhura

Running an Application on Fargate with PTRACE

Hi Kris,
I am trying to run redis-server (or another application) with Falco on Fargate. I have the following Dockerfile:

FROM krisnova/falco-trace:latest
RUN apt-get update && apt-get install redis-server -y
COPY . .
CMD ["pdig", "-a", "./init.sh"]
EXPOSE 6379

and this is the init.sh file I'm using:

#!/bin/bash
falco -u --pidfile /var/run/falco.pid --daemon
tail -f /var/log/falco.log &
echo "Running app..."
redis-server

I created a task definition with PTRACE enabled and created a service from that task definition. The task enters the RUNNING state, but dies shortly thereafter. Sadly, no logs are written to CloudWatch even though the task is configured to send logs there.

Below is my task def:

{
  "ipcMode": null,
  "executionRoleArn": "arn:aws:iam::account:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "dnsSearchDomains": null,
      "environmentFiles": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "secretOptions": null,
        "options": {
          "awslogs-group": "/ecs/ptrace",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": null,
      "portMappings": [
        {
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "command": null,
      "linuxParameters": {
        "capabilities": {
          "add": [
            "SYS_PTRACE"
          ],
          "drop": null
        },
        "sharedMemorySize": null,
        "tmpfs": null,
        "devices": null,
        "maxSwap": null,
        "swappiness": null,
        "initProcessEnabled": null
      },
      "cpu": 0,
      "environment": [],
      "resourceRequirements": null,
      "ulimits": null,
      "dnsServers": null,
      "mountPoints": [],
      "workingDirectory": null,
      "secrets": null,
      "dockerSecurityOptions": null,
      "memory": null,
      "memoryReservation": null,
      "volumesFrom": [],
      "stopTimeout": null,
      "image": "account.dkr.ecr.us-west-2.amazonaws.com/ptrace",
      "startTimeout": null,
      "firelensConfiguration": null,
      "dependsOn": null,
      "disableNetworking": null,
      "interactive": null,
      "healthCheck": null,
      "essential": true,
      "links": null,
      "hostname": null,
      "extraHosts": null,
      "pseudoTerminal": null,
      "user": null,
      "readonlyRootFilesystem": null,
      "dockerLabels": null,
      "systemControls": null,
      "privileged": null,
      "name": "ptrace"
    }
  ],
  "placementConstraints": [],
  "memory": "1024",
  "taskRoleArn": null,
  "compatibilities": [
    "EC2",
    "FARGATE"
  ],
  "taskDefinitionArn": "arn:aws:ecs:us-west-2::task-definition/ptrace:3",
  "family": "ptrace",
  "requiresAttributes": [
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.execution-role-awslogs"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.ecr-auth"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.execution-role-ecr-pull"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
    },
    {
      "targetId": null,
      "targetType": null,
      "value": null,
      "name": "ecs.capability.task-eni"
    }
  ],
  "pidMode": null,
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "512",
  "revision": 3,
  "status": "ACTIVE",
  "inferenceAccelerators": null,
  "proxyConfiguration": null,
  "volumes": []
}

Do you have an idea of what might be wrong with my configuration? It seems like it should work. I'd also like to know how you build the krisnova/falco-trace:latest image; I'd like to use Alpine as my base layer.

We need a name.

We need a name for this project, and we probably want to combine this with the pdig tree as well?

Internet! Help us pick out a name for this!

Unable to do docker build

Trying to follow the README to do a docker build, these are the errors I faced:

  1. Downloading Falco - ERROR 404: Not Found. I resolved it by replacing the Falco download link with https://download.falco.org/packages/bin/x86_64/falco-0.22.1-x86_64.tar.gz
  2. CMake on pdig - cp: cannot stat 'pdig': No such file or directory. I have tried cloning the pdig repositories and the libs directory, but I am only able to get this to build on Debian, which isn't an option at my employer.

Alternatively, I could use krisnova/falco-trace:latest, but that isn't covered by a license. Please help.
