aquasecurity / starboard

Moved to https://github.com/aquasecurity/trivy-operator

Home Page: https://aquasecurity.github.io/starboard/

License: Apache License 2.0

Go 98.03% Shell 0.99% Makefile 0.68% Dockerfile 0.08% Smarty 0.20% HTML 0.02%
starboard kubernetes security cloud-native custom-resource-definition

starboard's Introduction

Starboard is joining Trivy.

We've announced our plans to discontinue Starboard and merge it into Trivy.

The Starboard CLI has been reintroduced as the trivy kubernetes command, and the starboard-operator, with a focus on Trivy capabilities, is available as the Trivy-Operator.

We will not be accepting new features, pull requests, or issues. We encourage you to contribute to Trivy-Operator and Trivy CLI and influence the future of Trivy Kubernetes.

For more info and discussions, check out the latest What's next for Trivy Q&A.


Kubernetes-native security toolkit.


Introduction

There are lots of security tools in the cloud native world, created by Aqua and by others, for identifying and informing users about security issues in Kubernetes workloads and infrastructure components. However powerful and useful they might be, they tend to sit alongside Kubernetes, with each new product requiring users to learn a separate set of commands and installation steps in order to operate them and find critical security information.

Starboard attempts to integrate heterogeneous security tools by incorporating their outputs into Kubernetes CRDs (Custom Resource Definitions) and from there, making security reports accessible through the Kubernetes API. This way users can find and view the risks that relate to different resources in what we call a Kubernetes-native way.
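
For example, once Starboard has stored reports as custom resources, they can be retrieved with standard kubectl commands; a minimal illustration, using the vulnerabilities resources shown later on this page:

$ kubectl get vulnerabilities -n default -o yaml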

Starboard provides:

  • Automated vulnerability scanning for Kubernetes workloads.
  • Automated configuration audits for Kubernetes resources with predefined rules or custom Open Policy Agent (OPA) policies.
  • Automated infrastructure scanning and compliance checks with CIS Benchmarks published by the Center for Internet Security (CIS).
  • Automated compliance reports based on the NSA, CISA Kubernetes Hardening Guidance v1.0.
  • Penetration test results for a Kubernetes cluster.
  • Custom Resource Definitions and a Go module to work with and integrate a range of security scanners.
  • The Octant Plugin and the Lens Extension that make security reports available through familiar Kubernetes interfaces.

Starboard Overview

Starboard can be used:

  • As a Kubernetes operator to automatically update security reports in response to workload and other changes on a Kubernetes cluster - for example, initiating a vulnerability scan when a new Pod is started or running CIS Benchmarks when a new Node is added.

  • As a command, so you can trigger scans and view the risks in a kubectl-compatible way or as part of your CI/CD pipeline.

Status

Although we are trying to keep new releases backward compatible with previous versions, this project is still incubating, and some APIs and Custom Resource Definitions may change.

Documentation

The official Documentation provides detailed installation, configuration, troubleshooting, and quick start guides.

Learn how to install the Starboard command From the Binary Releases and follow the Getting Started guide to generate your first vulnerability and configuration audit reports.

You can install the Starboard Operator with Static YAML Manifests and follow the Getting Started guide to see how vulnerability and configuration audit reports are generated automatically.

Read more about the motivations for the project in the Starboard: The Kubernetes-Native Toolkit for Unifying Security blog.

Contributing

At this early stage we would love your feedback on the overall concept of Starboard. Over time, we'd love to see contributions integrating different security tools so that users can access security information in standard, Kubernetes-native ways.

  • See Contributing for information about setting up your development environment, and the contribution workflow that we expect.
  • See Roadmap for tentative features in a 1.0.0 release.

Starboard is an Aqua Security open source project.
Learn about our Open Source Work and Portfolio.
Join the community, and talk to us about any matter in GitHub Discussions or Slack.

starboard's People

Contributors

anupamtamrakar, bgoareguer, cgroschupp, champness, chen-keinan, czunker, danielpacak, danielsagi, dependabot[bot], deven0t, dirien, dockerpac, elchenberg, hypnoglow, josedonizetti, kiranbodipi, krisctl, krol3, ksashikumar, lizrice, mlevesquedion, mmorel-35, mozillazg, py-go, shaardie, shaunmclernon, sisheogorath, sumindar, vf-mbrauer, xyoxo


starboard's Issues

error: no Auth Provider found for name "gcp"

When I run starboard init alone or with kubectl I get this error:

error: no Auth Provider found for name "gcp"

Here is the users section of my kube config

users:
- name: gke_barry-williams_us-west2-b_my-cluster
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

Note: getting basic API objects with kubectl works.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.36", GitCommit:"34a615f32e9a0c9e97cdb9f749adb392758349a6", GitTreeState:"clean", BuildDate:"2020-04-06T16:33:17Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
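
For tools built on client-go, this error usually means the binary was compiled without the GCP auth provider plugin. A hedged sketch of the conventional fix, a blank import in the CLI's main package:

package main

import (
	// Register the GCP auth provider so client-go can resolve
	// kubeconfig auth-provider entries named "gcp".
	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
)

Alternatively, importing k8s.io/client-go/plugin/pkg/client/auth registers all bundled auth providers at once.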

Add subcommand to fetch vulnerability reports

Instead of using label selectors, which are implementation specific, we could provide a subcommand for fetching vulnerability reports for a given workload:

Background:

$ kubectl starboard find vulns deploy/nginx

Before:

$ kubectl get vulnerabilities \
    -l starboard.resource.kind=Deployment \
    -l starboard.resource.name=nginx \
    -o yaml

After:

$ kubectl starboard get vulns deploy/nginx -o yaml

The get subcommand should use label selectors to filter custom resources pertinent to the specified workload.
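
A minimal Go sketch of how the proposed subcommand might build that selector (the label keys are taken from the Before example above; everything else is illustrative):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// listOptionsFor translates a workload kind/name into the ListOptions
// used to fetch the pertinent custom resources.
func listOptionsFor(kind, name string) metav1.ListOptions {
	selector := labels.Set{
		"starboard.resource.kind": kind,
		"starboard.resource.name": name,
	}.AsSelector().String()
	return metav1.ListOptions{LabelSelector: selector}
}

func main() {
	fmt.Println(listOptionsFor("Deployment", "nginx").LabelSelector)
	// starboard.resource.kind=Deployment,starboard.resource.name=nginx
}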

/cc @jerbia @lizrice

Kube-hunter cannot overwrite report named "cluster" when running in a CronJob

What steps did you take and what happened:

It seems I can't run kube-hunter in a cron job because on the second run it complains that cluster exists. I was hoping I could run kube-hunter continuously on a set interval in case starboard ever got updated and there were new checks.

Love the tool thanks so much!

What did you expect to happen:

That the next cron run would succeed, just like it does for kube-bench.

Anything else you would like to add:

N/A

Environment:

  • Starboard version (use starboard version): v0.2.5
  • Kubernetes version (use kubectl version): v1.15.10
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): 18.04

Panic observed when docker's config.json has a server entry that uses credential helper

What steps did you take and what happened:
Executed the starboard find vulnerabilities deployment/nginx --namespace dev command and observed that it panicked due to a runtime error.

panic: runtime error: index out of range [1] with length 1

This is happening because my Docker configuration file had a server entry that uses a credential helper, with a structure like the one below in config.json.

{
  "auths": {
    "<my-server>": {}
  }
}

What did you expect to happen:
I was expecting that those server entries would be skipped and Starboard would continue looping through the other server entries in the config file.
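
A hedged sketch of that skip-and-continue behavior (the types and helper are illustrative, not Starboard's actual code):

package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// AuthEntry mirrors one entry under "auths" in config.json.
type AuthEntry struct {
	Auth string `json:"auth"`
}

func decodeCredentials(auths map[string]AuthEntry) map[string][2]string {
	creds := make(map[string][2]string)
	for server, entry := range auths {
		// Entries like {"<my-server>": {}} delegate to a credential
		// helper and carry no embedded auth string; skip them.
		if entry.Auth == "" {
			continue
		}
		decoded, err := base64.StdEncoding.DecodeString(entry.Auth)
		if err != nil {
			continue
		}
		// Guard the split instead of indexing parts[1] unconditionally,
		// which is what produces "index out of range [1] with length 1".
		parts := strings.SplitN(string(decoded), ":", 2)
		if len(parts) != 2 {
			continue
		}
		creds[server] = [2]string{parts[0], parts[1]}
	}
	return creds
}

func main() {
	auths := map[string]AuthEntry{"my-server": {}}
	fmt.Println(len(decodeCredentials(auths))) // 0 entries, and no panic
}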

Anything else you would like to add:

I have a subsequent PR to fix this issue.

Environment:

  • Starboard version (use starboard version): v0.2.5
  • Kubernetes version (use kubectl version): Client -> 1.17, Server -> 1.14+
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Debian 4.9.144-3.1

Bring starboard on par with Docker's handling of the image's server

Describe the problem/challenge you have
Pre-requisite: Docker's config.json file has the server field with an http:// scheme prefix, and the container's image is pulled from that server.

In this situation, when we run the command $ starboard find vulnerabilities pod/my-pod, I get the error below:
E0711 18:33:27.054420 19828 manager.go:177] Container my-pod terminated with Error: 2020-07-11T22:33:26.292Z FATAL unable to initialize a scanner: unable to initialize a docker scanner: 2 errors occurred:
* unable to inspect the image (//): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
* GET https:///v2//: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:/ Type:repository]]

error: running scan job: job failed: BackoffLimitExceeded: Job has reached the specified backoff limit

Describe the solution you'd like

Docker accepts the server field in the auths section of config.json with an http:// prefix, an https:// prefix, or no scheme at all. starboard should respect that too, since we depend on Docker's config.json for authentication purposes.
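
A minimal sketch of the normalization this implies, assuming registry entries are matched on the bare host:

package main

import (
	"fmt"
	"strings"
)

// normalizeServer strips an optional scheme and trailing slash so that
// an http:// entry, an https:// entry, and a bare host all match.
func normalizeServer(server string) string {
	s := strings.TrimPrefix(server, "http://")
	s = strings.TrimPrefix(s, "https://")
	return strings.TrimSuffix(s, "/")
}

func main() {
	for _, s := range []string{"http://registry.example.com/", "https://registry.example.com", "registry.example.com"} {
		fmt.Println(normalizeServer(s)) // registry.example.com, three times
	}
}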

Anything else you would like to add:
I have a subsequent pull request.

Environment:

  • Starboard version (use starboard version): v0.2.6
  • Kubernetes version (use kubectl version): Client Version: v1.17.0, Server Version: v1.15.11
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Debian GNU/Linux 9 (stretch)

Include links for tool-specific policy details

As a user, I might use Starboard tools to learn whether workloads are compliant or not compliant with policies. Those policies are enforced by third-party tools. Starboard CRDs could include a field for a link for discovering further details that can be populated by the third-party tool (or an adapter for it). For example, if an image doesn't meet an Aqua image policy, the vulnerability CRD for that image could link to the Aqua UI screen where that policy is configured.

The cleanup command should wait for the starboard namespace termination

Describe the problem/challenge you have

The starboard cleanup command sends a request to the Kubernetes API, but does not wait for the starboard namespace to terminate.

For example, running $ starboard init immediately after $ starboard cleanup might fail.

Describe the solution you'd like

The command should exit only after the underlying namespace has terminated.
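
A hedged sketch of such a wait using client-go (the poll interval and timeout are assumptions):

package cleanup

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceTermination polls until the namespace no longer exists,
// or the timeout elapses, so a subsequent init won't race a terminating
// namespace.
func waitForNamespaceTermination(ctx context.Context, clientset kubernetes.Interface, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := clientset.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // fully terminated
		}
		return false, err // keep polling while it still exists (err == nil)
	})
}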

Environment:

  • Starboard version (use starboard version): 0.2.6
  • Kubernetes version (use kubectl version): Any
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Any

Allow specifying timeout for scan jobs

Currently the timeouts are hardcoded to some opinionated/random values. We should allow users to override the default timeouts for scan job runners:

$ starboard find vulns deploy/nginx -n dev --scan-job-timeout 3min
$ starboard polaris --scan-job-timeout 60s
...

Similar to the kubectl --request-timeout flag, we should default to 0, i.e. never time out.
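
A minimal sketch of such a flag with the 0-disables-it default (note that Go's duration syntax would expect 3m rather than 3min):

package main

import (
	"flag"
	"fmt"
)

func main() {
	// 0 means never time out, mirroring kubectl's --request-timeout default.
	timeout := flag.Duration("scan-job-timeout", 0, "timeout for scan jobs (0 disables the timeout)")
	flag.Parse()
	fmt.Println("scan job timeout:", *timeout)
}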

Add subcommand to generate audit reports

Describe the problem/challenge you have

Today we cannot read reports generated by Starboard offline, e.g. on a bus or train.

Describe the solution you'd like

We'd like to have a subcommand to generate a PDF report for the specified workload:

$ k starboard get audit report deployment/nginx -n dev > report.pdf

Anything else you would like to add:

To deliver this functionality in smaller chunks, I think we can start with a simple report listing vulnerabilities for the specified workload.
We can take it from there and add config audit / Polaris reports.

Eventually we might add a remediations section. And one day it should be possible to generate the report for a whole namespace or cluster.

Environment:

  • Starboard version (use starboard version): Any
  • Kubernetes version (use kubectl version): Any
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Any

Handle Kubernetes cluster config errors

Similarly to the kubectl command, we should display a nice message when there's a problem reading cluster config files:

error: Missing or incomplete configuration info.  Please point to an existing, complete config file:

  1. Via the command-line flag --kubeconfig
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

Runner received timeout error

Encountered the above error when trying to scan existing Deployment objects for vulnerabilities.

However, if I created a new nginx Deployment based on the documentation and invoked the same command, there was no error.

starboard_log.txt

Cannot find vulnerabilities in scratch images

What steps did you take and what happened:

When running the starboard find vuln command on a workload whose container image is based on scratch, the Starboard CLI fails.

What did you expect to happen:

Starboard should scan scratch images and handle JSON parsing when there are no vulnerabilities.

Anything else you would like to add:

N/A

Environment:

  • Starboard version (use starboard version): 0.2.1
  • Kubernetes version (use kubectl version): Any
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Any

Call out what components work in air-gapped environments.

I assume something like Trivy needs the ability to pull the latest CVE DB info, but it would be great to know what does (or doesn't) work in air-gapped clusters.

Going further, there might be some additional config / docs needed for those types of environments.

Find vulnerabilities got stuck and get vulnerabilities returns blank YAML

I am trying to run find vulnerabilities and get vulnerabilities for deployment/nginx.
Getting the error below:
bash-4.4# kubectl starboard --kubeconfig admin.conf init
bash-4.4# kubectl --kubeconfig admin.conf get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 5d20h

bash-4.4# kubectl starboard --kubeconfig admin.conf get vulnerabilities deployment/nginx --namespace default
error: exit status 1

bash-4.4# kubectl starboard --kubeconfig admin.conf find vulnerabilities deploy/nginx -n default -v 3
I0825 06:11:09.435023 15858 scanner.go:56] Getting Pod template for workload: {Deployment nginx default}
I0825 06:11:09.458662 15858 scanner.go:71] Scanning with options: {ScanJobTimeout:0s DeleteScanJob:true}
I0825 06:11:09.459671 15858 runner.go:79] Running task and waiting forever
I0825 06:11:09.459876 15858 runnable_job.go:47] Creating runnable job: starboard/e4b32fa7-3a9a-4dae-a05c-f89e1aa32086
I0825 06:11:09.483017 15858 reflector.go:207] Starting reflector *v1.Job (30m0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156
I0825 06:11:09.483064 15858 reflector.go:243] Listing and watching *v1.Job from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156

It got stuck at this point.

Environment:

  • Starboard version (use starboard version): {Version:0.2.6 Commit:d43faefc56021ae55d4574054ce7de13175ca206 Date:2020-07-09T20:30:45Z}

  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.9", GitCommit:"2e808b7cb054ee242b68e62455323aa783991f03", GitTreeState:"clean", BuildDate:"2020-01-18T23:24:23Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc):
    bash-4.4# cat /etc/alpine-release
    3.9.6

starboard find vulns deployment/some_deployment stuck

# starboard find vulnerabilities deployment/nginx -v 3
I0710 18:23:06.513704    9492 scanner.go:56] Getting Pod template for workload: {Deployment nginx default}
I0710 18:23:06.543574    9492 scanner.go:71] Scanning with options: {ScanJobTimeout:0s DeleteScanJob:true}
I0710 18:23:06.543714    9492 runner.go:79] Running task and waiting forever
I0710 18:23:06.543783    9492 runnable_job.go:47] Creating runnable job: starboard/d0b993ac-4713-4c1b-b4e2-d139d867b99f
I0710 18:23:06.559944    9492 reflector.go:207] Starting reflector *v1.Job (30m0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156
I0710 18:23:06.559976    9492 reflector.go:243] Listing and watching *v1.Job from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156

I followed the Getting Started section from the Starboard GitHub page and got stuck on this step, because this command never ends. Where could I look to see what is wrong?

I'm using Kind (https://github.com/kubernetes-sigs/kind) to make the Kubernetes cluster.
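
A hedged way to debug a hanging scan is to inspect the Job that Starboard creates in the starboard namespace (the job name is the UID printed in the log above; <job-name> is a placeholder):

$ kubectl get jobs -n starboard
$ kubectl describe job <job-name> -n starboard
$ kubectl logs job/<job-name> -n starboard --all-containers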

Environment:

  • Starboard Version: {Version:0.2.6 Commit:d43faefc56021ae55d4574054ce7de13175ca206 Date:2020-07-09T20:30:45Z}
  • Kubernetes version:
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  • OS: CentOS Linux release 7.7.1908 (Core)

Run kube-bench on all cluster nodes

Currently the kube-bench job is only scheduled on a single worker node. However, we should run it for all nodes. What's more, we should tell kube-bench to run different checks on master and worker nodes respectively.
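
A hedged sketch of what per-node scheduling could look like (kubeBenchJobFor is a hypothetical Job constructor; the role label is the conventional master label):

// runKubeBenchOnAllNodes creates one kube-bench Job per node, picking
// the check set from the node's role.
func runKubeBenchOnAllNodes(ctx context.Context, clientset kubernetes.Interface, nodes *corev1.NodeList) error {
	for _, node := range nodes.Items {
		target := "node" // worker-node checks
		if _, ok := node.Labels["node-role.kubernetes.io/master"]; ok {
			target = "master" // control-plane checks
		}
		// Hypothetical: builds a batchv1.Job that runs `kube-bench <target>`.
		job := kubeBenchJobFor(node.Name, target)
		job.Spec.Template.Spec.NodeName = node.Name // pin the Pod to this node
		if _, err := clientset.BatchV1().Jobs("starboard").Create(ctx, job, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}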

starboard cleanup leaves namespace drool behind

What steps did you take and what happened:
starboard init, followed by starboard cleanup.

What did you expect to happen:
I was expecting that everything initialized as part of starboard init would be cleaned up when I execute starboard cleanup, but found that the namespace and other resources in it, like service accounts, secrets, jobs, etc., were left behind.

Environment:

  • Starboard version (use starboard version): v0.2.5
  • Kubernetes version (use kubectl version): Client 1.17, Server 1.15+
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Debian GNU/Linux 9 (stretch)

Notarize starboard through Apple Developer channel

Describe the problem/challenge you have

On macOS running starboard CLI requires adding the binary to security exceptions.

Describe the solution you'd like

We should notarize our binaries as a trusted provider and integrate this process with GoReleaser.

Anything else you would like to add:

N/A

Environment:

  • Starboard version (use starboard version): 0.2.1
  • Kubernetes version (use kubectl version): Any
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): macOS

Run Polaris for the specified workload

Describe the problem/challenge you have

Currently, we're running the polaris audit subcommand to scan all workloads in the cluster.

$ starboard polaris
$ starboard get configaudit deploy/nginx -n dev

Describe the solution you'd like

However, we should be able to scan a single workload like we do for vulnerabilities:

$ starboard polaris deploy/nginx -n dev
$ starboard polaris sts/my-app -n staging

Anything else you would like to add:

  • This is mainly required for multi-tenant environments and workloads protected by RBAC permissions. Beyond that, scanning all workloads requires running Polaris as a Kubernetes Job with a ServiceAccount that is not least-privileged.

Automate krew plugin updates

Describe the problem/challenge you have

Each time there's a new release of the Starboard CLI, we have to bump up versions and checksums in the krew manifest and open a PR to the krew-index repository. This is a repetitive and mundane task for us and the krew-index maintainers.

Describe the solution you'd like

Integrate krew-release-bot with the GitHub Actions release workflow.

Anything else you would like to add:

IIRC we have to submit the initial manifest to krew-index manually, i.e. we can work on that only after https://github.com/kubernetes-sigs/krew-index/pull/647/files is merged.

Environment:

  • Starboard version (use starboard version):
  • Kubernetes version (use kubectl version):
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc):

403 API rate limit exceeded when running starboard find vulnerabilities command

Trivy downloads the trivy.db file from the GitHub releases page, which is subject to rate limiting. Beyond that, we do not cache the DB file.

2020-06-03T08:45:29.803Z INFO Need to update DB
2020-06-03T08:45:29.803Z INFO Downloading DB...
2020-06-03T08:45:29.962Z FATAL failed to download vulnerability DB: failed to download vulnerability DB: failed to list releases: GET https://api.github.com/repos/aquasecurity/trivy-db/releases: 403 API rate limit exceeded for X.X.X.X. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.) [rate reset in 32m22s]

error message following install guide

$ starboard find vulnerabilities deployment/nginx --namespace dev -v 3
I0602 19:53:04.523020 20733 scanner.go:55] Getting Pod template for workload: Deployment/nginx
I0602 19:53:04.538831 20733 runnable_job.go:45] Creating runnable job: starboard/f46f480a-ac61-4a7e-891a-29bb21fddc3b
I0602 19:53:04.549248 20733 reflector.go:150] Starting reflector *v1.Job (30m0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105
I0602 19:53:04.549272 20733 reflector.go:185] Listing and watching *v1.Job from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105
I0602 19:53:11.615280 20733 runnable_job.go:68] Stopping runnable job on task completion with status: Complete
I0602 19:53:11.615321 20733 runner.go:60] Stopping runner on task completion with error:
I0602 19:53:11.615334 20733 scanner.go:89] Scan job completed: starboard/f46f480a-ac61-4a7e-891a-29bb21fddc3b
I0602 19:53:11.623286 20733 scanner.go:216] Getting logs for nginx container in job: starboard/f46f480a-ac61-4a7e-891a-29bb21fddc3b
I0602 19:53:11.645638 20733 scanner.go:82] Deleting job: starboard/f46f480a-ac61-4a7e-891a-29bb21fddc3b
error: json: cannot unmarshal number into Go value of type []trivy.ScanReport

Any idea on how to solve this, or a pointer?

thx

Revisit verbs used in the CLI

The currently used find verb is not intuitive for everyone. What's more, it can be confusing:

kubectl starboard find vuln deploy/my-deploy -n my-namespace
kubectl starboard find risks deploy/my-deploy -n my-namespace

Let's think about something else. Here are some other ideas to consider:

kubectl starboard scan deploy/my-deploy
kubectl starboard assess vulns deploy/my-deploy
kubectl starboard assess risks deploy/my-deploy
# with a catchy alias
kubectl starboard ass vulns deploy/my-deploy

kubectl starboard check vulns deploy/my-deploy
kubectl starboard check risks deploy/my-deploy

kubectl starboard inspect vulns deploy/my-deploy
kubectl starboard insp vulns deploy/my-deploy
kubectl starboard inspect risks deploy/my-deploy

/cc @jerbia @lizrice @itaysk

Namespaced CRDs

As a developer I may have access to only one k8s namespace for my project. I should be able to query security CRDs related to that namespace.

As an admin I have access to multiple k8s namespaces. I should be able to query security CRDs across all of them.

Allow configuring HTTP proxy with Trivy scanner

ISSUE

starboard find vulns deployment/some_deployment --namespace default -v 3
I0708 14:43:53.094626    3804 scanner.go:55] Getting Pod template for workload: {Deployment core-scanandpay-failover-v2 default}
I0708 14:43:53.117105    3804 scanner.go:70] Scanning with options: {ScanJobTimeout:0s DeleteScanJob:true}
I0708 14:43:53.118220    3804 runner.go:79] Running task and waiting forever
I0708 14:43:53.118288    3804 runnable_job.go:47] Creating runnable job: starboard/2e7c4482-df93-4057-a5b1-dbc1a18ce353
I0708 14:43:53.137715    3804 reflector.go:207] Starting reflector *v1.Job (30m0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156
I0708 14:43:53.138252    3804 reflector.go:243] Listing and watching *v1.Job from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156
I0708 14:43:55.586801    3804 runnable_job.go:73] Stopping runnable job on task failure with status: Failed
I0708 14:43:55.587514    3804 runner.go:83] Stopping runner on task completion with error: job failed: BackoffLimitExceeded: Job has reached the specified backoff limit
E0708 14:43:55.723818    3804 manager.go:177] Container 2e7c4482-df93-4057-a5b1-dbc1a18ce353 terminated with Error: 2020-07-08T12:43:54.563Z                                                                       INFO     Need to update DB
2020-07-08T12:43:54.564Z        INFO    Downloading DB...
2020-07-08T12:43:54.730Z        FATAL   failed to download vulnerability DB: failed to download vulnerability DB: failed to list releases: Get https://api.github.com/repos/aquasecurity/trivy-db/releases: x509: certificate signed by unknown authority
error: running scan job: job failed: BackoffLimitExceeded: Job has reached the specified backoff limit

TECHNICAL DETAILS

  • K8S version: 1.14.6
  • starboard version: 0.2.5
  • K8S runs on a VMWare platform on-prem
    • Behind a corporate FW and proxy.
  • kubectl version: 1.17.3
  • Running it in Windows Subsystem for Linux v1. Using the Ubuntu v18.04 distro

EXPECTATIONS

To be able to execute starboard find vulns deployment/some_deployment without this issue
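
Trivy, like most Go programs, honors the standard HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables, so one hedged approach is for Starboard to pass them through to the scan Job's container spec (the values here are placeholders):

// corev1 is k8s.io/api/core/v1; container is the Trivy scan container spec.
container.Env = append(container.Env,
	corev1.EnvVar{Name: "HTTP_PROXY", Value: "http://proxy.corp.example:3128"},
	corev1.EnvVar{Name: "HTTPS_PROXY", Value: "http://proxy.corp.example:3128"},
	corev1.EnvVar{Name: "NO_PROXY", Value: "10.0.0.0/8,.cluster.local"},
)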

DEBUGGING - TRIED

  • Tried using --insecure-skip-tls-verify; the issue remained
  • Tried removing the certificate-authority-data sections from the kubeconfig file; the issue remained
    • Tried with and without the --insecure-skip-tls-verify parameter. Same issue

OTHER

I'm posting this issue in this repo as the challenge seems to be related specifically to trivy-db, as that is being called via starboard.

I created the same issue on the trivy-db repo, but was asked to create it here as well. See the issue here.

Looking forward to any tips and pointers. Let me know if more info is needed.

Thank you very much.

Create or update CISKubeBenchReport resources

Describe the problem/challenge you have

In the initial version of Starboard we wanted to store historical benchmark reports. However, now we think that we should store only the latest report as a CR instance.

Describe the solution you'd like

Simplify the logic and implement create-or-update semantics in the CISKubeBenchReport writer.
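
A minimal sketch of create-or-update semantics for the writer (client is assumed to be a typed client for CISKubeBenchReport resources):

existing, err := client.Get(ctx, report.Name, metav1.GetOptions{})
switch {
case apierrors.IsNotFound(err):
	// First report for this node: create it.
	_, err = client.Create(ctx, report, metav1.CreateOptions{})
case err == nil:
	// Overwrite the previous report instead of keeping history.
	report.ResourceVersion = existing.ResourceVersion
	_, err = client.Update(ctx, report, metav1.UpdateOptions{})
}
return err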

Anything else you would like to add:

N/A

Environment:

  • Starboard version (use starboard version): 0.2.6
  • Kubernetes version (use kubectl version): Any
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Any

[operator] Aqua scanner fails when we use digest as an image identifier

$ kubectl run nginx --image core.harbor.domain/library/nginx@sha256:2963fc49cc50883ba9af25f977a9997ff9af06b45c12d968b7985dc1e9254e4b
$ scannercli scan --checkonly \
--dockerless \
--host=http://csp-console-svc.aqua:8080 \
--user=****** \
--password=****** \
--registry Harbor \
library/nginx@sha256:2963fc49cc50883ba9af25f977a9997ff9af06b45c12d968b7985dc1e9254e4b
OK
$ scannercli scan --checkonly \
--dockerless \
--host=http://csp-console-svc.aqua:8080 \
--user=****** \
--password=****** \
--local \
core.harbor.domain/library/nginx:1.16
OK
$ scannercli scan --checkonly \
--dockerless \
--host=http://csp-console-svc.aqua:8080 \
--user=****** \
--password=****** \
--local \
core.harbor.domain/library/nginx@sha256:2963fc49cc50883ba9af25f977a9997ff9af06b45c12d968b7985dc1e9254e4b
KO
ERROR: Failed scanning image: failed getting manifest of image :core.harbor.domain/library/nginx@sha256:2963fc49cc50883ba9af25f977a9997ff9af06b45c12d968b7985dc1e9254e4b: Error response from daemon: no such image: core.harbor.domain/library/nginx:sha256:2963fc49cc50883ba9af25f977a9997ff9af06b45c12d968b7985dc1e9254e4b: invalid reference format

Implement the `rbac` subcommand

Starboard is a Kubernetes-native security application which stores vulnerability reports as custom security resources. To create such resources and run risk scanners, certain RBAC permissions are required.

Instead of listing all required roles in the README or YAML, we could provide a command, to be run by cluster admins, that grants permissions to a given subject:

kubectl starboard rbac \
  --user=dave \
  --user=loper \
  --group=terra-nowa \
   -n my-namespace | kubectl apply -f -

Using custom Trivy image

Describe the problem/challenge you have

As my environment is running without external access, I have to use images that are available in an internal repository. Currently there is no option to specify a custom Trivy image for security scanning.

Describe the solution you'd like

It would be great to have a flag to specify a custom image when you run e.g. kubectl starboard find vulnerabilities.

Anything else you would like to add:

It would be great to have that flexibility with every image that is used.

Environment:

  • Starboard version (use starboard version): 0.2.1
  • Kubernetes version (use kubectl version): 1.11.0
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): RHEL 7.7.

Support private container registries for Pods which do not specify image pull secrets

What steps did you take and what happened:

Followed the install instructions. Tried to run a scan using the command below.

$ kubectl starboard  find vulnerabilities deployment/XXX-qa-web -n XXXX-qa -v 3
I0814 12:52:38.530373   28112 scanner.go:56] Getting Pod template for workload: {Deployment XXXX-qa-web XXXX-qa}
I0814 12:52:40.842053   28112 scanner.go:71] Scanning with options: {ScanJobTimeout:0s DeleteScanJob:true}
I0814 12:52:41.183767   28112 runner.go:79] Running task and waiting forever
I0814 12:52:41.183840   28112 runnable_job.go:47] Creating runnable job: starboard/b75ba5e8-82c9-4915-ad35-4b35c37987ab
I0814 12:52:41.535929   28112 reflector.go:207] Starting reflector *v1.Job (30m0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156
I0814 12:52:41.535978   28112 reflector.go:243] Listing and watching *v1.Job from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156
I0814 12:52:50.479003   28112 runnable_job.go:73] Stopping runnable job on task failure with status: Failed
I0814 12:52:50.479115   28112 runner.go:83] Stopping runner on task completion with error: job failed: BackoffLimitExceeded: Job has reached the specified backoff limit
E0814 12:52:52.784556   28112 manager.go:177] Container default terminated with Error: 2020-08-14T07:22:49.629Z FATAL   unable to initialize a scanner: unable to initialize a docker scanner: 2 errors occurred:
        * unable to inspect the image (us.gcr.io/XXXX-1/XXXX:116579-23d73da-release-2019-10): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
        * GET https://us.gcr.io/v2/token?scope=repository%3xxxl-1%2FXXXX%3Apull&service=us.gcr.io: UNKNOWN: Unable to parse json key.


error: running scan job: job failed: BackoffLimitExceeded: Job has reached the specified backoff limit

What did you expect to happen:
The scan should complete without error.

Anything else you would like to add:
Trivy supports GCR, but I am not able to find a way to pass custom ENV variables to Trivy using Starboard.

Environment:

  • Starboard version (use starboard version): Starboard Version: {Version:0.2.6 Commit:d43faefc56021ae55d4574054ce7de13175ca206 Date:2020-07-09T20:30:45Z}
  • Kubernetes version (use kubectl version): client:v1.17.10, server: v1.17.2
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Ubuntu 18.04

Pin kube-bench and kube-hunter to a fixed version

Describe the problem/challenge you have

We run kube-bench and kube-hunter as Kubernetes Jobs using the latest Docker images of the underlying scanners. The latest version of each scanner might not be compatible with Starboard's code.

Describe the solution you'd like

At least pin to a fixed version. We might think about a Starboard CLI flag that allows overriding the version for those who want to experiment with the latest release of kube-bench or kube-hunter.

Anything else you would like to add:

It is somewhat related to #56, i.e. an enhancement to allow overriding the whole Docker image reference of the underlying scanner, not just the version.

Environment:

  • Starboard version (use starboard version):
  • Kubernetes version (use kubectl version):
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc):

Starboard could provide better error reporting when jobs fail

What steps did you take and what happened:

starboard kube-bench hangs if the jobs fail due to admission controllers

Specifically I ran starboard kube-bench and the command "hung" (i.e. no new prompt and no logs)

Checking kubectl describe jobs -n starboard cb61d98b-bd1e-47fa-8ed7-efd5baee3f31 I saw

Warning  FailedCreate  36s (x3 over 66s)  job-controller  Error creating: pods "cb61d98b-bd1e-47fa-8ed7-efd5baee3f31-" is forbidden: unable to validate against any pod security policy: [spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[4]: Invalid value: "hostPath": hostPath volumes are not allowed to be used]

which is obviously my fault for not applying a more permissive policy to the starboard namespace, but I think the Starboard CLI could do a better job of reporting that failure.

What did you expect to happen:

Any sort of error, ideally telling me what the issue was.

Anything else you would like to add:

Thanks for this tool! In the middle of security auditing at the moment and it's making my life way easier.

Environment:

  • Starboard version (use starboard version): Starboard Version: {Version:0.2.1 Commit:02c0c55260c0331c590602dba624819813c97d64 Date:2020-06-05T23:15:17Z}
  • Kubernetes version (use kubectl version): 1.18.3 client talking to 1.17.6 server
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Windows 10 & Ubuntu 20.04 (wsl2)

Try getting error message from failing scan jobs

It's inevitable that some scan jobs will fail. In such cases the Starboard CLI does not provide an informative error message beyond the Job status. What's more, Starboard cleans up the Job, so it's not possible to check Pod logs and see the root cause of the problem.

Output init manifests

Describe the problem/challenge you have

To install starboard via GitOps, we need the YAML manifests. It would be great if the init command could optionally just output the manifests to stdout.

Describe the solution you'd like

Anything else you would like to add:

Happy to help with a PR if this sounds good and there's a preferable cmd line argument to trigger this (e.g. starboard manifests)

Environment:

  • Starboard version (use starboard version):
  • Kubernetes version (use kubectl version):
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc):

Set ownerReference for vulnerabilities resource

By design vulnerabilities are associated with native Kubernetes workloads such as Deployments or Stateful Sets via labels and label selectors. To leverage Kubernetes Garbage Collection we can set the ownerReference for each vulnerabilities resource when we create it.
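
A hedged sketch of setting that reference when creating a report for a Deployment (deploy is assumed to be the owning *appsv1.Deployment; field values are illustrative):

controller := true
report.OwnerReferences = []metav1.OwnerReference{{
	APIVersion:         "apps/v1",
	Kind:               "Deployment",
	Name:               deploy.Name,
	UID:                deploy.UID,
	Controller:         &controller,
	BlockOwnerDeletion: &controller,
}}
// When the Deployment is deleted, the garbage collector now removes
// the report automatically.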

kube-bench and kube-hunter scanners do not respect the --delete-scan-job flag

What steps did you take and what happened:

When I run starboard kube-bench --delete-scan-job=false or starboard kube-hunter --delete-scan-job=false, the Starboard CLI deletes the underlying K8S jobs anyway.

What did you expect to happen:

kube-hunter and kube-bench scanner should respect the --delete-scan-job=false flag.

Anything else you would like to add:

N/A

Environment:

  • Starboard version (use starboard version): 0.2.1
  • Kubernetes version (use kubectl version): Any
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Any

HACKING.md needs an update on how to run tests after the addition of integration tests

What steps did you take and what happened:
Tried to follow HACKING.md to run the tests using make test. This used to work when we only had unit tests, but after the addition of integration tests we ended up creating two targets in the Makefile (unit-tests & integration-tests).

What did you expect to happen:
A make test target should still be present in the Makefile and should call the unit-tests and integration-tests targets.
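
A minimal sketch of that aggregate target, assuming the two existing target names:

.PHONY: test
test: unit-tests integration-tests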

Environment:

  • Starboard version (use starboard version): Latest dev
  • Kubernetes version (use kubectl version): Any
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Any

Upgrade Polaris to 1.0

Describe the problem/challenge you have

Currently we have pinned the Polaris image to quay.io/fairwinds/polaris:cfc0d213cd603793d8e36eecfb0def1579a34997 and we're running the polaris audit --log-level error command as a Kubernetes Job to check all workloads and create configauditreports resources (see an example).

Describe the solution you'd like

We'd like to:

  • Upgrade to the latest and greatest Polaris
  • Review and adapt the configauditreports definition to store configuration audits
    • Add OpenAPI Spec to validate configauditreports payload compatible with Polaris 1.0 output
  • Keep the configauditreports definition flexible for other vendors to integrate with Starboard

Anything else you would like to add:

This might be related to #29

Define resource requests and limits on scan jobs

Describe the problem/challenge you have

Currently the scan jobs do not set resource requests and limits. In some environments this might prevent scanners from running.

Describe the solution you'd like

Each scanner job should define resource requests and limits.
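
A hedged sketch of such defaults on the scan container (the concrete values are assumptions, not Starboard's):

// resource is k8s.io/apimachinery/pkg/api/resource.
container.Resources = corev1.ResourceRequirements{
	Requests: corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	},
	Limits: corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("500m"),
		corev1.ResourceMemory: resource.MustParse("500Mi"),
	},
}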

Anything else you would like to add:

Consider how to make the limits configurable with flags or the Starboard config map.

Environment:

  • Starboard version (use starboard version): 0.2.1
  • Kubernetes version (use kubectl version): Any
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc): Any
