
jfrog / kubexray

JFrog KubeXray scanner on Kubernetes

License: Apache License 2.0

Languages: Go 89.61%, Smarty 7.91%, Dockerfile 1.50%, Makefile 0.98%
Topics: xray, scan, devops, security, kubernetes, kubernetes-operator

kubexray's Issues

Move to admission controller

To be more efficient and to have more control over the pod lifecycle, it makes more sense to move to using admission controllers.
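
For context, a minimal sketch of what such a validating admission webhook could look like in Go is below; the handler, the /validate endpoint path, and the certificate paths are hypothetical and not part of the current codebase.

// Minimal sketch of a validating admission webhook handler for pods
// (hypothetical; not part of the current kubexray codebase). It decodes an
// AdmissionReview, unmarshals the pod, and allows or denies the pod before
// it is scheduled, which is where a policy check against Xray would plug in.
package main

import (
	"encoding/json"
	"log"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func admitPod(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "could not decode AdmissionReview", http.StatusBadRequest)
		return
	}

	var pod corev1.Pod
	if err := json.Unmarshal(review.Request.Object.Raw, &pod); err != nil {
		http.Error(w, "could not decode pod", http.StatusBadRequest)
		return
	}

	// Placeholder for the scan-policy decision: deny the pod if its images
	// violate the policy, otherwise allow it.
	allowed, reason := true, ""

	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: allowed,
	}
	if !allowed {
		review.Response.Result = &metav1.Status{Message: reason}
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/validate", admitPod)
	// The API server requires TLS for admission webhooks; cert paths are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "/certs/tls.crt", "/certs/tls.key", nil))
}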

slice bounds out of range panic on pod from daemonset

When running kubexray on our cluster we get the following panic:

/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:54
/build/kubexray/handler.go:527
/build/kubexray/handler.go:359
/build/kubexray/controller.go:127
/build/kubexray/controller.go:56
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
/build/kubexray/controller.go:47
/usr/local/go/src/runtime/asm_amd64.s:1337
E0329 18:50:01.174685       1 runtime.go:69] Observed a panic: "slice bounds out of range" (runtime error: slice bounds out of range)
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:76
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:54
/build/kubexray/handler.go:527
/build/kubexray/handler.go:359
/build/kubexray/controller.go:127
/build/kubexray/controller.go:56
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134
/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
/build/kubexray/controller.go:47
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: slice bounds out of range [recovered]
        panic: runtime error: slice bounds out of range [recovered]
        panic: runtime error: slice bounds out of range

goroutine 24 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58 +0x105
panic(0x10b3d60, 0x1cf0b90)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58 +0x105
panic(0x10b3d60, 0x1cf0b90)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
main.checkResource(0x13d4880, 0xc000112700, 0xc0002cdc00, 0x1, 0xb, 0xf)
        /build/kubexray/handler.go:527 +0x52c
main.(*HandlerImpl).ObjectCreated(0xc00026a480, 0x13d4880, 0xc000112700, 0x11d28c0, 0xc0002cdc00)
        /build/kubexray/handler.go:359 +0xce
main.(*Controller).processNextQueueItem(0xc000209ef0, 0xc000631e00)
        /build/kubexray/controller.go:127 +0x2cf
main.(*Controller).runWorker(0xc000209ef0)
        /build/kubexray/controller.go:56 +0xcb
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00005d788)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x54
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000631f88, 0x3b9aca00, 0x0, 0x1, 0xc00008a900)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(...)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
main.(*Controller).Run(0xc000209ef0, 0xc00008a900)
        /build/kubexray/controller.go:47 +0x2f1
created by main.main
        /build/kubexray/main.go:165 +0x758

I believe this happens because we have a pod from a DaemonSet that doesn't adhere to the naming scheme expected by the checkResource function:

func checkResource(client kubernetes.Interface, pod *core_v1.Pod) (string, ResourceType) {
	subs1 := strings.LastIndexByte(pod.Name, '-')
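	// Note: LastIndexByte returns -1 when no '-' is found, so for a pod name
	// with fewer than two dashes (e.g. the DaemonSet pod falco-wzdl4) the
	// slice expressions pod.Name[:subs1] / pod.Name[:subs2] below panic.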
	subs2 := strings.LastIndexByte(pod.Name[:subs1], '-')
	sets := client.AppsV1().StatefulSets(pod.Namespace)
	_, err := sets.Get(pod.Name[:subs1], meta_v1.GetOptions{})
	if err == nil {
		return pod.Name[:subs1], StatefulSet
	}
	log.Debugf("Resource for pod %s is not stateful set %s: %v", pod.Name, pod.Name[:subs1], err)
	deps := client.AppsV1().Deployments(pod.Namespace)
	_, err = deps.Get(pod.Name[:subs2], meta_v1.GetOptions{})
	if err == nil {
		return pod.Name[:subs2], Deployment
	}
	log.Debugf("Resource for pod %s is not deployment %s: %v", pod.Name, pod.Name[:subs2], err)
	return "", Unrecognized
}

From the debug logs:

time="2019-03-29T19:02:59Z" level=debug msg=HandlerImpl.ObjectCreated
time="2019-03-29T19:02:59Z" level=debug msg="Resource for pod falco-wzdl4 is not stateful set falco: statefulsets.apps \"falco\" not found"
E0329 19:02:59.457975       1 runtime.go:69] Observed a panic: "slice bounds out of range" (runtime error: slice bounds out of range)

The falco-wzdl4 pod doesn't contain two dashes. Please advise on this issue.
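
A minimal sketch of a possible guard is below; ownerCandidates is a hypothetical standalone helper that only illustrates returning early instead of slicing with a negative index, not the project's actual fix.

// Sketch of a guard for the naming logic in checkResource (ownerCandidates is
// a hypothetical helper): derive the candidate StatefulSet and Deployment
// names from a pod name, returning empty strings when a '-' separator is
// missing instead of slicing with a negative index.
package main

import (
	"fmt"
	"strings"
)

func ownerCandidates(podName string) (statefulSetName, deploymentName string) {
	subs1 := strings.LastIndexByte(podName, '-')
	if subs1 < 0 {
		// No dash at all: no StatefulSet or Deployment candidate.
		return "", ""
	}
	statefulSetName = podName[:subs1]
	subs2 := strings.LastIndexByte(podName[:subs1], '-')
	if subs2 < 0 {
		// Only one dash (e.g. a DaemonSet pod like falco-wzdl4): skip the
		// Deployment candidate instead of panicking.
		return statefulSetName, ""
	}
	return statefulSetName, podName[:subs2]
}

func main() {
	fmt.Println(ownerCandidates("falco-wzdl4"))        // "falco", ""
	fmt.Println(ownerCandidates("web-7d4b9c8f6-x2k9p")) // "web-7d4b9c8f6", "web"
	fmt.Println(ownerCandidates("standalone"))          // "", ""
}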

whitelistNamespaces in values.yaml should be 1 level up

What?
whitelistNamespaces in values.yaml should be global and probably directly under scanPolicy.

Why?
whitelistNamespaces is applicable to all policies. In the default values.yaml it is not clear whether this key applies to all policies or only to the unscanned ones.

Existing sample -

scanPolicy:
  unscanned:
    # Set for unscanned deployments delete/scaledown/ignore
    deployments: ignore
    # Set for unscanned statefulsets delete/scaledown/ignore
    statefulSets: ignore
    # Whitelist namespaces
    whitelistNamespaces: "kube-system,kube-xray"
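
A proposed layout along these lines (a sketch of the suggested change, not the chart's current structure) would be:

scanPolicy:
  # Whitelist namespaces (applies to all policies)
  whitelistNamespaces: "kube-system,kube-xray"
  unscanned:
    # Set for unscanned deployments delete/scaledown/ignore
    deployments: ignore
    # Set for unscanned statefulsets delete/scaledown/ignore
    statefulSets: ignore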

Add support for SNS

We'd like to automate actions based on the scan results. Being able to push to SNS would be very useful.

Whitelisted namespaces are not honored

The whitelist is not honored, and no matter the order in which the namespaces are provided it does not help. Xray version: 2.3.3, Revision: 6b3b534.

Also, the response from the API has issues and licenses as separate attributes:
{"artifacts":[{........},"issues":[],"licenses":[{"name":"Unknown","full_name":"Unknown license","components":[........]}]}]}

But the code looks for security and license entries under issues. Is this due to a version mismatch?

We are using the latest version of the kubexray Helm chart.
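
For reference, a sketch of the reported response shape as Go structs is below; the field names come from the quoted JSON, the elided portions are kept as raw messages because their structure is not shown above, and this is an assumption about the payload rather than the actual kubexray types.

// Sketch of the response shape quoted above, with issues and licenses as
// separate attributes on each artifact; elided parts stay as json.RawMessage.
package main

import (
	"encoding/json"
	"fmt"
)

type ScanResponse struct {
	Artifacts []Artifact `json:"artifacts"`
}

type Artifact struct {
	Issues   []json.RawMessage `json:"issues"`
	Licenses []License         `json:"licenses"`
}

type License struct {
	Name       string            `json:"name"`
	FullName   string            `json:"full_name"`
	Components []json.RawMessage `json:"components"`
}

func main() {
	// Only the fields visible in the quoted response are used here.
	raw := `{"artifacts":[{"issues":[],"licenses":[{"name":"Unknown","full_name":"Unknown license","components":[]}]}]}`
	var resp ScanResponse
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		panic(err)
	}
	fmt.Println(resp.Artifacts[0].Licenses[0].Name) // "Unknown"
}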
