
shopify / kubeaudit


kubeaudit helps you audit your Kubernetes clusters against common security controls

License: MIT License

Makefile 0.50% Go 98.44% Shell 0.34% Dockerfile 0.71%
kubernetes audit computers

kubeaudit's Issues

Client Version Not Available

version attempts to print the Kubernetes client version, but that information isn't available so it only prints:

INFO[0000] Kubernetes client version                     Major= Minor= Platform=darwin/amd64

This is because client-go reports its version with the function https://github.com/kubernetes/client-go/blob/03bfb9bdcfe5482795b999f39ca3ed9ad42ce5bb/pkg/version/version.go#L28-L30. We don't use the k8s builder that would set those at build time, so the values fall back to https://github.com/kubernetes/client-go/blob/03bfb9bdcfe5482795b999f39ca3ed9ad42ce5bb/pkg/version/base.go#L42-L62.

Since the imported client-go will always be the same for any build of kubeaudit, I think we should add build and platform info to the kubeaudit version and stop attempting to report the kubernetes client version.
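A minimal sketch of what reporting kubeaudit's own build and platform info could look like, assuming a `Version` variable injected at build time via `-ldflags` (the variable names and flag values here are illustrative, not kubeaudit's actual API):

```go
package main

import (
	"fmt"
	"runtime"
)

// Version and BuildDate are placeholders meant to be overridden at build
// time, e.g.:
//   go build -ldflags "-X main.Version=0.4.0 -X main.BuildDate=2019-01-01"
var (
	Version   = "dev"
	BuildDate = "unknown"
)

// versionString reports kubeaudit's own version plus the platform it was
// built for, instead of the (always-empty) client-go version fields.
func versionString() string {
	return fmt.Sprintf("Kubeaudit %s (built %s, %s/%s, %s)",
		Version, BuildDate, runtime.GOOS, runtime.GOARCH, runtime.Version())
}

func main() {
	fmt.Println(versionString())
}
```

This mirrors the ldflags approach the k8s builder uses, but applied to kubeaudit's own version rather than the imported client-go.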

Which Kubernetes versions are supported?

I want to know which specific Kubernetes versions are supported.

For example, I am using the apps/v1 resource type (from Kubernetes v1.9) for Deployments, but I cannot check it because the tool doesn't support it...

Will it be supported soon?

Thank you very much for this helpful tool!

Versioning/Releases and Tags

πŸ‘‹ @jinankjain @jonpulsifer

Do we want to version kubeaudit?

  • Create releases and tags in GitHub
  • Add a flag to the kubeaudit binary that reports the current version.

That way you could install a specific version via e.g. glide and check which version is currently installed with e.g. --version.

Audit for (nix) namespaces

  • The use of the host's network namespace | hostNetwork
  • The use of the host's PID namespace | hostPID
  • The use of the host's IPC namespace | hostIPC
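Such an audit could be sketched as a simple check over the pod spec's host-namespace flags. `PodSpecView` below is a hypothetical, trimmed-down stand-in for the relevant fields; the real audit would read `v1.PodSpec`:

```go
package main

import "fmt"

// PodSpecView is a reduced stand-in for the host-namespace fields of a
// Kubernetes PodSpec.
type PodSpecView struct {
	HostNetwork bool
	HostPID     bool
	HostIPC     bool
}

// auditHostNamespaces returns one finding per host namespace in use.
func auditHostNamespaces(spec PodSpecView) []string {
	var findings []string
	if spec.HostNetwork {
		findings = append(findings, "hostNetwork is set: pod shares the host's network namespace")
	}
	if spec.HostPID {
		findings = append(findings, "hostPID is set: pod shares the host's PID namespace")
	}
	if spec.HostIPC {
		findings = append(findings, "hostIPC is set: pod shares the host's IPC namespace")
	}
	return findings
}

func main() {
	for _, f := range auditHostNamespaces(PodSpecView{HostNetwork: true, HostPID: true}) {
		fmt.Println("ERRO:", f)
	}
}
```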

Check that all capabilities are dropped

With #34 we now have a list of all the capabilities that are dropped. Now we should establish a way of making sure all possible capabilities are dropped. #33 would give us the functionality to mark a capability as intentionally not dropped.
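One way to check this is to diff the dropped capabilities against the recommended list, treating "ALL" (in any case) as satisfying everything. This is a sketch with an illustrative subset of capabilities; the real list would come from the project's capabilities drop list:

```go
package main

import (
	"fmt"
	"strings"
)

// recommendedDrops is an illustrative subset, not the project's full list.
var recommendedDrops = []string{"AUDIT_WRITE", "CHOWN", "NET_RAW", "SETUID"}

// capsNotDropped returns the recommended capabilities missing from dropped.
// Dropping "ALL" (case-insensitively, so "all" also counts) satisfies every
// recommendation.
func capsNotDropped(dropped []string) []string {
	set := map[string]bool{}
	for _, c := range dropped {
		if strings.EqualFold(c, "ALL") {
			return nil
		}
		set[strings.ToUpper(c)] = true
	}
	var missing []string
	for _, c := range recommendedDrops {
		if !set[c] {
			missing = append(missing, c)
		}
	}
	return missing
}

func main() {
	fmt.Println(capsNotDropped([]string{"CHOWN"}))
}
```

The case-insensitive "ALL" handling also matters for the false-positive report below, where `drop: - all` is flagged despite dropping everything.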

False positive when all capabilities dropped

Running kubeaudit caps returns a lot of ERRO[0003] Capability not dropped messages for the pods with the following effective settings:

    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - all
      readOnlyRootFilesystem: true

From inside the container everything is clearly OK:

grep ^Cap /proc/1/status
CapInh:    0000000000000000
CapPrm:    0000000000000000
CapEff:    0000000000000000
CapBnd:    0000000000000000
CapAmb:    0000000000000000

Autofix broken for multiple containers

When autofixing

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cababilitiesAdded
  namespace: fakeDeploymentSC
spec:
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        apps: fakeSecurityContext
    spec:
      containers:
      - name: fakeContainerSC1
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - AUDIT_WRITE
      - name: fakeContainerSC2

The resulting YAML is

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cababilitiesAdded
  namespace: fakeDeploymentSC
spec:
  selector: null
  strategy: {}
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/fakeContainerSC1: runtime/default
        container.apparmor.security.beta.kubernetes.io/fakeContainerSC2: runtime/default
        seccomp.security.alpha.kubernetes.io/pod: runtime/default
      creationTimestamp: null
      labels:
        apps: fakeSecurityContext
    spec:
      automountServiceAccountToken: false
      containers:
      - name: fakeContainerSC1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - NET_RAW
            - SETFCAP
            - SETGID
            - SETPCAP
            - SETUID
            - SYS_CHROOT
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
      - name: fakeContainerSC2
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - NET_RAW
            - SETFCAP
            - SETGID
            - SETPCAP
            - SETUID
            - SYS_CHROOT
status: {}

which has

privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true

added for the first container but not the second. The example above is actually the test file fixtures/autofix_v1.yml. It tests against the expected output fixtures/autofix-fixed_v1.yml. The expected output has

privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true

for both containers yet the test still passes...
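The fix presumably needs to apply the hardening defaults inside a loop over every container rather than only the first. A sketch, using a hypothetical reduced `Container` model (the real autofix mutates `v1.Container.SecurityContext`):

```go
package main

import "fmt"

// Container is a reduced stand-in for a container's security settings.
type Container struct {
	Name                   string
	Privileged             *bool
	ReadOnlyRootFilesystem *bool
	RunAsNonRoot           *bool
}

func boolPtr(b bool) *bool { return &b }

// fixSecurityContext applies the hardening defaults to every container,
// not just the first one -- the core of the fix for this bug.
func fixSecurityContext(containers []Container) []Container {
	for i := range containers {
		if containers[i].Privileged == nil {
			containers[i].Privileged = boolPtr(false)
		}
		if containers[i].ReadOnlyRootFilesystem == nil {
			containers[i].ReadOnlyRootFilesystem = boolPtr(true)
		}
		if containers[i].RunAsNonRoot == nil {
			containers[i].RunAsNonRoot = boolPtr(true)
		}
	}
	return containers
}

func main() {
	fixed := fixSecurityContext([]Container{{Name: "fakeContainerSC1"}, {Name: "fakeContainerSC2"}})
	for _, c := range fixed {
		fmt.Printf("%s privileged=%v\n", c.Name, *c.Privileged)
	}
}
```

The test passing despite the mismatch suggests the fixture comparison also needs to assert on every container, not just the first.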

Version Command Panics

kubeaudit version returns the wrong version number and then panics.

INFO[0000] Kubeaudit                                     Version=0.1.0
Running inside cluster, using the cluster config
ERRO[0000] unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1059878]

Feature Request: Filter dropped caps

Currently kubeaudit only audits whether any capability is dropped or not; it does not take specific capabilities into account.

This feature would introduce a flag through which a user can specify which capabilities must be dropped. kubeaudit would then error if those capabilities are not dropped, instead of just giving a warning.

What do you say @jonpulsifer @klautcomputing ?
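The check behind such a flag could look like the sketch below. The comma-separated flag format and the function name are assumptions for illustration, not an agreed design:

```go
package main

import (
	"fmt"
	"strings"
)

// requiredDropped checks a user-supplied, comma-separated list of
// capabilities (e.g. from a hypothetical --drop flag) against the caps a
// container actually drops, returning the ones still missing. A non-empty
// result would be reported as an error rather than a warning.
func requiredDropped(flagValue string, dropped []string) []string {
	have := map[string]bool{}
	for _, c := range dropped {
		have[strings.ToUpper(c)] = true
	}
	var missing []string
	for _, want := range strings.Split(flagValue, ",") {
		want = strings.ToUpper(strings.TrimSpace(want))
		if want != "" && !have[want] {
			missing = append(missing, want)
		}
	}
	return missing
}

func main() {
	fmt.Println(requiredDropped("NET_RAW,SYS_ADMIN", []string{"NET_RAW"}))
}
```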

Wrong default config on Linux?

Versions:

Kubectl version

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Kubeaudit

v0.3.0
URL: https://github.com/Shopify/kubeaudit/releases/download/v0.3.0/kubeaudit_0.3.0_linux_amd64.tar.gz

When running kubectl-audit all
the following error is observed:

ERRO[0000] unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined                                                           
panic: runtime error: invalid memory address or nil pointer dereference                                                                                                           
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x457818]                                                                                                            
                                                                                                                                                                                  
goroutine 1 [running]:                                                                                                                                                            
github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes.NewForConfig(0x0, 0x1, 0x1, 0x116de20)                                                                            
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes/clientset.go:399 +0x4e                                                          
github.com/Shopify/kubeaudit/cmd.kubeClient(0x0, 0x0, 0xc0000f39f0, 0x4d6fdd, 0x1049d60)                                                                                          
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/kubernetes.go:40 +0xe5                                                                                         
github.com/Shopify/kubeaudit/cmd.getResources(0xefb520, 0xc0002864c0, 0x0, 0x0, 0xffffffffffffffff)                                                                               
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/util.go:226 +0x9a                                                                                              
github.com/Shopify/kubeaudit/cmd.runAudit.func1(0x1a4fd20, 0x1a78dd0, 0x0, 0x0)                                                                                                   
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/util.go:294 +0x75                                                                                              
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).execute(0x1a4fd20, 0x1a78dd0, 0x0, 0x0, 0x1a4fd20, 0x1a78dd0)                                               
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:760 +0x2cc                                                                
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1a514e0, 0x1a514e0, 0xc0000f3f30, 0x1)                                                           
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:846 +0x2fd                                                                
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).Execute(0x1a514e0, 0x4056a0, 0xc000086058)                                                                  
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:794 +0x2b                                                                 
github.com/Shopify/kubeaudit/cmd.Execute()                                                                                                                                        
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/root.go:37 +0x2d                                                                                               
main.main()                                                                                                                                                                       
        /Users/shane.lawrence/src/github.com/Shopify/kubeaudit/main.go:6 +0x20 

The solution is to define the configuration file location:
kubectl-audit all -c ~/.kube/config

Enhance autofix's yaml handling

When running autofix it reads the yaml file correctly, the resource gets fixed and the write works as well. Yet, there are 3 things that could be improved upon:

Empty resources get added to the yaml file:

status:
  loadBalancer: {}

Yaml comments disappear

This is a known issue in YAML handling and has been an open issue in go-yaml for more than 2 years, see:

There is an initial version of a patch out there, but it was never finished:

There seems to be one parser, in Python, that can preserve comments

Order is not preserved

Support for yaml.MapSlice was added in go-yaml.v2 see:

The whole discussion about the feature can be found here

Refactoring of tests

Once #19 is merged we could get rid of all the fakeaudit/fakeResource.go files with one helper function in utils.go that just traverses the test folder and builds everything it needs on the fly. What do you think about this, @jinankjain?

open config/capabilities-drop-list.yml: no such file or directory

running

kubeaudit -l -n test all

I get:

ERRO[0000] This should not have happened, if you are on kubeaudit master please consider to report: open config/capabilities-drop-list.yml: no such file or directory KubeType=pod Name=test-775c4c6459-wwjbf Namespace=test

Installed from master just today

3a363010d61aecd9d8c26fe7b26763facb956f97

relevant config part for the given pod:

          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1000
            runAsNonRoot: true
            privileged: false
            capabilities:
              drop:
                - all

`-c` is ignored when `-l` is used

When both -c and -l are used, the expected behaviour is to use the config file specified by -c. That switch is currently ignored and -l forces the use of the default $HOME/.kube/config.

./kubeaudit -l version
INFO[0000] Kubeaudit                                     Version=0.1.0
INFO[0000] Kubernetes server version                     Major=1 Minor=10+ Platform=linux/amd64
INFO[0000] Kubernetes client version                     Major= Minor= Platform=darwin/amd64

./kubeaudit -l -c /notarealfile version
INFO[0000] Kubeaudit                                     Version=0.1.0
INFO[0000] Kubernetes server version                     Major=1 Minor=10+ Platform=linux/amd64
INFO[0000] Kubernetes client version                     Major= Minor= Platform=darwin/amd64
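The intended precedence can be sketched as a small resolver: an explicit `-c` path must win over the `-l` default. The function and parameter names here are illustrative, not kubeaudit's actual code:

```go
package main

import "fmt"

// resolveConfigPath shows the intended precedence: an explicit -c path wins
// over the -l default, and neither means in-cluster config.
func resolveConfigPath(cFlag, homeDir string, localMode bool) string {
	if cFlag != "" {
		return cFlag // -c always wins, even combined with -l
	}
	if localMode {
		return homeDir + "/.kube/config" // -l fallback
	}
	return "" // empty: use in-cluster configuration
}

func main() {
	fmt.Println(resolveConfigPath("/notarealfile", "/home/u", true))
}
```

With this ordering, `./kubeaudit -l -c /notarealfile version` would fail to load `/notarealfile` instead of silently using `$HOME/.kube/config`.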

Bug: auditing yml does not work

There is a bug in auditing yml files: auditSecurityContext is invoked everywhere in image.go, runAsNonRoot.go, etc. instead of the specific audit function.

#38 breaks :GoRename

#38 breaks :GoRename. I haven't figured out why, but since it was merged, :GoRename fails with:

/github.com/Shopify/kubeaudit/cmd/types.go|11| 10: expected type, found '=' (and 10 more errors)
/github.com/Shopify/kubeaudit/cmd/util.go|87| 16: undeclared name: Capability
/github.com/Shopify/kubeaudit/cmd/util.go|89| 16: undeclared name: Capability
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|57| 55: undeclared name: DeploymentList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|66| 56: undeclared name: StatefulSetList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|75| 54: undeclared name: DaemonSetList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|84| 48: undeclared name: PodList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|93| 66: undeclared name: ReplicationControllerList

We should find out why :)

Support PodSecurityPolicies

Several things which kubeaudit checks for (such as privileges and capabilities) can also be controlled using PodSecurityPolicies (PSPs). Add support for auditing PSPs which takes into account override order with annotations and security contexts.

Some notes:

  • PSPs are cluster wide and as such will require different logic than is currently used for all of the resource specific settings.
  • As an additional side effect, PSPs may already live in a cluster unbeknownst to someone adding resources to that cluster. Kubeaudit should have the option to account for this even when auditing in "local mode" using kubernetes configuration files (which currently does not connect to a live cluster).

Json is sad

When logging to JSON the output gets mangled and instead of getting nice info like this:

ERRO[0000] Not all of the recommended capabilities were dropped! Please drop the mentioned capabiliites. CapsNotDropped="[NET_BIND_SERVICE]" KubeType=deployment Name=foo

only the following is shown:

{"CapsNotDropped":{},"KubeType":{},"Name":{},"level":"error","msg":"Not all of the recommended capabilities were dropped! Please drop the mentioned capabiliites.","time":"2017-10-30T13:44:14-04:00"}

unknown command "all" for "kubeaudit"

Running kubeaudit all should perform all available audits, but the command is not recognized.

➜  ./kubeaudit -l all
Error: unknown command "all" for "kubeaudit"

Did you mean this?
    allowpe

New (binary) release?

I would be interested in kubeaudit, but the latest release is from November 2017. Do you have plans to cut a new release in the near future and provide binaries for download?

Authenticate to cluster

I am trying to use kubeaudit with my kubernetes cluster. How do I specify an OIDC token in the header for authentication or is this capability not supported at this time?

kubeaudit_0.2.0_darwin_amd64 shenoyk$ ./kubeaudit -l rootfs
ERRO[0000] No Auth Provider found for name "oidc"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1a71aa7]

goroutine 1 [running]:
github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes.(*Clientset).AppsV1beta1(...)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes/clientset.go:154
github.com/Shopify/kubeaudit/cmd.getDeployments(0x0, 0xc420112c00)
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/kubernetes.go:48 +0x37
github.com/Shopify/kubeaudit/cmd.getKubeResources(0x0, 0x1, 0x1, 0x2341320)
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/util.go:320 +0x40
github.com/Shopify/kubeaudit/cmd.runAudit.func1(0x23a9620, 0xc420321170, 0x0, 0x1)
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/util.go:409 +0x4ce
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).execute(0x23a9620, 0xc420321140, 0x1, 0x1, 0x23a9620, 0xc420321140)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:702 +0x2c6
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x23a9840, 0x23a9840, 0xc4203bbf18, 0x1)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:783 +0x30e
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).Execute(0x23a9840, 0x0, 0x1b2dcc0)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/Shopify/kubeaudit/cmd.Execute()
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/root.go:32 +0x31
main.main()
/Users/leex/go/src/github.com/Shopify/kubeaudit/main.go:6 +0x20

NetworkPolicy check not implemented?

In https://github.com/Shopify/kubeaudit#audit-network-policies the following is described:

It checks that every namespace should have a default deny network policy installed. 
See Kubernetes Network Policies for more information:

But the code https://github.com/Shopify/kubeaudit/blob/master/cmd/networkPolicies.go actually only iterates over existing network policies and doesn't check whether a default-deny policy is set. Also, currently only the default allow-all policy is checked (which leads to a warning).

Version info should move to version command

This

{"Major":"1","Minor":"7+","Platform":"linux/amd64","level":"info","msg":"Kubernetes server version","time":"2017-10-21T15:35:26-04:00"}
{"Major":"","Minor":"","Platform":"darwin/amd64","level":"info","msg":"Kubernetes client version","time":"2017-10-21T15:35:26-04:00"}

should only be shown when kubeaudit version is called and not every time kubeaudit -l is invoked.

labels don't seem to be working?

Relevant config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nfs-server
  labels:
    name: test-nfs-server
    kubeaudit.allow.privilegeEscalation: "true"
    kubeaudit.allow.privileged: "true"
    kubeaudit.allow.capability: "true"
    kubeaudit.allow.runAsRoot: "true"
    kubeaudit.allow.readOnlyRootFilesystemFalse: "true"
spec:
  selector:
    matchLabels:
      name: t test-nfs-server
  replicas: 1
  template:
    metadata:
      labels:
        name:  test-nfs-server
        kubeaudit.allow.privilegeEscalation: "true"
        kubeaudit.allow.privileged: "true"
        kubeaudit.allow.capability: "true"
        kubeaudit.allow.runAsRoot: "true"
        kubeaudit.allow.readOnlyRootFilesystemFalse: "true"

running

kubeaudit -l -v ERROR -n test all

Gives output:

time="2018-07-27T13:19:34+03:00" level=error msg="AllowPrivilegeEscalation not set which allows privilege escalation, please set to false" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="ReadOnlyRootFilesystem not set which results in a writable rootFS, please set to true" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="RunAsNonRoot is not set, which results in root user being allowed!" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="Privileged set to true! Please change it to false!" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="This should not have happened, if you are on kubeaudit master please consider to report: open config/capabilities-drop-list.yml: no such file or directory" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="AllowPrivilegeEscalation not set which allows privilege escalation, please set to false" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="ReadOnlyRootFilesystem not set which results in a writable rootFS, please set to true" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="RunAsNonRoot is not set, which results in root user being allowed!" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="Privileged set to true! Please change it to false!" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="This should not have happened, if you are on kubeaudit master please consider to report: open config/capabilities-drop-list.yml: no such file or directory" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test

am I missing something?

Update all dependencies

Kubeaudit currently has a couple of older dependencies. It would be great to make kubeaudit run on the newer versions of all its dependencies.

Autofix skips and drops resources

When kubeaudit is run with autofix -f file.yaml and the file to be autofixed contains resources that kubeaudit doesn't know about, e.g. Ingress and Service, the following happens:

WARN[0000] Skipping unsupported resource type extensions/v1beta1, Kind=Ingress
WARN[0000] Skipping unsupported resource type /v1, Kind=Service

kubeaudit skips and drops them. What it should do is skip and keep them.

Allow labels don't support multiple containers

Problem: The current implementation of labels doesn't allow specifying which container a deviation is allowed for. E.g. kubeaudit.allow.capability.chown: "true" carries no information about whether it refers to the first or the second container if a resource has more than one:

      containers:
      - name: first
      - name: second

Solution: Change labels to include the container they refer to.

2x your capability drops!

Currently, autofix does not detect that caps have already been dropped, so it drops them again.
I haven't had a look at why, but this is the result:

          capabilities:
            drop:
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - SETGID
            - SETFCAP
            - SETPCAP
            - SETUID
            - SYS_CHROOT
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - NET_RAW
            - SETFCAP
            - SETGID
            - SETPCAP
            - SETUID
            - SYS_CHROOT
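A fix would be to deduplicate while merging the recommended drops into the existing list. A sketch (the function name is illustrative):

```go
package main

import "fmt"

// mergeDrops appends the capabilities to be dropped while skipping ones
// already present, so autofix doesn't duplicate an existing drop list.
func mergeDrops(existing, toAdd []string) []string {
	seen := map[string]bool{}
	for _, c := range existing {
		seen[c] = true
	}
	out := append([]string{}, existing...)
	for _, c := range toAdd {
		if !seen[c] {
			seen[c] = true
			out = append(out, c)
		}
	}
	return out
}

func main() {
	fmt.Println(mergeDrops([]string{"CHOWN", "KILL"}, []string{"CHOWN", "NET_RAW", "KILL"}))
}
```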

Audit PodSecurityPolicy, AppArmor, and Seccomp

Running pods (if they're using PSP/AppArmor/seccomp) will bear one or more of the following annotations:

metadata:
  annotations:
    # podsecuritypolicy
    kubernetes.io/psp: name

    # seccomp
    seccomp.security.alpha.kubernetes.io/pod: <profile>
    container.seccomp.security.alpha.kubernetes.io/<container name>: <profile>

    # apparmor
    apparmor.security.beta.kubernetes.io/pod: <profile>
    container.apparmor.security.beta.kubernetes.io/<container name>: <profile>

possible seccomp profiles:

  • docker/default
  • localhost/customprofilename
  • unconfined

possible apparmor profiles:

  • runtime/default
  • localhost/customprofilename
  • unconfined

pod security policies are referenced by their metadata.name

RunAsNonRoot can be inherited from PodSecurityContext

The current check covers only the container SecurityContext, but RunAsNonRoot, RunAsUser, RunAsGroup and SELinuxOptions are all inherited from the PodSecurityContext unless they are defined explicitly per container.

Please consider adding PodSecurityContext to the list of checked values.

Ref:

func checkRunAsNonRoot(container Container, result *Result) {

Extra newline generated by autofix on manifest starting with comment after yaml separator

ISSUE TYPE
  • Bug Report
  • Feature Idea

BUG REPORT

SUMMARY

An extra newline is generated by autofix on a manifest starting with a comment after the yaml separator.

ENVIRONMENT
  • Kubeaudit version: 0.4.1 (branch autofix)
  • Kubeaudit install method: -
STEPS TO REPRODUCE

Create a manifest file with the following structure

---
#This is a comment 3
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: null
spec:
  rules: #This is a comment 1
  - http:
      paths:
      - backend:
          serviceName: test
          servicePort: 80
        path: /testpath
status:
  loadBalancer: {}
#This is a comment 5

run

kubeaudit autofix -f /path/to/manifest.yml
EXPECTED RESULTS

There should not be an extra newline after the yaml separator.

ACTUAL RESULTS

changes the file to

---
  
#This is a comment 3
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: null
spec:
  rules: #This is a comment 1
  - http:
      paths:
      - backend:
          serviceName: test
          servicePort: 80
        path: /testpath
status:
  loadBalancer: {}
#This is a comment 5

CI

Do we want it? And if yes what do we want? Travis or Circle?

Build Fails on Alpine Linux

The version sort feature was added to GNU sort relatively recently. Busybox and older versions of Linux don't support it.

sort: unrecognized option: V
BusyBox v1.28.4 (2018-07-17 15:21:40 UTC) multi-call binary.

Usage: sort [-nrugMcszbdfiokt] [-o FILE] [-k start[.offset][opts][,end[.offset][opts]] [-t CHAR] [FILE]...

Another refactor issue

The code here https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L78-L80
should be refactored to something like this:

	var results []Result
	for _, resource := range resources {
		results = append(results, auditRunAsNonRoot(resource))
	}

Why am I saying "something like that"? Because we want to keep the go statements, and that might require channels.
We want to do this because the print here https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L36-L38 is totally out of place, and the audit functions get used in other places where the printing doesn't make sense.
Obviously, the print needs to be put back in, e.g.

	var results []Result
	for _, resource := range resources {
		results = append(results, auditRunAsNonRoot(resource))
	}
	for _, result := range results {
		result.Print()
	}

pardon my pseudo code
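The channel-based variant hinted at above could look like this sketch, which keeps the goroutine but moves printing out of the audit itself (all names here are illustrative, not kubeaudit's actual API):

```go
package main

import "fmt"

type Result struct{ Msg string }

// auditRunAsNonRoot stands in for the real audit function; here it just
// tags the resource name so the pipeline shape is visible.
func auditRunAsNonRoot(resource string) Result {
	return Result{Msg: "audited " + resource}
}

// auditAll fans the audits out over a channel, so callers decide whether
// and where to print instead of the audit printing itself.
func auditAll(resources []string) <-chan Result {
	out := make(chan Result)
	go func() {
		defer close(out)
		for _, r := range resources {
			out <- auditRunAsNonRoot(r)
		}
	}()
	return out
}

func main() {
	for result := range auditAll([]string{"deployment/foo", "pod/bar"}) {
		fmt.Println(result.Msg)
	}
}
```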

Introduce logging level in kubeaudit

Currently kubeaudit emits logs when there is an error/warning, but there might be other use cases.

For example, getting more information about the healthy k8s resources, i.e. the ones that are not violating any security policies laid out by kubeaudit.

So for this we need different log levels, for example:

INFO: This would be the most verbose log level
ERROR/WARNING: This would be the default log level
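The proposed levels could be sketched with a tiny leveled logger (a stand-in for configuring the logging library kubeaudit actually uses):

```go
package main

import "fmt"

type Level int

const (
	ErrorLevel Level = iota
	WarnLevel
	InfoLevel
)

// logger filters messages below its configured verbosity, mirroring the
// proposal: INFO as the most verbose level, ERROR/WARNING as the default.
type logger struct{ level Level }

// log prints msg only if msgLevel is at or above the configured level,
// and reports whether it printed.
func (l logger) log(msgLevel Level, msg string) bool {
	if msgLevel > l.level {
		return false
	}
	fmt.Println(msg)
	return true
}

func main() {
	l := logger{level: WarnLevel} // the proposed default
	l.log(ErrorLevel, "Privileged set to true!")
	l.log(InfoLevel, "resource passed all audits") // suppressed at default level
}
```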

@klautcomputing

Multi-* tests

Learning from #88 and #87, we need to introduce more tests. Especially some that test multiple resources per config file and multiple containers per resource.
