shopify / kubeaudit
kubeaudit helps you audit your Kubernetes clusters against common security controls
License: MIT License
version
attempts to print the Kubernetes client version, but that information isn't available so it only prints:
INFO[0000] Kubernetes client version Major= Minor= Platform=darwin/amd64
This is because client-go reports its version with the function https://github.com/kubernetes/client-go/blob/03bfb9bdcfe5482795b999f39ca3ed9ad42ce5bb/pkg/version/version.go#L28-L30. We don't use the k8s builder that would set those at build time, so the values fall back to https://github.com/kubernetes/client-go/blob/03bfb9bdcfe5482795b999f39ca3ed9ad42ce5bb/pkg/version/base.go#L42-L62.
Since the imported client-go will always be the same for any build of kubeaudit, I think we should add build and platform info to the kubeaudit version and stop attempting to report the kubernetes client version.
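A common way to add build and platform info is to inject it with `-ldflags` at build time. A minimal sketch of what the proposed version output could look like; the variable names `Version` and `BuildDate` are illustrative, not kubeaudit's actual code:

```go
package main

import (
	"fmt"
	"runtime"
)

// Placeholders; a release build would inject real values, e.g.:
//   go build -ldflags "-X main.Version=0.4.0 -X main.BuildDate=2019-01-01"
var (
	Version   = "dev"
	BuildDate = "unknown"
)

// versionLine reports kubeaudit's own version plus the build platform,
// instead of a client version that client-go never populates outside the
// Kubernetes build machinery.
func versionLine() string {
	return fmt.Sprintf("Kubeaudit Version=%s BuildDate=%s Platform=%s/%s",
		Version, BuildDate, runtime.GOOS, runtime.GOARCH)
}

func main() {
	fmt.Println(versionLine())
}
```

With this, `kubeaudit version` always has something meaningful to print, regardless of how client-go was built.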
I want to know which specific Kubernetes versions are supported
For example, I am using the apps/v1 (from Kubernetes v1.9) resource type for Deployments, but I cannot check it because the tool doesn't support it...
Will it be supported soon?
Thank you very much for this helpful tool!
@jinankjain @jonpulsifer
Do we want to version kubeaudit?
So that you can install a specific version via e.g. glide and check which version is currently installed with e.g. --version.
We neglected to add this repo to our OSS website, let's do that.
The use of host's networking | hostNetwork
The use of host's PID namespace | hostPID
The use of host's IPC namespace | hostIPC
Running kubeaudit caps returns a lot of ERRO[0003] Capability not dropped messages for pods with the following effective settings:
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - all
  readOnlyRootFilesystem: true
From inside the container everything is clearly OK:
grep ^Cap /proc/1/status
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000000000000000
CapAmb: 0000000000000000
When autofixing
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cababilitiesAdded
  namespace: fakeDeploymentSC
spec:
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        apps: fakeSecurityContext
    spec:
      containers:
      - name: fakeContainerSC1
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - AUDIT_WRITE
      - name: fakeContainerSC2
The resulting YAML is
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cababilitiesAdded
  namespace: fakeDeploymentSC
spec:
  selector: null
  strategy: {}
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/fakeContainerSC1: runtime/default
        container.apparmor.security.beta.kubernetes.io/fakeContainerSC2: runtime/default
        seccomp.security.alpha.kubernetes.io/pod: runtime/default
      creationTimestamp: null
      labels:
        apps: fakeSecurityContext
    spec:
      automountServiceAccountToken: false
      containers:
      - name: fakeContainerSC1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - NET_RAW
            - SETFCAP
            - SETGID
            - SETPCAP
            - SETUID
            - SYS_CHROOT
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
      - name: fakeContainerSC2
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - AUDIT_WRITE
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - FSETID
            - KILL
            - MKNOD
            - NET_BIND_SERVICE
            - NET_RAW
            - SETFCAP
            - SETGID
            - SETPCAP
            - SETUID
            - SYS_CHROOT
status: {}
which has
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
added for the first container but not the second. The example above is actually the test file fixtures/autofix_v1.yml. It tests against the expected output fixtures/autofix-fixed_v1.yml. The expected output has
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
for both containers yet the test still passes...
Given the current influx of contributors I would like to thank all of them! I like what https://github.com/goreleaser/goreleaser#contributors does, but I am open to other suggestions.
kubeaudit version
returns the wrong version number and then panics.
INFO[0000] Kubeaudit Version=0.1.0
Running inside cluster, using the cluster config
ERRO[0000] unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1059878]
Currently, kubeaudit only audits whether any capability is dropped; it does not take any specific capability into account.
This feature would introduce a flag through which a user could specify which caps must necessarily be dropped. kubeaudit would then error if those caps are not dropped, instead of just giving a warning.
What do you say @jonpulsifer @klautcomputing ?
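A minimal sketch of what that check could look like; the function name and signature are illustrative, not kubeaudit's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// missingDrops returns the capabilities from required that are absent from
// dropped. Names are compared case-insensitively because manifests use both
// "all" and "ALL"; dropping ALL satisfies every required capability.
func missingDrops(dropped, required []string) []string {
	have := map[string]bool{}
	for _, c := range dropped {
		have[strings.ToUpper(c)] = true
	}
	if have["ALL"] {
		return nil
	}
	var missing []string
	for _, c := range required {
		if !have[strings.ToUpper(c)] {
			missing = append(missing, c)
		}
	}
	return missing
}

func main() {
	// Errors would be emitted for each capability still missing.
	fmt.Println(missingDrops([]string{"CHOWN"}, []string{"CHOWN", "NET_RAW"})) // prints [NET_RAW]
}
```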
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
v0.3.0
URL: https://github.com/Shopify/kubeaudit/releases/download/v0.3.0/kubeaudit_0.3.0_linux_amd64.tar.gz
When running kubectl-audit all
the following error is observed:
ERRO[0000] unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x457818]
goroutine 1 [running]:
github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes.NewForConfig(0x0, 0x1, 0x1, 0x116de20)
/Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes/clientset.go:399 +0x4e
github.com/Shopify/kubeaudit/cmd.kubeClient(0x0, 0x0, 0xc0000f39f0, 0x4d6fdd, 0x1049d60)
/Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/kubernetes.go:40 +0xe5
github.com/Shopify/kubeaudit/cmd.getResources(0xefb520, 0xc0002864c0, 0x0, 0x0, 0xffffffffffffffff)
/Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/util.go:226 +0x9a
github.com/Shopify/kubeaudit/cmd.runAudit.func1(0x1a4fd20, 0x1a78dd0, 0x0, 0x0)
/Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/util.go:294 +0x75
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).execute(0x1a4fd20, 0x1a78dd0, 0x0, 0x0, 0x1a4fd20, 0x1a78dd0)
/Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:760 +0x2cc
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1a514e0, 0x1a514e0, 0xc0000f3f30, 0x1)
/Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:846 +0x2fd
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).Execute(0x1a514e0, 0x4056a0, 0xc000086058)
/Users/shane.lawrence/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:794 +0x2b
github.com/Shopify/kubeaudit/cmd.Execute()
/Users/shane.lawrence/src/github.com/Shopify/kubeaudit/cmd/root.go:37 +0x2d
main.main()
/Users/shane.lawrence/src/github.com/Shopify/kubeaudit/main.go:6 +0x20
The solution is to define the configuration file location:
kubectl-audit all -c ~/.kube/config
When running autofix it reads the yaml file correctly, the resource gets fixed and the write works as well. Yet, there are 3 things that could be improved upon:
status:
  loadBalancer: {}
This is a known issue in YAML and has been an open issue in go-yaml for more than 2 years, see:
There is an initial version of a patch out there but it was never finished:
Seems like there is one parser in python that can preserve comments
Support for yaml.MapSlice
was added in go-yaml.v2
see:
The whole discussion about the feature can be found here
Once #19 is merged we could get rid of all the fakeaudit/fakeResource.go files with one helper function in utils.go that just traverses the test folder and builds everything it needs on the fly. What do you think about this @jinankjain?
running
kubeaudit -l -n test all
I get:
ERRO[0000] This should not have happened, if you are on kubeaudit master please consider to report: open config/capabilities-drop-list.yml: no such file or directory KubeType=pod Name=test-775c4c6459-wwjbf Namespace=test
Installed from master just today
3a363010d61aecd9d8c26fe7b26763facb956f97
relevant config part for the given pod:
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  runAsUser: 1000
  runAsNonRoot: true
  privileged: false
  capabilities:
    drop:
    - all
When both -c and -l are used, the expected behaviour is to use the config file specified by -c. That switch is currently ignored and -l forces the use of the default $HOME/.kube/config.
./kubeaudit -l version
INFO[0000] Kubeaudit Version=0.1.0
INFO[0000] Kubernetes server version Major=1 Minor=10+ Platform=linux/amd64
INFO[0000] Kubernetes client version Major= Minor= Platform=darwin/amd64
./kubeaudit -l -c /notarealfile version
INFO[0000] Kubeaudit Version=0.1.0
INFO[0000] Kubernetes server version Major=1 Minor=10+ Platform=linux/amd64
INFO[0000] Kubernetes client version Major= Minor= Platform=darwin/amd64
There is a bug in auditing YAML: auditSecurityContext is invoked everywhere in image.go, runAsNonRoot.go, etc., instead of the specific function.
#38 breaks :GoRename
I haven't figured out why but since it was merged GoRename
fails with:
/github.com/Shopify/kubeaudit/cmd/types.go|11| 10: expected type, found '=' (and 10 more errors)
/github.com/Shopify/kubeaudit/cmd/util.go|87| 16: undeclared name: Capability
/github.com/Shopify/kubeaudit/cmd/util.go|89| 16: undeclared name: Capability
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|57| 55: undeclared name: DeploymentList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|66| 56: undeclared name: StatefulSetList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|75| 54: undeclared name: DaemonSetList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|84| 48: undeclared name: PodList
/github.com/Shopify/kubeaudit/cmd/kubernetes.go|93| 66: undeclared name: ReplicationControllerList
We should find out why :)
Since this isn't Rust, we need to make sure our type switches do the right thing.
Several things which kubeaudit checks for (such as privileges and capabilities) can also be controlled using PodSecurityPolicies (PSPs). Add support for auditing PSPs which takes into account override order with annotations and security contexts.
Some notes:
Right now autofix uses this part of the code to write back the resources to file:
https://github.com/Shopify/kubeaudit/blob/master/cmd/k8sruntime_util.go#L96-L109
This means only the last resource is actually in the file afterwards.
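One way to fix this is to join all serialized resources back into a single multi-document stream before writing. A sketch, assuming the resources have already been marshalled to YAML; `joinManifests` is a hypothetical helper, not kubeaudit code:

```go
package main

import (
	"fmt"
	"strings"
)

// joinManifests joins already-marshalled YAML documents into one
// multi-document stream, separated by "---", so that writing the file
// keeps every resource instead of only the last one.
func joinManifests(docs []string) string {
	out := make([]string, len(docs))
	for i, d := range docs {
		// normalize trailing newlines so each document ends with exactly one
		out[i] = strings.TrimRight(d, "\n") + "\n"
	}
	return strings.Join(out, "---\n")
}

func main() {
	fmt.Print(joinManifests([]string{"kind: Deployment", "kind: Service"}))
}
```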
When logging to JSON the output gets mangled and instead of getting nice info like this:
ERRO[0000] Not all of the recommended capabilities were dropped! Please drop the mentioned capabiliites. CapsNotDropped="[NET_BIND_SERVICE]" KubeType=deployment Name=foo
only the following is shown:
{"CapsNotDropped":{},"KubeType":{},"Name":{},"level":"error","msg":"Not all of the recommended capabilities were dropped! Please drop the mentioned capabiliites.","time":"2017-10-30T13:44:14-04:00"}
Running kubeaudit all
should perform all available audits, but the command is not recognized.
$ ./kubeaudit -l all
Error: unknown command "all" for "kubeaudit"
Did you mean this?
allowpe
I would be interested in kubeaudit, but the latest release is from November 2017. Do you have plans to cut a new release in the near future and provide the binary for download?
Having the waitgroup code inside the function and outside the function is really ugly.
I am talking about this:
https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L39
and then
https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L76-L81
It would be nice if we could find a better solution for that.
I am trying to use kubeaudit with my kubernetes cluster. How do I specify an OIDC token in the header for authentication or is this capability not supported at this time?
kubeaudit_0.2.0_darwin_amd64 shenoyk$ ./kubeaudit -l rootfs
ERRO[0000] No Auth Provider found for name "oidc"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1a71aa7]
goroutine 1 [running]:
github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes.(*Clientset).AppsV1beta1(...)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/k8s.io/client-go/kubernetes/clientset.go:154
github.com/Shopify/kubeaudit/cmd.getDeployments(0x0, 0xc420112c00)
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/kubernetes.go:48 +0x37
github.com/Shopify/kubeaudit/cmd.getKubeResources(0x0, 0x1, 0x1, 0x2341320)
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/util.go:320 +0x40
github.com/Shopify/kubeaudit/cmd.runAudit.func1(0x23a9620, 0xc420321170, 0x0, 0x1)
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/util.go:409 +0x4ce
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).execute(0x23a9620, 0xc420321140, 0x1, 0x1, 0x23a9620, 0xc420321140)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:702 +0x2c6
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x23a9840, 0x23a9840, 0xc4203bbf18, 0x1)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:783 +0x30e
github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra.(*Command).Execute(0x23a9840, 0x0, 0x1b2dcc0)
/Users/leex/go/src/github.com/Shopify/kubeaudit/vendor/github.com/spf13/cobra/command.go:736 +0x2b
github.com/Shopify/kubeaudit/cmd.Execute()
/Users/leex/go/src/github.com/Shopify/kubeaudit/cmd/root.go:32 +0x31
main.main()
/Users/leex/go/src/github.com/Shopify/kubeaudit/main.go:6 +0x20
In k8s 1.8, some of the resources moved to apps/extensions v1beta2. We should support auditing those cases too. For more context: https://kubernetes.io/docs/reference/workloads-18-19/
Problem: --- separators disappear when using --autofix
Solution: Add them back in
ref #84
In https://github.com/Shopify/kubeaudit#audit-network-policies the following is described:
It checks that every namespace should have a default deny network policy installed.
See Kubernetes Network Policies for more information.
But actually the code https://github.com/Shopify/kubeaudit/blob/master/cmd/networkPolicies.go only iterates over existing networkPolicies and doesn't check if the default-deny policy is set. Also, currently only the default allow all policy is checked (which leads to a warning).
This
{"Major":"1","Minor":"7+","Platform":"linux/amd64","level":"info","msg":"Kubernetes server version","time":"2017-10-21T15:35:26-04:00"}
{"Major":"","Minor":"","Platform":"darwin/amd64","level":"info","msg":"Kubernetes client version","time":"2017-10-21T15:35:26-04:00"}
should only be shown when kubeaudit version
is called and not every time kubeaudit -l
is invoked.
Relevant config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nfs-server
  labels:
    name: test-nfs-server
    kubeaudit.allow.privilegeEscalation: "true"
    kubeaudit.allow.privileged: "true"
    kubeaudit.allow.capability: "true"
    kubeaudit.allow.runAsRoot: "true"
    kubeaudit.allow.readOnlyRootFilesystemFalse: "true"
spec:
  selector:
    matchLabels:
      name: t test-nfs-server
  replicas: 1
  template:
    metadata:
      labels:
        name: test-nfs-server
        kubeaudit.allow.privilegeEscalation: "true"
        kubeaudit.allow.privileged: "true"
        kubeaudit.allow.capability: "true"
        kubeaudit.allow.runAsRoot: "true"
        kubeaudit.allow.readOnlyRootFilesystemFalse: "true"
running
kubeaudit -l -v ERROR -n test all
Gives output:
time="2018-07-27T13:19:34+03:00" level=error msg="AllowPrivilegeEscalation not set which allows privilege escalation, please set to false" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="ReadOnlyRootFilesystem not set which results in a writable rootFS, please set to true" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="RunAsNonRoot is not set, which results in root user being allowed!" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="Privileged set to true! Please change it to false!" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="This should not have happened, if you are on kubeaudit master please consider to report: open config/capabilities-drop-list.yml: no such file or directory" KubeType=deployment Name=test-nfs-server Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="AllowPrivilegeEscalation not set which allows privilege escalation, please set to false" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="ReadOnlyRootFilesystem not set which results in a writable rootFS, please set to true" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="RunAsNonRoot is not set, which results in root user being allowed!" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="Privileged set to true! Please change it to false!" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
time="2018-07-27T13:19:34+03:00" level=error msg="This should not have happened, if you are on kubeaudit master please consider to report: open config/capabilities-drop-list.yml: no such file or directory" KubeType=pod Name=test-nfs-server-6ff457c44c-zvjfc Namespace=test
am I missing something?
Kubeaudit currently has a couple of older dependencies. It would be great to make kubeaudit run on the newer versions of all the dependencies.
ref kubernetes/kubernetes#47019
in 1.8 we now have the ability to define allowPrivilegeEscalation: false in our container security contexts.
Let's audit for this @klautcomputing
After #48, we can refactor the commands even further. Take a look at https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L56-L81: the same code will show up in basically all the commands and could be refactored into one function in cmd/util.go which additionally takes a function pointer as a parameter and then just calls that function in https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L79
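A sketch of the proposed shared helper. The types `Resource` and `Result` and the audit function here are simplified stand-ins for kubeaudit's real ones:

```go
package main

import "fmt"

type Resource struct{ Name string }
type Result struct{ Resource, Msg string }

// runAudit applies one audit function to every resource and collects the
// results, replacing the loop currently duplicated in each command.
func runAudit(resources []Resource, audit func(Resource) []Result) []Result {
	var results []Result
	for _, r := range resources {
		results = append(results, audit(r)...)
	}
	return results
}

// auditRunAsNonRoot is a trivial stand-in for the real audit logic.
func auditRunAsNonRoot(r Resource) []Result {
	return []Result{{Resource: r.Name, Msg: "RunAsNonRoot is not set"}}
}

func main() {
	res := runAudit([]Resource{{"dep1"}, {"dep2"}}, auditRunAsNonRoot)
	fmt.Println(len(res)) // prints 2
}
```

Each command would then only supply its own audit function instead of repeating the traversal code.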
When kubeaudit is run with autofix -f file.yaml and the file to be autofixed contains resources that kubeaudit doesn't know about, e.g. Ingress and Service, the following happens:
WARN[0000] Skipping unsupported resource type extensions/v1beta1, Kind=Ingress
WARN[0000] Skipping unsupported resource type /v1, Kind=Service
Kubeaudit skips and drops them. What it should do is skip and keep.
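The "skip and keep" behaviour could look like this sketch, where unsupported documents are passed through verbatim. The `doc` type and the set of supported kinds are illustrative assumptions, not kubeaudit's actual representation:

```go
package main

import "fmt"

// doc is a minimal stand-in for a parsed manifest document.
type doc struct {
	Kind string
	Body string
}

// supportedKinds is an illustrative subset of what autofix can handle.
var supportedKinds = map[string]bool{"Deployment": true, "StatefulSet": true}

// fixStream applies fix to supported documents and keeps unsupported ones
// (Ingress, Service, ...) unchanged: skip and keep, not skip and drop.
func fixStream(docs []doc, fix func(doc) doc) []doc {
	out := make([]doc, 0, len(docs))
	for _, d := range docs {
		if supportedKinds[d.Kind] {
			d = fix(d)
		}
		out = append(out, d) // unsupported docs are written back verbatim
	}
	return out
}

func main() {
	docs := []doc{{Kind: "Deployment"}, {Kind: "Ingress", Body: "keep"}}
	fixed := fixStream(docs, func(d doc) doc { d.Body = "fixed"; return d })
	fmt.Println(len(fixed)) // prints 2
}
```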
Problem: The current implementation of labels doesn't allow specifying which container a deviation is allowed for. E.g. kubeaudit.allow.capability.chown: "true" carries no information about whether it refers to the first or the second container if a resource has more than one container:
containers:
- name: first
- name: second
Solution: Change labels to include the container they refer to.
Currently, autofix does not detect that caps have already been dropped, so it drops them again.
I haven't had a look at why, but this is the result:
capabilities:
  drop:
  - AUDIT_WRITE
  - CHOWN
  - DAC_OVERRIDE
  - FOWNER
  - FSETID
  - KILL
  - MKNOD
  - NET_BIND_SERVICE
  - SETGID
  - SETFCAP
  - SETPCAP
  - SETUID
  - SYS_CHROOT
  - AUDIT_WRITE
  - CHOWN
  - DAC_OVERRIDE
  - FOWNER
  - FSETID
  - KILL
  - MKNOD
  - NET_BIND_SERVICE
  - NET_RAW
  - SETFCAP
  - SETGID
  - SETPCAP
  - SETUID
  - SYS_CHROOT
Add a template for both Pull Requests and Issues to standardize how we contribute to the project
Running pods (if they're using psp/apparmor/seccomp) will bear n of the following annotations:
metadata:
  annotations:
    # podsecuritypolicy
    kubernetes.io/psp: name
    # seccomp
    seccomp.security.alpha.kubernetes.io/pod: <profile>
    container.seccomp.security.alpha.kubernetes.io/<container name>: <profile>
    # apparmor
    apparmor.security.beta.kubernetes.io/pod: <profile>
    container.apparmor.security.beta.kubernetes.io/<container name>: <profile>
possible seccomp profiles:
docker/default
localhost/customprofilename
unconfined
possible apparmor profiles:
runtime/default
localhost/customprofilename
unconfined
pod security policies are referenced by their metadata.name
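A sketch of how an audit could look up these annotations, using the seccomp keys listed above; the function is a hypothetical helper, not kubeaudit code:

```go
package main

import "fmt"

// hasSeccompProfile reports whether a pod's annotations carry a seccomp
// profile, either pod-wide or for the named container. The annotation keys
// match the alpha keys listed above.
func hasSeccompProfile(annotations map[string]string, container string) bool {
	if _, ok := annotations["seccomp.security.alpha.kubernetes.io/pod"]; ok {
		return true
	}
	_, ok := annotations["container.seccomp.security.alpha.kubernetes.io/"+container]
	return ok
}

func main() {
	ann := map[string]string{"seccomp.security.alpha.kubernetes.io/pod": "runtime/default"}
	fmt.Println(hasSeccompProfile(ann, "web")) // prints true
}
```

A full audit would additionally reject the `unconfined` profile value; that check is omitted here for brevity.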
Current check covers only the Container SecurityContext, but RunAsNonRoot, RunAsUser, RunAsGroup and SELinuxOptions are all inherited from the PodSecurityContext unless they are defined explicitly per container. Please consider adding PodSecurityContext to the list of checked values.
Ref:
Ref:
Line 10 in 87446f2
Extra newline is generated by Autofix on manifest starting with comment after yaml separator.
Create a manifest file with the following structure
---
#This is a comment 3
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: null
spec:
  rules: #This is a comment 1
  - http:
      paths:
      - backend:
          serviceName: test
          servicePort: 80
        path: /testpath
status:
  loadBalancer: {}
#This is a comment 5
run
kubeaudit autofix -f /path/to/manifest.yml
Expected: no extra newline after the YAML separator. Actual: autofix changes the file to
---

#This is a comment 3
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: null
spec:
  rules: #This is a comment 1
  - http:
      paths:
      - backend:
          serviceName: test
          servicePort: 80
        path: /testpath
status:
  loadBalancer: {}
#This is a comment 5
Do we want CI? And if yes, what do we want: Travis or Circle?
The version sort feature was added to GNU sort relatively recently. Busybox and older versions of Linux don't support it.
sort: unrecognized option: V
BusyBox v1.28.4 (2018-07-17 15:21:40 UTC) multi-call binary.
Usage: sort [-nrugMcszbdfiokt] [-o FILE] [-k start[.offset][opts][,end[.offset][opts]] [-t CHAR] [FILE]...
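A portable workaround, assuming plain x.y.z version strings (no pre-release suffixes), which covers kubeaudit release tags: sort numerically on each dot-separated field instead of relying on `-V`.

```shell
# Works on BusyBox and older GNU coreutils, where `sort -V` is unavailable:
# split fields on "." and sort each one numerically.
printf '%s\n' 0.10.0 0.2.0 0.9.1 | sort -t. -k1,1n -k2,2n -k3,3n
```

This orders 0.9.1 before 0.10.0, which a plain lexicographic sort would get wrong.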
The code here https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L78-L80
should be refactored to something like this:
var results []Result
for _, resource := range resources {
    results = append(results, auditRunAsNonRoot(resource))
}
Why am I saying "something like that"? Because we want to keep the go calls, and that might require channels.
We want to do this because the print here https://github.com/Shopify/kubeaudit/blob/master/cmd/runAsNonRoot.go#L36-L38 is totally out of place, and the audit functions get used in other places where the printing doesn't make sense.
Obviously, the print needs to be put back in, e.g.
var results []Result
for _, resource := range resources {
    results = append(results, auditRunAsNonRoot(resource))
}
for _, result := range results {
    result.Print()
}
pardon my pseudo code
Currently kubeaudit emits logs when there is an error/warning, but there might be other use cases, like getting more information about the healthy k8s resources, i.e. the ones which are not violating any security policies laid out by kubeaudit.
For this we need different log levels, for example:
INFO: the most verbose log level
ERROR/WARNING: the default log level
kubeaudit
lowercase-with-dashes
audit.kubernetes.io/key: value
Repeat the refactor from #64 in test_util.go