instrumenta / kubeval
Validate your Kubernetes configuration files, supports multiple Kubernetes versions.
Home Page: https://kubeval.com
License: Other
Hi, this looks like a very interesting project.
I am wondering whether this tool takes the current running context into consideration.
Thanks and have a good day!
I've tried to run kubeval with a simple kubeval xxx.yaml and got the following error:
1 error occurred:
- Problem loading schema from the network at https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/master-standalone/list.json: Could not read schema from HTTP, response status is 404 Not Found
wget confirms that it's not there
Issuing the command
docker run -it -v `pwd`/Files:/fixtures garethr/kubeval fixtures/*
throws a 'Could not open file fixtures/*' error on my MacBook.
Choosing a specific file works like a charm.
So I think the problem is with the asterisk.
Any hints?
Apologies if this should be created against https://github.com/garethr/kubernetes-json-schema instead.
Attempting to validate an apiextensions.k8s.io/v1beta1 CustomResourceDefinition resource fails, as the schema file in $VERSION-standalone is empty:
1 error occurred:
* Problem loading schema from the network at https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.5-standalone/customresourcedefinition.json: EOF
[mattbrown@mattmbp kubernetes-json-schema]$ wc -c v1.*-standalone/customresourcedefinition.json
0 v1.8.0-standalone/customresourcedefinition.json
0 v1.8.1-standalone/customresourcedefinition.json
0 v1.8.2-standalone/customresourcedefinition.json
0 v1.8.3-standalone/customresourcedefinition.json
0 v1.8.4-standalone/customresourcedefinition.json
0 v1.8.5-standalone/customresourcedefinition.json
0 v1.8.6-standalone/customresourcedefinition.json
0 v1.9.0-standalone/customresourcedefinition.json
0 total
Is this intentional? It seems impossible in the current form to lint any CustomResourceDefinitions. The kubernetes-json-schema repo does have non-zero-byte versions of the schema in the non-standalone directories (i.e. in /v1.8.0/), but kubeval is hardcoded to load the -standalone flavor of each schema.
Hi,
First of all, great tooling; I've long been looking for something like this. I have noticed a problem, however: we have many different types of OpenShift configurations of the fairly generic parental kind List. Attempting to validate those configuration files with the default schema location results in a 404, since no list.json exists within (for example) https://github.com/garethr/openshift-json-schema/tree/master/v3.6.0-standalone
Is it possible to load schemas from a local location rather than a remote URL?
I recently implemented https://github.com/jetstack/cert-manager, which comes with a few custom resources.
Now my CI fails using kubeval with https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.6-standalone/clusterissuer.json
obviously, because why would you have a definition for that?
What is the best way to handle this? My CI/CD checks every YAML file and then tries to validate it, hence why I am getting this. I could exclude it there with some trickery, but I thought there might be a better way, and decided to ask here.
Thanks
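Pending built-in support for custom resources, one CI-side workaround is to skip manifests whose kind belongs to a CRD the upstream schema repo cannot serve. A minimal Python sketch; the kind list and the naive kind: parsing are assumptions for illustration only:

```python
import re

# Kinds introduced by cert-manager CRDs (assumed list for illustration);
# the upstream kubernetes-json-schema repo has no schemas for these.
SKIP_KINDS = {"Certificate", "ClusterIssuer", "Issuer"}

def should_validate(manifest_text):
    """Return False when the manifest's kind is a known CRD kind
    that the default schema location cannot serve."""
    match = re.search(r"^kind:\s*(\S+)", manifest_text, re.MULTILINE)
    if not match:
        return True  # let kubeval report the missing kind itself
    return match.group(1) not in SKIP_KINDS

assert should_validate("apiVersion: v1\nkind: Service\n")
assert not should_validate("kind: ClusterIssuer\nmetadata: {}\n")
```

The CI job would then only pass files for which should_validate is true to kubeval.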
Hi,
I cloned the repo (https://github.com/garethr/kubernetes-json-schema) to a local location and tried to validate my k8s YAML files offline. I ran the command below:
./kubeval temp_yaml --schema-location /root/output/service-ctrl
and got the error message below:
> 2 errors occurred:
>
> * Problem loading schema from the network at /root/output/service-ctrl/kubernetes-json-schema/master/master-standalone/deployment.json: Reference {0xc420152780 {[]} false true false false true} must be canonical
> * Problem loading schema from the network at /root/output/service-ctrl/kubernetes-json-schema/master/master-standalone/service.json: Reference {0xc420152880 {[]} false true false false true} must be canonical
Both files exist. Does anyone know why?
I don't see a 0.6.0 tag on Dockerhub: https://hub.docker.com/r/garethr/kubeval/tags/
It seems like 0.6.0 was cut 5 days ago and the last update on Dockerhub was 11 days ago, so I'm guessing the offline tag is not for 0.6.0.
For folks on Mac it would be nice to simply install with homebrew.
Some of our specs use a valid multi-document YAML format:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-access
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
- nonResourceURLs: ["*"]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: kubelet-role-binding
subjects:
- kind: User
  name: kubelet
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin-access
It seems the YAML library used in kubeval doesn't support this yet (https://github.com/go-yaml/yaml#compatibility), so at the very least kubeval should warn the user that this is not supported (in testing, only the first YAML document in the file is parsed and the rest are silently discarded).
Something like:
kubeval --version
Hi,
Since about 10 days ago we have been getting failures validating manifests which have not changed in weeks, specifically with HorizontalPodAutoscaler:
$ kubeval --strict horizontalPodAutoscaler.yaml
The document horizontalPodAutoscaler.yaml contains an invalid HorizontalPodAutoscaler
---> targetCPUUtilizationPercentage: Additional property targetCPUUtilizationPercentage is not allowed
$ kubeval --version
Version: 0.7.0
Git commit: 2fcbe11d06671ae19210067529cb0fecf336f630
Built: 2017-09-16 04:46:25 UTC
Go version: go1.8.3
OS/Arch: linux/amd64
This is a property which has indeed been dropped from the master schemas, but kubeval is failing even when specifying our actual k8s version (e.g. --kubernetes-version 1.7.8), which should accept it: https://github.com/garethr/kubernetes-json-schema/blob/master/v1.7.8-standalone-strict/horizontalpodautoscalerspec.json#L40
I am trying to use kubeval in Docker without a TTY to validate YAML in CI, but it gives me this error: "The document stdin appears to be empty". How can I use it without a TTY?
I may be misunderstanding the differences between the various flavors of schemas in https://github.com/garethr/kubernetes-json-schema, but I was surprised that when running kubeval with --kubernetes-version=1.8.5 --strict, the schema for DaemonSet could not be found:
1 error occurred:
* Problem loading schema from the network at https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.5-standalone-strict/daemonset.json: Could not read schema from HTTP, response status is 404 Not Found
I ran a YAML file through kubeval and didn't get any errors. Later, when I tried to apply the config to Minikube, I got the errors below:
Error from server (Invalid): error when creating "kubernetes.yaml": Service "Foo" is invalid: metadata.name: Invalid value: "Foo": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name', or 'abc-123', regex used for validation is '[a-z]([-a-z0-9]*[a-z0-9])?')
Error from server (Invalid): error when creating "kubernetes.yaml": Deployment.apps "Bar" is invalid: metadata.name: Invalid value: "Bar": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
Changing the service/deployment metadata > name per the errors allowed me to apply the YAML file to Minikube.
Kubeval should check service/deployment configurations against the regexes listed in the above errors.
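The regexes quoted in those kubectl errors can be applied client-side ahead of a deploy. A minimal Python sketch using exactly the patterns from the messages above; the length limits are the standard Kubernetes label/subdomain limits:

```python
import re

# Regexes copied from the kubectl error messages above.
DNS_1035_LABEL = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")  # Service names
DNS_1123_SUBDOMAIN = re.compile(
    r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$"
)  # Deployment names

def valid_service_name(name):
    return len(name) <= 63 and bool(DNS_1035_LABEL.match(name))

def valid_deployment_name(name):
    return len(name) <= 253 and bool(DNS_1123_SUBDOMAIN.match(name))

assert not valid_service_name("Foo")        # upper case rejected
assert valid_service_name("my-name")
assert not valid_deployment_name("Bar")
assert valid_deployment_name("abc-123.example.com")
```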
I'm trying to use kubeval docker image with Teamcity.
When -it is not specified, I'm getting the following error:
The document stdin appears to be empty
The command I used:
docker run --rm -v $(pwd)/namespaces:/namespaces garethr/kubeval:offline namespaces/*
kubeval v0.7.0 produces a panic when validating the file below:
# test.yml
kind:
panic: interface conversion: interface {} is nil, not string
goroutine 1 [running]:
github.com/garethr/kubeval/kubeval.validateResource(0xc420088d80, 0x5, 0x205, 0x7ffc6055df6f, 0x11, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/garethr/kubeval/kubeval/kubeval.go:131 +0x9fd
github.com/garethr/kubeval/kubeval.Validate(0xc420088d80, 0x5, 0x205, 0x7ffc6055df6f, 0x11, 0x0, 0x0, 0x0, 0xc4200cb900, 0x0)
/go/src/github.com/garethr/kubeval/kubeval/kubeval.go:174 +0x1e9
github.com/garethr/kubeval/cmd.glob..func1(0xa94160, 0xc420112e40, 0x1, 0x1)
/go/src/github.com/garethr/kubeval/cmd/root.go:67 +0x1f6
github.com/garethr/kubeval/vendor/github.com/spf13/cobra.(*Command).execute(0xa94160, 0xc42000c110, 0x1, 0x1, 0xa94160, 0xc42000c110)
/go/src/github.com/garethr/kubeval/vendor/github.com/spf13/cobra/command.go:654 +0x299
github.com/garethr/kubeval/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xa94160, 0xc4200cb680, 0x0, 0x0)
/go/src/github.com/garethr/kubeval/vendor/github.com/spf13/cobra/command.go:729 +0x339
github.com/garethr/kubeval/vendor/github.com/spf13/cobra.(*Command).Execute(0xa94160, 0x0, 0x6e)
/go/src/github.com/garethr/kubeval/vendor/github.com/spf13/cobra/command.go:688 +0x2b
github.com/garethr/kubeval/cmd.Execute()
/go/src/github.com/garethr/kubeval/cmd/root.go:99 +0x31
main.main()
/go/src/github.com/garethr/kubeval/main.go:6 +0x20
I was hoping to add kubeval to the PATH in the Docker images, because I was trying to use it in a CI job and found it unintuitive for it not to be on the PATH. I would leave it at /kubeval as well, in order to maintain backwards compatibility. If you are OK with this change, I'll submit a PR.
I get
---> apiGroup: Additional property apiGroup is not allowed
when running
cat "$file" | kubeval -f="$file" --strict
on a clusterRoleBinding.yaml file:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
subjects:
- kind: User
  name: user001
  apiGroup: ""
roleRef:
  # this is referring to the default ClusterRole 'cluster-admin'
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
The sha256 checksums published with the releases are in all-caps. For example, release 0.7.1 shows the binary kubeval-linux-amd64.tar.gz with a sha256 of
8259D462BD19E5FC2DB2EA304E51ED4DB928BE4343F6C9530F909DBA66E15713
but when attempting to check the tarball:
openssl sha -sha256 kubeval-linux-amd64.tar.gz | awk '{print $2}'
8259d462bd19e5fc2db2ea304e51ed4db928be4343f6c9530f909dba66e15713
which uses lowercase a-f.
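Since hex digests denote the same value regardless of letter case, a verification script should normalise case before comparing. A minimal Python sketch; the file path in a real run would be the downloaded tarball:

```python
import hashlib

def verify_sha256(path, published_digest):
    """Return True when the file's sha256 matches the published digest,
    ignoring the upper/lower-case difference described above."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == published_digest.strip().lower()
```

With this, the all-caps digest shown above verifies the tarball despite the case difference.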
I have a configmap which contains a certificate file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeval-test-config
data:
  my.crt: |-
    -----BEGIN CERTIFICATE-----
    REDACTED
    -----END CERTIFICATE-----
Running kubeval against this file fails with the following error:
$ kubeval a.configmap.yaml
1 error occurred:
* Missing a kind key
I believe this is due to the multi-document YAML support (#9) erroneously splitting the document on the lines of dashes inside the certificate.
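YAML document separators are only lines that consist of exactly `---`; dashes inside a block scalar must be left alone. A minimal Python sketch of the distinction, with a naive substring split illustrating the suspected bug:

```python
import re

CONFIGMAP = """apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeval-test-config
data:
  my.crt: |-
    -----BEGIN CERTIFICATE-----
    REDACTED
    -----END CERTIFICATE-----
"""

def split_documents(text):
    """Split multi-document YAML only on separator lines that are
    exactly '---'; certificate markers are indented, so they never match."""
    return [d for d in re.split(r"(?m)^---\s*$\n?", text) if d.strip()]

naive = [d for d in CONFIGMAP.split("---") if d.strip()]
assert len(split_documents(CONFIGMAP)) == 1  # correct: one document
assert len(naive) > 1                        # naive split breaks the cert
```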
With the current latest kubeval 0.7.1 (from Homebrew) I get the following error:
$ kubeval example.yaml
1 error occurred:
* Problem loading schema from the network at https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/master-standalone/issuer.json: Could not read schema from HTTP, response status is 404 Not Found
Checking the mentioned URL in a browser indeed returns a 404 as well.
kind: Pod
metadata:
  name: demo
  labels:
    role: myrole
spec:
  containers:
  - name: bad_name
a DNS-1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')]
I have a YAML file containing configurations for multiple Kubernetes resources.
When I run kubeval, I get a one-line report:
* Missing a kind key
and it does not specify where in the file the error occurred.
Example
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  replicas: 2
  template:
    spec:
      containers:
      - image: nginx
        name: nginx
kubeval verifies the YAML:
kubeval test.yaml
The document test.yaml contains a valid DaemonSet
but replicas is invalid for a DaemonSet:
kubectl create -f test.yaml
error: error validating "test.yaml": error validating data: found invalid field replicas for v1beta1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false
I didn't look at the code (I'm assuming it's a straightforward fix) but wanted to report it in case I don't get to it.
Hi, I have a (correct) HorizontalPodAutoscaler definition, running on Kubernetes 1.7.11.
When I launch kubeval with --strict, even with -v 1.7.11, I get:
core-api.yaml contains an invalid HorizontalPodAutoscaler
---> targetCPUUtilizationPercentage: Additional property targetCPUUtilizationPercentage is not allowed
But the property is correct. Removing --strict makes the file be considered correct.
I need --strict because I must be able to detect when some YAML contains values rejected by kubectl (e.g. a genuinely unsupported property).
Any ideas on why it tells me it is wrong? This is preventing us from adding this very useful tool to our CI/CD release pipeline.
I am trying to use the kubeval library in my project.
Calling the validate function:
kubeval.Validate([]byte("v1.7.2"), "D:/Playground/nginx-deployment.yaml")
throws the following error:
Missing a kind key
What could be the reason for the failure? Am I calling the validate function in the right way?
the deployment file is valid:
apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
For instance in deployment: https://github.com/garethr/kubernetes-json-schema/blob/master/master-standalone/deployment.json#L3511-L3514
Reported here for further context. kubernetes/kompose#717
Other useful links:
https://kubernetes.io/docs/api-reference/v1.7/#objectmeta-v1-meta
https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata
I am executing the command ./kubeval --openshift --kubernetes-version 1.5.0 yaml/* and getting the error message:
The document yaml/deployment-template.yaml contains an invalid Template
---> Raw: Raw is required
---> Raw: Raw is required
---> Raw: Raw is required
Sample file:
apiVersion: v1
kind: Template
metadata:
  name: bar
parameters:
- name: foo
  displayName: The name of the REST application. It will be part of the exposed route.
  value: bar
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    labels:
      app: ${foo}
    name: ${foo}
  spec:
    replicas: 1
    selector:
      app: ${foo}
      deploymentconfig: ${foo}
    template:
      metadata:
        labels:
          app: ${foo}
          deploymentconfig: ${foo}
      spec:
        containers:
        - env:
          - name: LOG_LEVEL
            value: DEBUG
          image: ${foo}
          imagePullPolicy: Always
          name: ${foo}
          livenessProbe:
            httpGet:
              path: /api/healthcheck
              port: 8080
            initialDelaySeconds: 300
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /api/healthcheck
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 5
          resources:
            requests:
              cpu: 500m
              memory: 500Mi
            limits:
              cpu: 1000m
              memory: 1Gi
          ports:
          - containerPort: 8080
            name: http
            protocol: TCP
          - containerPort: 8778
            name: jolokia
            protocol: TCP
          terminationMessagePath: /dev/termination-log
        dnsPolicy: ClusterFirst
        restartPolicy: Always
    test: false
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      build: ${foo}
    name: ${foo}
  spec:
    tags:
    - from:
        kind: DockerImage
        name: ${foo}:latest
My local minikube by default has a deployment kube-dns in the kube-system namespace. If I get it as JSON and try to validate it with kubeval without changing anything, I get:
kubeval kube-dns.json
The document kube-dns.json contains an invalid Deployment
---> spec.template.metadata.creationTimestamp: Invalid type. Expected: string, given: null
I guess kubeval thinks it's invalid because of the creationTimestamp definition, but having creationTimestamp as null must be valid.
This looks like a regression of #16.
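Until the schemas accept null here, one workaround is to strip creationTimestamp: null entries from exported manifests before validating them. A minimal Python sketch operating on the parsed document:

```python
def strip_null_timestamps(obj):
    """Recursively remove 'creationTimestamp: null' entries, which
    exported manifests carry but the standalone schemas reject."""
    if isinstance(obj, dict):
        return {
            k: strip_null_timestamps(v)
            for k, v in obj.items()
            if not (k == "creationTimestamp" and v is None)
        }
    if isinstance(obj, list):
        return [strip_null_timestamps(v) for v in obj]
    return obj

doc = {"metadata": {"creationTimestamp": None, "name": "kube-dns"},
       "spec": {"template": {"metadata": {"creationTimestamp": None}}}}
cleaned = strip_null_timestamps(doc)
assert "creationTimestamp" not in cleaned["metadata"]
```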
As well as being a CLI tool, kubeval should be available as a library, so other Go tools could easily integrate the core functionality.
The following spec passes kubeval but fails kubectl apply with the error "ValidationError(CronJob.spec.jobTemplate.spec.template.spec): unknown field "env" in io.k8s.api.core.v1.PodSpec", as the env spec doesn't have the right indentation.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-cron
            image: my-image:latest
            command: ["...]
          env:
          - name: VAR
            value: test
Hi, I made a fake YAML like the one below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: lq
    load_balancer: AAA
    name: AAA
    namespace: algorithm
  name: AAA
  namespace: algorithm
specASDUWIUE:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      load_balancer: AAA
The highlighted key specASDUWIUE should be spec, but this tool still passes the validation.
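Until validation covers the root level, a cheap pre-check is to compare the document's top-level keys against the small set Kubernetes objects actually use. A minimal Python sketch; the key set is an assumption chosen for illustration:

```python
# Top-level keys common to Kubernetes objects (assumed set for illustration;
# 'status' and 'data' included for exported manifests and ConfigMaps).
KNOWN_TOP_LEVEL = {"apiVersion", "kind", "metadata", "spec", "status", "data"}

def unknown_top_level_keys(manifest):
    """Return any root keys not in the known set, catching typos
    such as a misspelled 'spec'."""
    return sorted(set(manifest) - KNOWN_TOP_LEVEL)

bad = {"apiVersion": "extensions/v1beta1", "kind": "Deployment",
       "metadata": {"name": "AAA"}, "specASDUWIUE": {"replicas": 1}}
assert unknown_top_level_keys(bad) == ["specASDUWIUE"]
```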
It would be great if the project had a version of the binary or docker image that included the schema needed for validation and didn't do any network calls as part of the execution.
In our use case, we want to validate our k8s templates offline with the docker image, but currently it needs network access to do anything.
Let me know if this would be feasible, maybe some references to where you download external dependencies at runtime?
Thanks!
Hi @garethr, let me thank you again for this!
I've run kubeval against our specs and noticed that it complains about Expected: string, given: integer for several fields that Kubernetes happily accepts:
--> spec.template.spec.containers.0.livenessProbe.httpGet.port: Invalid type. Expected: string, given: integer
--> spec.strategy.rollingUpdate.maxSurge: Invalid type. Expected: string, given: integer
--> spec.strategy.rollingUpdate.maxUnavailable: Invalid type. Expected: string, given: integer
Any thoughts as to what the problem is here? env values which are integers are rightly flagged by kubeval, since those are not accepted.
Thanks!
Would it make sense to make Travis publish a Docker image for every new release? That way it would be even easier to install and use kubeval.
For a Kubernetes Dashboard spec, also downloaded, the k8s API happily accepts this:
spec:
  containers:
  - <...>
    args:
      # Uncomment the following line to manually specify Kubernetes API server Host
      # If not specified, Dashboard will attempt to auto discover the API server and connect
      # to it. Uncomment only if the default does not work.
      # - --apiserver-host=http://my-address:port
Kubeval, however, doesn't like it:
The document ../../contentful/cf-infra-stacks/kubeconfigs/staging/us-east-1/delivery-k8s-002/kubernetes-dashboard/dashboard.yaml is not a valid Deployment
--> spec.template.spec.containers.0.args: Invalid type. Expected: array, given: null
Here is a sample service file:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: httpd
  name: INVALID-e_f
spec:
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: httpd
  type: INVALID
status:
  loadBalancer: {}
After running kubeval, it reports the file as valid, but it's not:
$ kubeval service.yml
The document docker-compose.yml contains a valid Service
There is a 404 error under https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.7-standalone/deployment.json
Error from kubeval binary:
* Problem loading schema from the network at https://raw.githubusercontent.com/garethr/kubernetes-json-schema/master/v1.8.7-standalone/deployment.json: Could not read schema from HTTP, response status is 404 Not Found
Suppose you have a chart feature that's conditionally included, like say cronjobs:
{{- range $v := $.Values.cronJobs }}
---
apiVersion: batch/v1beta1
kind: CronJob
...
{{- end }}
If you have a service not using this feature (not supplying any Values.cronJobs), then helm template will output (among other things):
---
# Source: base/templates/cronjobs.yaml
as the result for that template file, and kubeval will complain about * Missing a kind key for this part of the resource.
Is there a sensible way to ignore these failures? Is this maybe a helm template bug?
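One pragmatic way to ignore these failures is to drop documents that contain only comments and blank lines before invoking kubeval. A minimal Python sketch of that filter:

```python
import re

def non_empty_documents(rendered):
    """Drop documents holding only comments/blank lines, such as the
    '# Source: ...' stubs helm template emits for skipped templates."""
    docs = re.split(r"(?m)^---\s*$\n?", rendered)
    kept = []
    for doc in docs:
        lines = [l for l in doc.splitlines()
                 if l.strip() and not l.lstrip().startswith("#")]
        if lines:
            kept.append(doc)
    return kept

rendered = "---\n# Source: base/templates/cronjobs.yaml\n---\nkind: Service\n"
assert len(non_empty_documents(rendered)) == 1
```

Only the surviving documents are then fed to kubeval.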
I have a sample file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    batman: true
    io.kompose.service: redis-master
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        io.kompose.service: redis-master
    spec:
      containers:
      - image: gcr.io/google_containers/redis:e2e
        name: redis-master
        ports:
        - containerPort: 6379
      restartPolicy: Always
in which batman is an extra key, which kubeval recognizes very well in strict mode:
$ kubeval deployment.yaml --strict
The document redis-master-deployment.yaml contains an invalid Deployment
---> batman: Additional property batman is not allowed
But if I provide an extra key superman at root level, as below,
apiVersion: extensions/v1beta1
kind: Deployment
superman: true
metadata:
  labels:
    io.kompose.service: redis-master
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        io.kompose.service: redis-master
    spec:
      containers:
      - image: gcr.io/google_containers/redis:e2e
        name: redis-master
        ports:
        - containerPort: 6379
      restartPolicy: Always
kubeval fails to catch it:
$ kubeval deployment.yaml --strict
The document redis-master-deployment.yaml contains a valid Deployment
kubeval can be used to validate config files in a CI system; it would be useful to provide an example of this for different tools:
To reproduce:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rfq-explorer
spec:
  type: RollingUpdate
kubeval deployment.yml
The document deployment.yml contains a valid Deployment
Expected:
---> spec.type: Unknown property
or something along those lines.
I have a Job with the following resources spec for its containers:
resources:
  limits:
    cpu: 2
    memory: "12G"
  requests:
    cpu: 1
    memory: "8G"
I'm getting the following validation error:
./bin/linux/amd64/kubeval ec.10395.yaml
The document ec.10395.yaml contains an invalid Job
---> spec.template.spec.containers.0.resources.requests: Invalid type. Expected: [string,null], given: integer
---> spec.template.spec.containers.0.resources.limits: Invalid type. Expected: [string,null], given: integer
The error doesn't refer to the cpu key for some reason. Furthermore, kubectl happily creates my job, and the documentation itself uses an integer for cpu as well as values like "200m".
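Kubernetes quantity fields accept bare numbers as well as suffixed strings like "200m", so a schema fix would need an int-or-string type. A Python sketch of a rough approximation of the quantity grammar (the regex is a simplification, not the full grammar, which also allows signs and exponent notation):

```python
import re

# Rough approximation of the Kubernetes quantity grammar: a decimal
# number with an optional SI or binary suffix.
QUANTITY = re.compile(r"^[0-9]+(\.[0-9]+)?(m|k|M|G|T|P|E|Ki|Mi|Gi|Ti|Pi|Ei)?$")

def valid_quantity(value):
    """Accept ints/floats as well as strings like '200m' or '12G',
    mirroring what the API server tolerates for cpu/memory fields."""
    if isinstance(value, (int, float)):
        return True
    return isinstance(value, str) and bool(QUANTITY.match(value))

assert valid_quantity(2)          # cpu: 2 is accepted by the API server
assert valid_quantity("200m")
assert valid_quantity("12G")
assert not valid_quantity("lots")
```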
For folks on Windows it would be nice to simply install with chocolatey.
Is it expected behavior for kubeval to fail validation on a configmap that contains a multi-line scalar, i.e. (from the docs):
data:
  game.properties: |
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
If that data is crunched into JSON, it validates fine. From a readability/maintainability standpoint it's much easier to be able to use the multi-line scalar, especially with configmaps like nginx. Is this something we will need to use a custom schema for?
The actual error is:
* Failed to decode YAML from my-configmap.yml
Exited with code 123
I noticed you have targets in the Makefile to calculate checksums. Would it be possible to release those as well?