kapicorp / kapitan-reference
Reference structure for Kapitan - alpha version
Home Page: https://www.kapicorp.com
Possible sources:
As already mentioned here: Slack#Kapitan
It might be possible to use https://github.com/bitnami-labs/kube-libsonnet, which already includes several more Ingress helpers.
Types of Ingress
k8s-docs for Ingress
Ingress host/hostname
Currently the Ingress hostname is set to a wildcard.
To change this, the generator should be able to pick up a host per ingress:
parameters:
ingresses:
sonarqube-ingress:
host: "foo.bar.com"
paths:
- path: /
[...]
host: "*.foo.com"
paths:
- path: /
[...]
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-wildcard-host
spec:
rules:
- host: "foo.bar.com"
http:
paths:
- pathType: Prefix
path: "/bar"
backend:
service:
name: service1
port:
number: 80
- host: "*.foo.com"
http:
paths:
- pathType: Prefix
path: "/foo"
backend:
service:
name: service2
port:
number: 80
#
# Ingress
#
ingress:
rules:
- host: ${target_name}.${domain}
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: ${target_name}
port:
number: ${gitea:http_port}
parameters:
kapitan:
compile:
- output_path: manifests
input_type: jinja2
input_paths:
- templates/jinja/ingress.yml
{% set p = inventory.parameters %}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ p.target_name }}
namespace: {{ p.namespace }}
labels: {{ p.generators.manifest.default_config.labels }}
annotations: {{ p.generators.manifest.default_config.annotations }}
spec:
rules: {{ p.ingress.rules }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: gitea
namespace: gitea
labels: {'app.kubernetes.io/part-of': 'gitea', 'app.kubernetes.io/managed-by': 'kapitan'}
annotations: {'manifests.kapicorp.com/generated': 'true'}
spec:
rules: [{'host': 'gitea.example.com', 'http': {'paths': [{'pathType': 'Prefix', 'path': '/', 'backend': {'service': {'name': 'gitea', 'port': {'number': 3000}}}}]}}]
{% set p = inventory.parameters %}
{% if inventory.parameters.ingress is defined %}
{% set i = inventory.parameters.ingress %}
{% set labels = p.generators.manifest.default_config.labels %}
{% set annotations = p.generators.manifest.default_config.annotations %}
{% for ingress in i %}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ p.target_name }}-{{ loop.index }}
namespace: {{ p.namespace }}
labels: {{ i[ingress].extra.labels }}
annotations: {{ i[ingress].extra.annotations }}
spec:
tls: {{ i[ingress].tls | default("")}}
rules: {{ i[ingress].rules }}
{% endfor %}
{% else %}
---
{% endif %}
extra:
certs:
- name: wildcard-example-com
cert: ?{vaultkv:ssl/wildcard-example-com-cert}
key: ?{vaultkv:ssl/wildcard-example-com-key}
ingress:
wikijs:
extra:
labels: []
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
tls:
- hosts:
- wiki.${domain}
secretName: ${target_name}-tls
rules:
- host: wiki.${domain}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: wikijs
port:
number: ${wikijs:service:wikijs:http}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: k8s-wikijs-1
namespace: wikijs
labels: []
annotations: {'nginx.ingress.kubernetes.io/proxy-body-size': '0', 'nginx.ingress.kubernetes.io/proxy-read-timeout': '600', 'nginx.ingress.kubernetes.io/proxy-send-timeout': '600'}
spec:
tls: [{'hosts': ['wiki.example.com'], 'secretName': 'k8s-wikijs-tls'}]
rules: [{'host': 'wiki.example.com', 'http': {'paths': [{'path': '/', 'pathType': 'Prefix', 'backend': {'service': {'name': 'wikijs', 'port': {'number': 3000}}}}]}}]
I could not find an example for Helm.
I'm currently working on one, but if someone else has one already - feel free. :-)
As discussed in Slack
Hey, I'm searching for a mechanism where my deployment is triggered (rolled) after running kubectl apply -f compiled/xyz/manifests.
With the default Kubernetes scheduler this is (AFAIK) only possible when something in the spec.template section has changed. I previously used Ansible/Jinja2 to hash all files and then put the hash into the deployment as a label.
Does anyone have an idea how to achieve this with Kapitan?
I'd like to have a function which provides a label per component e.g. Postgres.
This label then should be automatically added to every resource referring to this Postgres instance.
For example
$ cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
$ cat configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: game-demo
data:
# property-like keys; each key maps to a simple value
player_initial_lives: "3"
ui_properties_file_name: "user-interface.properties"
# file-like keys
game.properties: |
enemy.types=aliens,monsters
player.maximum-lives=5
user-interface.properties: |
color.good=purple
color.bad=yellow
allow.textmode=true
$ cat deployment.yml configmap.yml | sha256sum
d8dd3fa5923f8c52908b63532599bae4333155cfbb0c2f26114fdbd6663ab024 -
After changing the containerPort from 80 to 8080
cat deployment.yml configmap.yml | sha256sum
9ea6395c10946fc80d61a00b5c4770654b9f44ec56dd1dac2bd9e9a4dc16a27f -
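The shell pipeline above can be reproduced inside a generator. As a rough sketch (not part of the kapitan generators; names are illustrative), the rendered config content could be hashed and injected as a pod-template annotation, so any config change forces a rollout:

```python
import hashlib

def config_checksum(*rendered_files: str) -> str:
    """Return a sha256 hex digest over the concatenated rendered manifests."""
    digest = hashlib.sha256()
    for content in rendered_files:
        digest.update(content.encode("utf-8"))
    return digest.hexdigest()

# Attach the checksum to the pod template so spec.template changes
# whenever the referenced config content changes.
deployment = {"spec": {"template": {"metadata": {"annotations": {}}}}}
checksum = config_checksum("player_initial_lives: '3'\n")
deployment["spec"]["template"]["metadata"]["annotations"]["checksum/config"] = checksum
```

This mirrors the well-known Helm "checksum annotation" pattern, just computed at compile time.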
Currently configmaps and secrets are mounted readOnly.
In some cases it may be necessary to make this configurable.
Maybe fixed with #61
Describe the feature
A configmap can take a data or binaryData field. Kapitan only allows the data field.
Expected behavior
When defining a "config_maps" dict, I would love to be able to choose between "data" and "binaryData".
So something like:
config_maps:
data-config:
mount: /path/data
readOnly: true
data:
application.yml:
template: some_template.j2
values: ${config}
binarydata-config:
mount: /path/binarydata
readOnly: true
binaryData:
keystore.jks:
template: another_template.j2
values: ${another_config}
And maybe add another option to let kapitan encode the template to base64 before writing it into the configmap.
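For reference, a minimal sketch of that encoding step (illustrative only; the generator's real template rendering is assumed to produce bytes): Kubernetes requires binaryData values to be base64-encoded strings, so the rendered template would be encoded before being written into the ConfigMap:

```python
import base64

def to_binary_data(rendered: bytes) -> str:
    """binaryData values must be base64-encoded strings."""
    return base64.b64encode(rendered).decode("ascii")

# Hypothetical ConfigMap following the proposed schema above
config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "binarydata-config"},
    "binaryData": {"keystore.jks": to_binary_data(b"\x00\x01binary")},
}
```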
Thank you!
Slack discussion here: https://kubernetes.slack.com/archives/C981W2HD3/p1613723263349100
Official kubernetes Documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
There are several proposals for how this could work:
service:
clusterip-serviceName:
type: ClusterIP
ports:
udp-video1:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
loadbalancer-serviceName:
type: LoadBalancer
ports:
udp-video2:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
nodeport-serviceName:
type: NodePort
ports:
udp-video3:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
externalname-serviceName:
type: ExternalName
ports:
udp-video4:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
service:
ports:
udp-video:
type: ClusterIP
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
udp-other:
type: LoadBalancer
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
udp-video2:
type: ExternalName
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
udp-other2:
type: NodePort
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
service:
clusterip:
- udp-video:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
loadbalancer:
- udp-other:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: TCP
- udp-video2:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
- udp-other2:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: TCP
nodeport:
- udp-other3:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
externalname:
- udp-other4:
service_port: ${jvb:udp_media}
container_port: ${jvb:udp_media}
protocol: UDP
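As a rough illustration of the second proposal (type declared per port), a generator could emit one Service object per port. This is only a sketch; the function name and schema keys are assumptions, not the generator's actual API:

```python
def services_from_ports(component_name: str, ports: dict) -> list:
    """Build one Service manifest per port, honoring the per-port type."""
    services = []
    for port_name, spec in ports.items():
        services.append({
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {"name": f"{component_name}-{port_name}"},
            "spec": {
                "type": spec.get("type", "ClusterIP"),
                "ports": [{
                    "name": port_name,
                    "port": spec["service_port"],
                    "targetPort": spec["container_port"],
                    "protocol": spec.get("protocol", "TCP"),
                }],
            },
        })
    return services
```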
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
lifecycle:
# Vault container doesn't receive SIGTERM from Kubernetes
# and after the grace period ends, Kube sends SIGKILL. This
# causes issues with graceful shutdowns such as deregistering itself
# from Consul (zombie services).
preStop:
exec:
command: [
"/bin/sh", "-c",
# Adding a sleep here to give the pod eviction a
# chance to propagate, so requests will not be made
# to this pod while it's terminating
"sleep 5 && kill -SIGTERM $(pidof vault)",
]
Hey,
for several projects, especially StatefulSets, we need to reference Pod field values into a container, as described here:
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
We would also like to use them as labels, for example to attach the current node name as a pod label.
In Kapitan this could look like this:
parameters:
components:
app-name:
# Metadata
pod:
labels:
"app.kubernetes.io/running-on":
fieldRef:
fieldPath: spec.nodeName
labels:
app.kubernetes.io/version: ${sensu-backend:version}
app.kubernetes.io/component: ${sensu-backend:component}
env:
MYSQL_ROOT_PASSWORD:
secretKeyRef:
key: mysql-root-password
POD_NAME:
fieldRef:
fieldPath: metadata.name
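A sketch of how a generator could expand the env schema above: a plain scalar becomes a value entry, while a dict carrying fieldRef or secretKeyRef becomes a valueFrom entry. The helper name and schema interpretation are assumptions, not the generator's real code:

```python
def render_env(env: dict) -> list:
    """Translate the proposed env mapping into container env entries."""
    entries = []
    for name, spec in env.items():
        if isinstance(spec, dict):
            # fieldRef / secretKeyRef / configMapKeyRef sources
            entries.append({"name": name, "valueFrom": spec})
        else:
            entries.append({"name": name, "value": str(spec)})
    return entries
```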
Official kubernetes Documentation:
https://kubernetes.io/docs/concepts/policy/pod-security-policy/
There are several proposals for how this could work:
parameters:
#
# Parameters
#
sensu-backend:
image: sensu/sensu:6
version: 6
component: go
# gRPC sensu-storage-client
grpc-storage-client: 2379
# grpc storage peer
grpc-storage-peer: 2380
# sensu-web-ui
web-ui: 3000
# profiling
profiling: 6060
# api
api: 8080
# websocket
agent-api: 8081
# Component/s
components:
sensu-backend:
type: statefulset
image: ${sensu-backend:image}
imagePullPolicy: Always
[...]
podsecuritypolicy:
privileged: true
allowPrivilegeEscalation: true
allowedCapabilities:
- '*'
volumes:
- '*'
hostNetwork: true
hostPorts:
- min: 0
max: 65535
hostIPC: true
hostPID: true
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
Will generate:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted
annotations:
manifests.kapicorp.com/generated: 'true'
labels:
app.kubernetes.io/component: go
app.kubernetes.io/managed-by: kapitan
app.kubernetes.io/part-of: sensu
app.kubernetes.io/version: 6
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
requiredDropCapabilities:
- ALL
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
# Assume that persistentVolumes set up by the cluster admin are safe to use.
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
rule: 'MustRunAsNonRoot'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
If you do this with a versioned secret:
components:
echo-server:
<other config>
env:
KAPITAN_SECRET:
secretKeyRef:
key: 'kapitan_secret'
You would expect the name found to include the version, but it's taking its information from a part of the dataset that doesn't yet have versions (they haven't been calculated yet).
A solution could be something like this (in WorkloadCommon):
def update_env_for_versions(self, objects):
    for object in objects.root:
        rendered_name = object.root.metadata.name
        containers = self.root.spec.template.spec.containers
        for container in containers:
            for env in container.env:
                if "valueFrom" in env and "secretKeyRef" in env["valueFrom"]:
                    if env["valueFrom"].secretKeyRef.name == rendered_name.rsplit('-', 1)[0]:
                        env["valueFrom"].secretKeyRef.name = rendered_name
called after
workload.add_volumes_for_objects(secrets)
I can't help but feel there's a neater solution; this could do unexpected things. It also only handles secrets.
Apologies for the lack of a PR; my generator is hacked about quite a bit in ways you wouldn't want, and I'm pushed for time. I'll try to backport the other bits that are globally applicable and do a PR for this if no one can see a better solution.
The env["valueFrom"] pains me, but Python insisted...
When versioning is enabled for configmaps, the version is not calculated correctly.
The hash is applied based on the self.root object, which is something like {'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'name': 'name-of-cm', 'labels': {'name': 'label-name'}, 'namespace': 'namespace'}}. See the according code.
The hash should be calculated based on the content of the configmap, so .data or .stringData.
The versioning might also be broken for secrets, but I haven't tested it. I assume so because the SharedConfig class is used for both ConfigMaps and Secrets.
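A minimal sketch of the fix being suggested, deriving the version hash from the rendered payload fields instead of the whole object. Here `obj` stands in for the kadet object's dict form, and the function name and hash length are assumptions:

```python
import hashlib
import json

def content_hash(obj: dict, length: int = 8) -> str:
    """Hash only data/binaryData/stringData, ignoring metadata."""
    payload = {k: obj[k] for k in ("data", "binaryData", "stringData") if k in obj}
    serialized = json.dumps(payload, sort_keys=True)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()[:length]
```

With this, two ConfigMaps that differ only in metadata get the same version, and any change to the content changes it.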
Please add support for initContainers.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
parameters:
sensu-backend:
image: sensu/sensu:6
version: '6'
component: go
components:
sensu-backend:
replicas: 3
type: statefulset
image: ${sensu-backend:image}
imagePullPolicy: Always
# Possible solution
initContainers:
nginxA:
image: yzv
command: [
"bash", "-c",
"|", "set -ex"
]
nginxB:
image: yzv
command: [
"bash", "-c",
"|", "set -ex"
]
https://github.com/kapicorp/kapitan-reference/blob/master/.kapitan#L1
The existing .kapitan requires version 0.31.0rc0.
As Kapitan 0.31.0 is already released, it seems reasonable to update this requirement to 0.31.0.
When checking out the repository and running the compile command, the output directory and all its contents are owned by root.
$ > uname -a
Linux aretousa 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64 GNU/Linux
$ > sudo rm -rf compiled
$ > ./kapitan compile
Compiled kapicorp/tesoro (1.21s)
Compiled examples/tutorial (1.50s)
Compiled examples/pritunl (1.39s)
Compiled examples/echo-server (1.20s)
Compiled kapicorp/dev-sockshop (3.53s)
Compiled kapicorp/prod-sockshop (3.61s)
Compiled examples/global (0.56s)
Compiled examples/examples (1.85s)
Compiled examples/mysql (1.28s)
Compiled examples/gke-pvm-killer (0.98s)
Compiled examples/postgres-proxy (0.98s)
Compiled examples/sock-shop (3.21s)
$ > ls -alh
…
drwx------ 14 root root 4.0K Jun 17 11:30 compiled
…
I'd like to add a component which is a CronJob. The CronJob should run a container with a volume mount.
The backup (CronJob) should then be written to the PVC.
After that, my k8s backup mechanism takes care of the rest.
The libsonnet generator simply uses podspec, which contains replicas and strategy by default.
However, a CronJob's restartPolicy can only be one of "OnFailure" or "Never",
strategy isn't supported,
and the generated matchLabels selector doesn't fit onto a CronJob resource.
#
# Backup
#
postgres_backup:
type: deployment
image: ${postgres:image}
replicas: ""
schedule: ${postgres:backup:interval}
env:
PGHOST: ${database:host}
PGPORT: ${database:port}
PGDATABASE: ${database:name}
PGUSER: ${database:user}
PGPASSWORD: ${postgres:users:postgres:password}
volume_mounts:
pg-backup:
mountPath: /backup/${database:name}
subPath: pgdata
args:
- DUMP_FILE_NAME="backupOn`date +%Y-%m-%d-%H-%M`.dump"
- echo "Creating dump: $DUMP_FILE_NAME"
- cd pg_backup
- pg_dump -C -w --format=c --blobs > $DUMP_FILE_NAME
- if [ $? -ne 0 ]; then
- rm $DUMP_FILE_NAME
- echo "Back up not created, check db connection settings"
- exit 1
- fi
- echo 'Successfully Backed Up'
- exit 0
$ kubectl apply --recursive -f compiled/gitea/manifests/
The CronJob "postgres_backup" is invalid:
* spec.jobTemplate.spec.template.spec.restartPolicy: Required value: valid values: "OnFailure", "Never"
* spec.jobTemplate.spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/managed-by":"kapitan", "app.kubernetes.io/part-of":"gitea", "name":"postgres_backup"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: `selector` will be auto-generated
In my current use case I want to add a service of type NodePort. Currently this is possible, but without the ability to specify the nodePort itself.
So if I create a service of type NodePort, I get a service with a nodePort randomly assigned by Kubernetes from the 30000-32767 range.
Official docs:
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
Expected input:
components:
myapp:
[...]
ports:
my-service:
service_port: 80
container_port: 80
node_port: 30007
expected outcome:
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: NodePort
selector:
app: myapp
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 80
targetPort: 80
# Optional field
# By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
nodePort: 30007
The old libsonnet generator has it:
kapitan-reference/lib/kap.libsonnet
Line 132 in 92b13dc
The new kadet generator doesn't honor the node_port:
kapitan-reference/components/generators/kubernetes/__init__.py
Lines 275 to 293 in cfe6f51
So I exceeded my knowledge and time to solve this issue myself.
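For illustration, honoring node_port in the kadet generator could look roughly like this. The field names follow the expected input above, but the helper itself is a sketch, not the repo's actual code:

```python
def render_service_port(name: str, spec: dict) -> dict:
    """Build a Service port entry, passing node_port through when given."""
    port = {
        "name": name,
        "port": spec["service_port"],
        "targetPort": spec.get("container_port", spec["service_port"]),
        "protocol": spec.get("protocol", "TCP"),
    }
    if "node_port" in spec:
        # Only meaningful for type: NodePort (or LoadBalancer) services;
        # Kubernetes validates the 30000-32767 range server-side.
        port["nodePort"] = spec["node_port"]
    return port
```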
Pod Management Policies
In Kubernetes 1.7 and later, StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: vault
namespace: my-vault-namespace
labels:
app.kubernetes.io/name: vault
app.kubernetes.io/instance: vault
app.kubernetes.io/managed-by: Helm
spec:
serviceName: vault-internal
podManagementPolicy: Parallel
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
Allow usage of:
apiVersion: apps/v1
kind: Deployment/Statefulset
metadata:
name: "my-release-harbor-portal"
labels:
heritage: Helm
release: my-release
chart: harbor
app: "harbor"
component: portal
spec:
template:
spec:
[...]
automountServiceAccountToken: false
[...]
Discovered while adopting helm-portal
Really glad I stumbled upon this - the manifest generators are super handy and make defining multiple homogeneous services really easy.
When using the deployment/service manifest generator, if the service directive is specified then a service is created for each port defined in the container. I have a use case in which a container exposes two ports, a user-facing HTTP port and an admin port for healthchecks/metrics, and it is only necessary to create a service for the former. It's not a big deal, but more for neatness/not creating unnecessary services.
I guess a simple (?) fix would be to only create services for ports for which service_port is defined. So, for the following configuration, only the http-server port would have a service created.
parameters:
applications:
<app-name>:
component_defaults:
service:
type: ClusterIP
ports:
admin-server:
container_port: <port-1>
http-server:
container_port: <port-2>
service_port: 80
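The proposed fix can be sketched as a filter over the component's ports before the Service is built. This is illustrative only; the helper name and schema are assumptions:

```python
def service_ports(ports: dict) -> dict:
    """Keep only ports that explicitly declare a service_port."""
    return {name: spec for name, spec in ports.items() if "service_port" in spec}
```

For the configuration above, only http-server survives the filter, so admin-server never gets a Service.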
Document the manifest generator structure.
The idea is to use a json-schema-to-doc tool to keep the JSON schema self-documenting.
When trying to create a cronjob resource with the following config:
parameters:
postgres-backup:
image: moep1990/pgbackup:latest
component: database
persistence:
backup:
storageclass: ${storageclass}
accessModes: ["ReadWriteOnce"]
size: 10Gi
components:
postgres-backup:
type: job
schedule: "0 */6 * * *"
image: ${postgres-backup:image}
env:
PGDATABASE: ${database:name}
PGHOST: ${database:host}
PGPASSWORD: "xyz"
PGPORT: ${database:port}
PGUSER: ${database:user}
# Persistence
volume_mounts:
backup:
mountPath: /var/postgres
subPath: postgres
volume_claims:
backup:
spec:
accessModes: ${persistence:backup:accessModes}
storageClassName: ${persistence:backup:storageclass}
resources:
requests:
storage: ${persistence:backup:size}
I get the following error:
$ ./kapitan compile -t artifactory
Unknown (Non-Kapitan) Error occurred
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 466, in compile_target
input_compiler.compile_obj(comp_obj, ext_vars, **kwargs)
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 55, in compile_obj
self.compile_input_path(input_path, comp_obj, ext_vars, **kwargs)
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 77, in compile_input_path
**kwargs,
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 120, in compile_file
output_obj = kadet_module.main(input_params).to_dict()
File "/src/components/generators/kubernetes/__init__.py", line 908, in main
return globals()[function](input_params)
File "/src/components/generators/kubernetes/__init__.py", line 841, in generate_manifests
workload.add_volumes_for_objects(configs)
AttributeError: 'CronJob' object has no attribute 'add_volumes_for_objects'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in compile_targets
[p.get() for p in pool.imap_unordered(worker, target_objs) if p]
File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in <listcomp>
[p.get() for p in pool.imap_unordered(worker, target_objs) if p]
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 748, in next
raise value
AttributeError: 'CronJob' object has no attribute 'add_volumes_for_objects'
'CronJob' object has no attribute 'add_volumes_for_objects'
Describe the bug/feature
I noticed a NoneType exception when providing empty keys in the classes:
parameters:
components:
mysql:
[...]
config_maps:
config:
secrets:
secrets:
data:
mysql-root-password:
value: ?{plain:targets/${target_name}/mysql-root-password||randomstr:32|base64}
mysql-password:
value: ?{plain:targets/${target_name}/mysql-password||randomstr|base64}
To Reproduce
Steps to reproduce the behavior:
git clone https://github.com/kapicorp/kapitan-reference.git
config_maps.config
./kapitan compile
Expected behavior
Empty keys without values should be ignored
If it's a bug (please complete the following information):
I'm using docker provided by the kapitan-reference repo
https://github.com/kapicorp/kapitan-reference/blob/master/kapitan
Additional context
$ ./kapitan compile
Unknown (Non-Kapitan) Error occurred
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 463, in compile_target
input_compiler.compile_obj(comp_obj, ext_vars, **kwargs)
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 54, in compile_obj
self.compile_input_path(input_path, comp_obj, ext_vars, **kwargs)
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 76, in compile_input_path
**kwargs,
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 120, in compile_file
output_obj = kadet_module.main(input_params).to_dict()
File "/src/components/generators/kubernetes/__init__.py", line 799, in main
return globals()[function](input_params)
File "/src/components/generators/kubernetes/__init__.py", line 735, in generate_manifests
config_maps = GenerateConfigMaps(name=name, component=component).root
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 188, in __init__
self.body()
File "/src/components/generators/kubernetes/__init__.py", line 541, in body
component.config_maps.items()]
File "/src/components/generators/kubernetes/__init__.py", line 540, in <listcomp>
self.root = [ConfigMap(name=name, config=config, component=component) for config_name, config in
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 188, in __init__
self.body()
File "/src/components/generators/kubernetes/__init__.py", line 143, in body
for key, config_spec in config.data.items():
AttributeError: 'NoneType' object has no attribute 'data'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in compile_targets
[p.get() for p in pool.imap_unordered(worker, target_objs) if p]
File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in <listcomp>
[p.get() for p in pool.imap_unordered(worker, target_objs) if p]
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 748, in next
raise value
AttributeError: 'NoneType' object has no attribute 'data'
'NoneType' object has no attribute 'data'
Discovered in harbor-core:
apiVersion: apps/v1
kind: Deployment/Statefulset
metadata:
name: my-release-harbor-core
labels:
heritage: Helm
release: my-release
chart: harbor
app: "harbor"
component: core
spec:
template:
spec:
containers:
- name: core
[...]
startupProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 360
initialDelaySeconds: 10
periodSeconds: 10
[...]
parameters:
filebeat:
image: docker.elastic.co/beats/filebeat
version: "1.9"
component: go
# Component/s
components:
filebeat:
name: filebeat
type: statefulset
$ ./kapitan compile -t filebeat
Compiled filebeat (0.66s)
parameters:
filebeat:
image: docker.elastic.co/beats/filebeat
version: "1.9"
component: go
# Component/s
components:
filebeat:
name: filebeat
type: daemonset
$ ./kapitan compile -t filebeat
Unknown (Non-Kapitan) Error occurred
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 466, in compile_target
input_compiler.compile_obj(comp_obj, ext_vars, **kwargs)
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 55, in compile_obj
self.compile_input_path(input_path, comp_obj, ext_vars, **kwargs)
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 77, in compile_input_path
**kwargs,
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 120, in compile_file
output_obj = kadet_module.main(input_params).to_dict()
File "/src/components/generators/kubernetes/__init__.py", line 908, in main
return globals()[function](input_params)
File "/src/components/generators/kubernetes/__init__.py", line 826, in generate_manifests
workload = Workload(name=name, component=component)
File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 188, in __init__
self.body()
File "/src/components/generators/kubernetes/__init__.py", line 508, in body
raise ()
TypeError: exceptions must derive from BaseException
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in compile_targets
[p.get() for p in pool.imap_unordered(worker, target_objs) if p]
File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in <listcomp>
[p.get() for p in pool.imap_unordered(worker, target_objs) if p]
File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 748, in next
raise value
TypeError: exceptions must derive from BaseException
exceptions must derive from BaseException
For a Deployment + Job + CronJob resource it's sometimes required to have one or more additional PVs/PVCs.
Therefore I need the generators to be able to generate them.
Currently the PV/PVC generator can only create actual persistence via volumeClaimTemplate with type: statefulset.
parameters:
kapitan:
compile:
- output_path: manifests
input_type: jinja2
input_paths:
- templates/jinja/pvc.yml
parameters:
extra:
pvcs:
- name: pg-backup
spec:
storageClassName: ${postgres:persistence:storageclass}
accessModes: ${postgres:persistence:accessModes}
resources:
requests:
storage: ${postgres:backup:size}
{% set p = inventory.parameters %}
{% for pvc in p.extra.pvcs %}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ pvc.name }}
namespace: {{ p.namespace }}
labels: {{ p.generators.manifest.default_config.labels }}
annotations: {{ p.generators.manifest.default_config.annotations }}
spec: {{ pvc.spec }}
{% endfor %}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pg-backup
namespace: gitea
labels: {'app.kubernetes.io/part-of': 'gitea', 'app.kubernetes.io/managed-by': 'kapitan'}
annotations: {'manifests.kapicorp.com/generated': 'true'}
spec: {'storageClassName': 'standard', 'accessModes': ['ReadWriteOnce'], 'resources': {'requests': {'storage': '10Gi'}}}
Depends on https://github.com/kapicorp/kapitan-reference/pull/46/files
It would be great to be able to set application type: cronjob, along with a schedule, and generate a https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#cronjob-v1beta1-batch type configuration.
The main implementation question I see with this concerns the fact that the Cronjob's spec.jobTemplate field is itself a jobSpec, so it would be nice to reuse this, but all the configuration setting is hardcoded to happen at the root.spec level (like here), so moving it down to the root.spec.jobTemplate.spec level seems a bit fiddly.
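The re-nesting concern can be sketched as follows: build the job spec once, then wrap it under spec.jobTemplate.spec for the CronJob case. This is a hypothetical helper to show the shape of the output, not the generator's real structure:

```python
def wrap_as_cronjob(name: str, schedule: str, job_spec: dict) -> dict:
    """Wrap an already-built job spec into a CronJob manifest."""
    return {
        "apiVersion": "batch/v1beta1",
        "kind": "CronJob",
        "metadata": {"name": name},
        "spec": {
            "schedule": schedule,
            # The same job_spec a plain Job would use, one level deeper
            "jobTemplate": {"spec": job_spec},
        },
    }
```

The fiddly part the issue mentions is that the existing code writes directly to root.spec, so it would need an indirection like this to target root.spec.jobTemplate.spec instead.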
It'd be brilliant if we could specify different configurations for the liveness and readiness probes separately. In the current framework, any configuration applied to one healthcheck is applied to both, but I think it's a reasonable requirement to have different configurations, e.g. a separate HTTP path for readiness and liveness.
My vision for this is that readiness and liveness become fields directly under healthcheck, with the configuration defined beneath them, rather than having them as members of an array, so:
healthcheck:
readiness:
type: http
port: http
path: /_health
timeout_seconds: 3
for example
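A sketch of how that schema could be rendered, with each key under healthcheck mapping to its own probe so readiness and liveness no longer share settings. The helper and defaults here are assumptions:

```python
def render_probes(healthcheck: dict) -> dict:
    """Render readiness/liveness keys into readinessProbe/livenessProbe."""
    probes = {}
    for kind, cfg in healthcheck.items():  # kind: "readiness" or "liveness"
        probe = {"timeoutSeconds": cfg.get("timeout_seconds", 1)}
        if cfg.get("type") == "http":
            probe["httpGet"] = {"path": cfg.get("path", "/"), "port": cfg["port"]}
        probes[f"{kind}Probe"] = probe
    return probes
```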
Have had a go at hacking around locally, but it's a bit unclear what to change given all the different places where the project structure is defined - schema.json, service_component, schemas.libsonnet, kap.libsonnet... any pointers? :)