kapicorp / kapitan-reference

Reference structure for Kapitan - alpha version

Home Page: https://www.kapicorp.com

Languages: Shell 60.57%, Python 1.02%, Jinja 3.41%, Mustache 35.01%
Topics: kapitan, templates, jsonnet, terraform, kubernetes

kapitan-reference's People

Contributors

ademariag, alanhughes, eugenfo, lingwooc, moep90, rasmusdencker, splichy, yolabingo


kapitan-reference's Issues

Allow generating different Ingress types / allow specifying hostnames

Possible sources:
As already mentioned here: Slack#Kapitan
It might be possible to use this: https://github.com/bitnami-labs/kube-libsonnet, which already includes several more Ingress features.

Types of Ingress
k8s-docs for Ingress

  • Ingress backed by a single Service
  • Simple fanout
  • Name based virtual hosting
  • TLS
  • Load balancing

Ingress host/hostname
Currently the Ingress hostname is set to a wildcard.
In order to change this, please allow the generator to pick up a host.

parameters:
  ingresses:
    sonarqube-ingress:
      host: "foo.bar.com"
      paths:
        - path: /
[...]
      host: "*.foo.com"
      paths:
        - path: /
[...]
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "*.foo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/foo"
        backend:
          service:
            name: service2
            port:
              number: 80

My current Workaround

The Component

  #
  # Ingress
  #
  ingress:
    rules:
      - host: ${target_name}.${domain}
        http:
          paths:
            - pathType: Prefix
              path: /
              backend:
                service:
                  name: ${target_name}
                  port:
                    number: ${gitea:http_port}

The Kapitan Compiler info

parameters:
  kapitan:
    compile:
      - output_path: manifests
        input_type: jinja2
        input_paths: 
          - templates/jinja/ingress.yml

The Template without TLS

{% set p = inventory.parameters %}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ p.target_name }}
  namespace: {{ p.namespace }}
  labels: {{ p.generators.manifest.default_config.labels }}
  annotations: {{ p.generators.manifest.default_config.annotations }}
spec:
  rules: {{ p.ingress.rules }}

The Result:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitea
  namespace: gitea
  labels: {'app.kubernetes.io/part-of': 'gitea', 'app.kubernetes.io/managed-by': 'kapitan'}
  annotations: {'manifests.kapicorp.com/generated': 'true'}
spec:
  rules: [{'host': 'gitea.example.com', 'http': {'paths': [{'pathType': 'Prefix', 'path': '/', 'backend': {'service': {'name': 'gitea', 'port': {'number': 3000}}}}]}}]

The Template with TLS

{% set p = inventory.parameters %}
{% if inventory.parameters.ingress is defined %}
{% set i = inventory.parameters.ingress %}
{% set labels = p.generators.manifest.default_config.labels %}
{% set annotations = p.generators.manifest.default_config.annotations %}
{% for ingress in i %}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ p.target_name }}-{{ loop.index }}
  namespace: {{ p.namespace }}
  labels: {{ i[ingress].extra.labels }}
  annotations: {{ i[ingress].extra.annotations }}
spec:
  tls: {{ i[ingress].tls | default("")}}
  rules: {{ i[ingress].rules }}
{% endfor %}
{% else %}
---
{% endif %}

Kapitan Definition

  extra:
    certs:
      - name: wildcard-example-com
        cert: ?{vaultkv:ssl/wildcard-example-com-cert}
        key: ?{vaultkv:ssl/wildcard-example-com-key}

  ingress:
    wikijs:
      extra:
        labels: []
        annotations:
          nginx.ingress.kubernetes.io/proxy-body-size: "0"
          nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
          nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
      tls:
      - hosts:
          - wiki.${domain}
        secretName: ${target_name}-tls
      rules:
        - host: wiki.${domain}
          http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: wikijs
                  port:
                    number: ${wikijs:service:wikijs:http}

The Result

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-wikijs-1
  namespace: wikijs
  labels: []
  annotations: {'nginx.ingress.kubernetes.io/proxy-body-size': '0', 'nginx.ingress.kubernetes.io/proxy-read-timeout': '600', 'nginx.ingress.kubernetes.io/proxy-send-timeout': '600'}
spec:
  tls: [{'hosts': ['wiki.example.com'], 'secretName': 'k8s-wikijs-tls'}]
  rules: [{'host': 'wiki.example.com', 'http': {'paths': [{'path': '/', 'pathType': 'Prefix', 'backend': {'service': {'name': 'wikijs', 'port': {'number': 3000}}}}]}}]

Allow to generate a hash value for a component

As discussed in Slack

Hey, I'm searching for a mechanism where my deployment rollout will be triggered after running kubectl apply -f compiled/xyz/manifests.
With the default Kubernetes scheduler this is (AFAIK) only possible when something in the spec.template section has changed. I previously used Ansible/Jinja2 to hash all files and then put the hash as a label into the deployment.
Does anyone have an idea how to achieve this with Kapitan?

I'd like to have a function which provides a label per component, e.g. Postgres.
This label should then be automatically added to every resource referring to this Postgres instance.

For example

$ cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
$ cat configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"

  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5    
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true    
$ cat deployment.yml configmap.yml | sha256sum
d8dd3fa5923f8c52908b63532599bae4333155cfbb0c2f26114fdbd6663ab024  -

After changing the containerPort from 80 to 8080

cat deployment.yml configmap.yml | sha256sum
9ea6395c10946fc80d61a00b5c4770654b9f44ec56dd1dac2bd9e9a4dc16a27f  -
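For illustration only, a rough Python sketch of the idea (not actual Kapitan or generator code; configmap_manifest, secret_manifest and deployment are placeholders): hash the rendered config content and attach the digest to the pod template, so kubectl apply rolls the Deployment whenever the content changes.

import hashlib
import yaml  # assumes PyYAML is available

def config_checksum(*manifests):
    # Return a stable sha256 over the rendered manifests (e.g. ConfigMaps/Secrets).
    payload = "".join(yaml.safe_dump(m, sort_keys=True) for m in manifests)
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical usage inside a generator:
#   checksum = config_checksum(configmap_manifest, secret_manifest)
#   deployment["spec"]["template"]["metadata"]["labels"]["checksum/config"] = checksum[:63]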

[Feature] add binaryData field to Configmaps

Describe the feature
A configmap can take a data or binaryData field. Kapitan only allows the data field.

Expected behavior
When defining a "config_maps" dict, I would love to be able to choose between "data" and "binaryData".
So something like:

config_maps:
  data-config:
    mount: /path/data
    readOnly: true
    data:
      application.yml:
        template: some_template.j2
        values: ${config}
  binarydata-config:
    mount: /path/binarydata
    readOnly: true
    binaryData:
      keystore.jks:
        template: another_template.j2
        values: ${another_config}

And maybe add another option to let kapitan encode the template to base64 before writing it into the configmap.
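As a rough sketch of that second idea (names and structure are hypothetical, not the existing generator API), the generator could base64-encode the rendered template output before placing it under binaryData:

import base64

def render_binary_data(entries, render):
    # entries maps file names to {'template': ..., 'values': ...};
    # render is whatever template-rendering function the generator already uses.
    binary_data = {}
    for filename, spec in entries.items():
        content = render(spec["template"], spec.get("values", {}))
        binary_data[filename] = base64.b64encode(content.encode()).decode()
    return binary_data

# configmap["binaryData"] = render_binary_data(component_binary_entries, render)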

Thank you!

Allow the generator to add different service types

Slack discussion here: https://kubernetes.slack.com/archives/C981W2HD3/p1613723263349100
Official kubernetes Documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

There are several proposals for how this could work:

Option 1:

      service:
        clusterip-serviceName:
          type: ClusterIP
          ports:
            udp-video1:
              service_port: ${jvb:udp_media}
              container_port: ${jvb:udp_media}
              protocol: UDP
        loadbalancer-serviceName:
          type: LoadBalancer
          ports:
            udp-video2:
              service_port: ${jvb:udp_media}
              container_port: ${jvb:udp_media}
              protocol: UDP
        nodeport-serviceName:
          type: NodePort
          ports:
            udp-video3:
              service_port: ${jvb:udp_media}
              container_port: ${jvb:udp_media}
              protocol: UDP
        externalname-serviceName:
          type: ExternalName
          ports:
            udp-video4:
              service_port: ${jvb:udp_media}
              container_port: ${jvb:udp_media}
              protocol: UDP

Option 2:

      service:
        ports:
          udp-video:
            type: ClusterIP
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: UDP
          udp-other:
            type: LoadBalancer
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: UDP
          udp-video2:
            type: ExternalName
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: UDP
          udp-other2:
            type: NodePort
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: UDP

Option 3:

      service:
        clusterip:
          - udp-video:
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: UDP
        loadbalancer:
          - udp-other:
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: TCP
          - udp-video2:
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: UDP
          - udp-other2:
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: TCP
        nodeport:
          - udp-other3:
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: UDP
        externalname:
          - udp-other4:
            service_port: ${jvb:udp_media}
            container_port: ${jvb:udp_media}
            protocol: UDP

Allow usage of container-lifecycle-hooks

https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/

          lifecycle:
            # Vault container doesn't receive SIGTERM from Kubernetes
            # and after the grace period ends, Kube sends SIGKILL.  This
            # causes issues with graceful shutdowns such as deregistering itself
            # from Consul (zombie services).
            preStop:
              exec:
                command: [
                  "/bin/sh", "-c",
                  # Adding a sleep here to give the pod eviction a
                  # chance to propagate, so requests will not be made
                  # to this pod while it's terminating
                  "sleep 5 && kill -SIGTERM $(pidof vault)",
                ]
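Speculatively, the generator could simply pass a user-supplied lifecycle block through to the container spec unchanged; a tiny Python sketch of that pass-through (component and container are placeholders for objects the generator already holds):

def apply_lifecycle(component, container):
    # Hypothetical pass-through: copy a `lifecycle` block from the component config
    # onto the generated container spec and let Kubernetes validate it.
    lifecycle = component.get("lifecycle")  # e.g. {"preStop": {"exec": {"command": [...]}}}
    if lifecycle:
        container["lifecycle"] = lifecycle
    return container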

Allow generators to pick up value references not just by environment variables

Hey,
for several projects, especially StatefulSets, we need to reference values into another container as described here:
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/

We would also like to use them as labels, for example to attach the current node name as a pod label.

In Kapitan this could look like this:

parameters:
  components:
    app-name:
       # Metadata
      pod:
        labels:
          "app.kubernetes.io/running-on":
            fieldRef:
              fieldPath: spec.nodeName

      labels:
        app.kubernetes.io/version: ${sensu-backend:version}
        app.kubernetes.io/component: ${sensu-backend:component}
      
      env:
        MYSQL_ROOT_PASSWORD:
          secretKeyRef:
            key: mysql-root-password
        POD_NAME:
          fieldRef:
            fieldPath: metadata.name

Allow generators to generate PodSecurityPolicy

Official kubernetes Documentation:
https://kubernetes.io/docs/concepts/policy/pod-security-policy/

There are several proposals for how this could work:

parameters:
  #
  # Parameters
  #
  sensu-backend:
    image: sensu/sensu:6
    version: 6
    component: go
    # gRPC sensu-storage-client
    grpc-storage-client: 2379
    # grpc storage peer
    grpc-storage-peer: 2380
    # sensu-web-ui
    web-ui: 3000
    # profiling
    profiling: 6060
    # api
    api: 8080
    # websocket
    agent-api: 8081

  # Component/s
  components:
    sensu-backend:
      type: statefulset
      image: ${sensu-backend:image}
      imagePullPolicy: Always
      [...]

      podsecuritypolicy:
        privileged: true
        allowPrivilegeEscalation: true
        allowedCapabilities:
        - '*'
        volumes:
        - '*'
        hostNetwork: true
        hostPorts:
        - min: 0
          max: 65535
        hostIPC: true
        hostPID: true
        runAsUser:
          rule: 'RunAsAny'
        seLinux:
          rule: 'RunAsAny'
        supplementalGroups:
          rule: 'RunAsAny'
        fsGroup:
          rule: 'RunAsAny'

Will generate:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
  annotations:
    manifests.kapicorp.com/generated: 'true'
  labels:
    app.kubernetes.io/component: go
    app.kubernetes.io/managed-by: kapitan
    app.kubernetes.io/part-of: sensu
    app.kubernetes.io/version: 6
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # This is redundant with non-root + disallow privilege escalation,
  # but we can provide it for defense in depth.
  requiredDropCapabilities:
    - ALL
  # Allow core volume types.
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that persistentVolumes set up by the cluster admin are safe to use.
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false

Versioning and env valueFrom does not work in generators

If you do this with a versioned secret:

  components:
    echo-server:
      <other config>
      env:
        KAPITAN_SECRET:
          secretKeyRef:
            key: 'kapitan_secret'

You would expect the resolved name to include the version, but it takes its information from a part of the dataset that doesn't yet have versions (they haven't been calculated yet).

A solution could be something like this (in WorkloadCommon):

    def update_env_for_versions(self, objects):
        for object in objects.root:
            rendered_name = object.root.metadata.name

            containers = self.root.spec.template.spec.containers
            for container in containers:
                for env in container.env:
                    if "valueFrom" in env and "secretKeyRef" in env["valueFrom"]:
                        if env["valueFrom"].secretKeyRef.name == rendered_name.rsplit('-', 1)[0]:
                            env["valueFrom"].secretKeyRef.name = rendered_name

It would be called after workload.add_volumes_for_objects(secrets).
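Roughly, the call order would be (update_env_for_versions being the proposed method above):

workload.add_volumes_for_objects(secrets)
workload.update_env_for_versions(secrets)  # proposed: fix env secret names after versioning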
I can't help but feel there's a neater solution; this could do unexpected things. It also only handles secrets.

Apologies for the lack of a PR; my generator is hacked about quite a bit in ways you wouldn't want and I'm pushed for time. I'll try to backport the other bits that are globally applicable and do a PR for this if no one can see a better solution.


The env["valueFrom"] pains me, but Python insisted...

Configmap version is not calculated correctly

When versioning is enabled for ConfigMaps, the version is not calculated correctly.
The hash is calculated from the self.root object, which at that point is something like {'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'name': 'name-of-cm', 'labels': {'name': 'label-name'}, 'namespace': 'namespace'}}. See the corresponding code.

The hash should be calculated from the content of the ConfigMap, i.e. .data or .stringData.

The versioning might also be broken for Secrets, but I haven't tested it. I assume so because the SharedConfig class is used for both ConfigMaps and Secrets.
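A hedged sketch of the suggested fix (illustrative only, not the repo's actual code): compute the hash over the content fields rather than over the metadata-only object.

import hashlib
import json

def content_hash(manifest):
    # Hash only the user-visible content of a ConfigMap/Secret manifest.
    content = {
        "data": manifest.get("data", {}),
        "binaryData": manifest.get("binaryData", {}),
        "stringData": manifest.get("stringData", {}),  # Secrets only
    }
    serialized = json.dumps(content, sort_keys=True)
    return hashlib.sha256(serialized.encode()).hexdigest()[:8]

# versioned_name = manifest["metadata"]["name"] + "-" + content_hash(manifest)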

Compiled output is owned by root

When checking out the repository and running the compile command, the output directory and all its contents are owned by root.

$ > uname -a
Linux aretousa 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64 GNU/Linux
$ > sudo rm -rf compiled
$ > ./kapitan compile
Compiled kapicorp/tesoro (1.21s)
Compiled examples/tutorial (1.50s)
Compiled examples/pritunl (1.39s)
Compiled examples/echo-server (1.20s)
Compiled kapicorp/dev-sockshop (3.53s)
Compiled kapicorp/prod-sockshop (3.61s)
Compiled examples/global (0.56s)
Compiled examples/examples (1.85s)
Compiled examples/mysql (1.28s)
Compiled examples/gke-pvm-killer (0.98s)
Compiled examples/postgres-proxy (0.98s)
Compiled examples/sock-shop (3.21s)
$ > ls -alh
…
drwx------ 14 root        root        4.0K Jun 17 11:30 compiled
…

[Enhancement] Please improve generator for Job/CronJob

Usecase:

I'd like to add a component which is a CronJob. The CronJob should run a container with a volume mount.
The backup produced by the CronJob should then be written to the PVC.
After that, my k8s backup mechanism takes care of the rest.

Affected Resources

  1. CronJob Resource
  2. PV Resources
  3. PVC Resources

What does not work as expected?

  • The libsonnet generator simply uses the pod spec, which contains replicas and strategy by default.
  • The restartPolicy can only be one of "OnFailure" or "Never".
  • Additionally, strategy isn't supported.
  • The generated matchLabels don't fit a CronJob resource.

    #
    # Backup
    #
    postgres_backup:
      type: deployment
      image: ${postgres:image}
      replicas: ""
      schedule: ${postgres:backup:interval}
      env:
        PGHOST: ${database:host}
        PGPORT: ${database:port}
        PGDATABASE: ${database:name}
        PGUSER: ${database:user}
        PGPASSWORD: ${postgres:users:postgres:password}
      volume_mounts:
        pg-backup:
          mountPath: /backup/${database:name}
          subPath: pgdata
      args: 
        -  DUMP_FILE_NAME="backupOn`date +%Y-%m-%d-%H-%M`.dump"
        -  echo "Creating dump: $DUMP_FILE_NAME"
        -  cd pg_backup
        -  pg_dump -C -w --format=c --blobs > $DUMP_FILE_NAME
        -  if [ $? -ne 0 ]; then
        -    rm $DUMP_FILE_NAME
        -    echo "Back up not created, check db connection settings"
        -  exit 1
        -  fi
        -  echo 'Successfully Backed Up'
        -  exit 0

The error

$ kubectl apply --recursive -f compiled/gitea/manifests/
The CronJob "postgres_backup" is invalid: 
* spec.jobTemplate.spec.template.spec.restartPolicy: Required value: valid values: "OnFailure", "Never"
* spec.jobTemplate.spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/managed-by":"kapitan", "app.kubernetes.io/part-of":"gitea", "name":"postgres_backup"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: `selector` will be auto-generated
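In other words, the generator would need to default the pod's restartPolicy and stop emitting a selector for CronJob resources; a rough sketch of that post-processing (names hypothetical, not the repo's actual code):

def fixup_cronjob(cronjob):
    # Hypothetical post-processing for a CronJob manifest built from a generic pod spec.
    pod_spec = cronjob["spec"]["jobTemplate"]["spec"]["template"]["spec"]
    pod_spec.setdefault("restartPolicy", "OnFailure")             # CronJobs accept only OnFailure/Never
    cronjob["spec"]["jobTemplate"]["spec"].pop("selector", None)  # selector is auto-generated
    return cronjob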

[FEATURE] Add ability to specify nodePort

In my current use case I want to add a Service of type NodePort. Currently this is possible, but without the ability to specify the nodePort itself.
So if I create a Service of type NodePort, I get a Service with a nodePort randomly allocated by Kubernetes from the 30000-32767 range.

Official docs:
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport

Expected input:

  components:
    myapp:
[...]
      ports:
        my-service:
          service_port: 80
          container_port: 80
          node_port: 30007

Expected outcome:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007

The old libsonnet generator has it:

nodePort: utils.objectGet(self.port_info, 'node_port'),

The new kadet generator doesn't honor the node_port:

all_ports = [component.ports] + [container.ports for container in component.additional_containers.values()
                                 if 'ports' in container]
exposed_ports = {}

for port in all_ports:
    for port_name in port.keys():
        if not service_spec.expose_ports or port_name in service_spec.expose_ports:
            exposed_ports.update(port)

for port_name in sorted(exposed_ports):
    port_spec = exposed_ports[port_name]
    if 'service_port' in port_spec:
        self.root.spec.ports += [{
            'name': port_name,
            'port': port_spec.service_port,
            'targetPort': port_name,
            'protocol': port_spec.get('protocol', 'TCP')
        }]

This exceeds my knowledge and the time I have to fix the issue myself 👎
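For what it's worth, a minimal, untested sketch of how the port loop above could pass node_port through (names follow the kadet snippet; I haven't verified this against the generator):

for port_name in sorted(exposed_ports):
    port_spec = exposed_ports[port_name]
    if 'service_port' in port_spec:
        port_entry = {
            'name': port_name,
            'port': port_spec.service_port,
            'targetPort': port_name,
            'protocol': port_spec.get('protocol', 'TCP'),
        }
        if 'node_port' in port_spec:
            # Only meaningful for type: NodePort services; otherwise leave it out.
            port_entry['nodePort'] = port_spec.node_port
        self.root.spec.ports += [port_entry]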

Allow usage of podManagementPolicy

Pod Management Policies
In Kubernetes 1.7 and later, StatefulSet allows you to relax its ordering guarantees while preserving its uniqueness and identity guarantees via its .spec.podManagementPolicy field.

https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vault
  namespace: my-vault-namespace
  labels:
    app.kubernetes.io/name: vault
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
spec:
  serviceName: vault-internal
  podManagementPolicy: Parallel
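In generator terms this would presumably just be a pass-through of the field; a short sketch (names are placeholders, not the repo's actual code):

def apply_pod_management_policy(component, statefulset):
    # Hypothetical: copy podManagementPolicy from the component config onto the StatefulSet spec.
    if "pod_management_policy" in component:
        statefulset["spec"]["podManagementPolicy"] = component["pod_management_policy"]
    return statefulset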

Create services on a per-port basis

Really glad I stumbled upon this - the manifest generators are super handy and make defining multiple homogeneous services really easy 👍

When using the deployment/service manifest generator, if the service directive is specified then a service is created for each port defined in the container. I have a use-case in which we have a container that runs two ports, a user-facing http port and an admin port for healthchecks/metrics, and it is only necessary to create a service for the former. It's not a big deal, but more for neatness/not creating unnecessary services.

I guess a simple (?) fix would be to only create services for which service_port is defined. So, for the following configuration, only the http-server port would have a service created.

parameters:
  applications:
    <app-name>:
      component_defaults:
        service:
          type: ClusterIP
        ports:
          admin-server:
            container_port: <port-1>
          http-server:
            container_port: <port-2>
            service_port: 80
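A sketch of that filter (the port dict shape follows the example above; the helper name is made up): only ports that define service_port would end up on the Service.

def service_ports(ports):
    # Keep only ports that explicitly define a service_port, so admin/metrics-only
    # container ports never get a Service entry.
    return {name: spec for name, spec in ports.items() if "service_port" in spec}

# service_ports({"admin-server": {"container_port": 9090},
#                "http-server": {"container_port": 8080, "service_port": 80}})
# -> {"http-server": {"container_port": 8080, "service_port": 80}}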

Document manifest generator

Document the manifest generator structure.

The idea is to use a JSON-schema-to-docs tool to keep the JSON schema self-documenting.

Error creating a cronjob resource

When trying to create cronjob resource with the following config:

parameters:
  postgres-backup:
    image: moep1990/pgbackup:latest
    component: database
  
  persistence:
    backup:
      storageclass: ${storageclass}
      accessModes: ["ReadWriteOnce"]
      size: 10Gi

  components:
    postgres-backup:
      type: job
      schedule: "0 */6 * * *"
      image: ${postgres-backup:image}
      env:
        PGDATABASE: ${database:name}
        PGHOST: ${database:host}
        PGPASSWORD: "xyz"
        PGPORT: ${database:port}
        PGUSER: ${database:user}
      # Persistence
      volume_mounts:
        backup:
          mountPath: /var/postgres
          subPath: postgres
      volume_claims:
        backup:
          spec:
            accessModes: ${persistence:backup:accessModes}
            storageClassName: ${persistence:backup:storageclass}
            resources:
              requests:
                storage: ${persistence:backup:size}

I get the following error:

$ ./kapitan compile -t artifactory
Unknown (Non-Kapitan) Error occurred
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 466, in compile_target
    input_compiler.compile_obj(comp_obj, ext_vars, **kwargs)
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 55, in compile_obj
    self.compile_input_path(input_path, comp_obj, ext_vars, **kwargs)
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 77, in compile_input_path
    **kwargs,
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 120, in compile_file
    output_obj = kadet_module.main(input_params).to_dict()
  File "/src/components/generators/kubernetes/__init__.py", line 908, in main
    return globals()[function](input_params)
  File "/src/components/generators/kubernetes/__init__.py", line 841, in generate_manifests
    workload.add_volumes_for_objects(configs)
AttributeError: 'CronJob' object has no attribute 'add_volumes_for_objects'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in compile_targets
    [p.get() for p in pool.imap_unordered(worker, target_objs) if p]
  File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in <listcomp>
    [p.get() for p in pool.imap_unordered(worker, target_objs) if p]
  File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
AttributeError: 'CronJob' object has no attribute 'add_volumes_for_objects'


'CronJob' object has no attribute 'add_volumes_for_objects'

NoneType exception when providing empty Keys in the classes

Describe the bug/feature
I noticed a NoneType exception when providing empty keys in the classes:

parameters:
  components:
    mysql:
[...]
      config_maps:
        config:

      secrets:
        secrets:
          data:
            mysql-root-password:
              value: ?{plain:targets/${target_name}/mysql-root-password||randomstr:32|base64}
            mysql-password:
              value: ?{plain:targets/${target_name}/mysql-password||randomstr|base64}

To Reproduce
Steps to reproduce the behavior:

  1. git clone https://github.com/kapicorp/kapitan-reference.git
  2. Go to file: https://github.com/kapicorp/kapitan-reference/blob/master/inventory/classes/components/mysql.yml
  3. Remove content from config_maps.config
  4. Run ./kapitan compile

Expected behavior
Empty keys without values should be ignored.

If it's a bug (please complete the following information):
I'm using docker provided by the kapitan-reference repo
https://github.com/kapicorp/kapitan-reference/blob/master/kapitan

Additional context

$ ./kapitan compile
Unknown (Non-Kapitan) Error occurred
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 463, in compile_target
    input_compiler.compile_obj(comp_obj, ext_vars, **kwargs)
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 54, in compile_obj
    self.compile_input_path(input_path, comp_obj, ext_vars, **kwargs)
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 76, in compile_input_path
    **kwargs,
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 120, in compile_file
    output_obj = kadet_module.main(input_params).to_dict()
  File "/src/components/generators/kubernetes/__init__.py", line 799, in main
    return globals()[function](input_params)
  File "/src/components/generators/kubernetes/__init__.py", line 735, in generate_manifests
    config_maps = GenerateConfigMaps(name=name, component=component).root
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 188, in __init__
    self.body()
  File "/src/components/generators/kubernetes/__init__.py", line 541, in body
    component.config_maps.items()]
  File "/src/components/generators/kubernetes/__init__.py", line 540, in <listcomp>
    self.root = [ConfigMap(name=name, config=config, component=component) for config_name, config in
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 188, in __init__
    self.body()
  File "/src/components/generators/kubernetes/__init__.py", line 143, in body
    for key, config_spec in config.data.items():
AttributeError: 'NoneType' object has no attribute 'data'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in compile_targets
    [p.get() for p in pool.imap_unordered(worker, target_objs) if p]
  File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in <listcomp>
    [p.get() for p in pool.imap_unordered(worker, target_objs) if p]
  File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
AttributeError: 'NoneType' object has no attribute 'data'


'NoneType' object has no attribute 'data'
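The expected behaviour could be implemented as a guard that drops empty entries before iterating over .data; a hedged sketch (illustrative names, not the generator's actual internals):

def non_empty_configs(component):
    # Drop config_maps entries that were left empty in the inventory (None),
    # instead of assuming every entry has a `data` dict.
    return {
        name: spec
        for name, spec in (component.get("config_maps") or {}).items()
        if spec
    }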

Allow usage of startupProbe

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

Discovered in harbor-core:

apiVersion: apps/v1
kind: Deployment/Statefulset
metadata:
  name: my-release-harbor-core
  labels:
    heritage: Helm
    release: my-release
    chart: harbor
    app: "harbor"
    component: core
spec:
  template:
    spec:
      containers:
      - name: core
[...]
        startupProbe:
          httpGet:
            path: /api/v2.0/ping
            scheme: HTTP
            port: 8080
          failureThreshold: 360
          initialDelaySeconds: 10
          periodSeconds: 10
[...]

Allow generators to create DaemonSets

Statefulset

parameters:
  filebeat:
    image: docker.elastic.co/beats/filebeat
    version: "1.9"
    component: go

  # Component/s
  components:
    filebeat:
      name: filebeat
      type: statefulset

Success

$ ./kapitan compile -t filebeat
Compiled filebeat (0.66s)

Daemonset

parameters:
  filebeat:
    image: docker.elastic.co/beats/filebeat
    version: "1.9"
    component: go

  # Component/s
  components:
    filebeat:
      name: filebeat
      type: daemonset

Failure

$ ./kapitan compile -t filebeat
Unknown (Non-Kapitan) Error occurred
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 466, in compile_target
    input_compiler.compile_obj(comp_obj, ext_vars, **kwargs)
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 55, in compile_obj
    self.compile_input_path(input_path, comp_obj, ext_vars, **kwargs)
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/base.py", line 77, in compile_input_path
    **kwargs,
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 120, in compile_file
    output_obj = kadet_module.main(input_params).to_dict()
  File "/src/components/generators/kubernetes/__init__.py", line 908, in main
    return globals()[function](input_params)
  File "/src/components/generators/kubernetes/__init__.py", line 826, in generate_manifests
    workload = Workload(name=name, component=component)
  File "/opt/venv/lib/python3.7/site-packages/kapitan/inputs/kadet.py", line 188, in __init__
    self.body()
  File "/src/components/generators/kubernetes/__init__.py", line 508, in body
    raise ()
TypeError: exceptions must derive from BaseException
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in compile_targets
    [p.get() for p in pool.imap_unordered(worker, target_objs) if p]
  File "/opt/venv/lib/python3.7/site-packages/kapitan/targets.py", line 136, in <listcomp>
    [p.get() for p in pool.imap_unordered(worker, target_objs) if p]
  File "/usr/local/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
TypeError: exceptions must derive from BaseException


exceptions must derive from BaseException
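Part of the problem is the bare raise () in the generator's body(); a purely illustrative sketch of a clearer failure mode plus a daemonset branch (the class names here are guesses, not the repo's actual ones):

def build_workload(name, component):
    workload_types = {
        "deployment": Deployment,     # assumed existing class
        "statefulset": StatefulSet,   # assumed existing class
        "daemonset": DaemonSet,       # hypothetical new class
    }
    try:
        cls = workload_types[component.type]
    except KeyError:
        # Raise a real exception with context instead of `raise ()`.
        raise ValueError(f"Unsupported component type: {component.type}") from None
    return cls(name=name, component=component)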

Please improve PV/PVC generator

Usecase

For Deployment, Job, and CronJob resources it's sometimes required to have one or more additional PVs/PVCs.
Therefore I need the generators to be able to generate them.

Issue

Currently the PV/PVC generator only creates actual persistence via volumeClaimTemplates when type: statefulset is used.

Current "Workaround"

parameters:
  kapitan:
    compile:
      - output_path: manifests
        input_type: jinja2
        input_paths: 
          - templates/jinja/pvc.yml

The Component

parameters:
  extra:
    pvcs:
      - name: pg-backup
        spec:
          storageClassName: ${postgres:persistence:storageclass}
          accessModes: ${postgres:persistence:accessModes}
          resources:
            requests:
              storage: ${postgres:backup:size}

My Jinja2 Template:

{% set p = inventory.parameters %}
{% for pvc in p.extra.pvcs %}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ pvc.name }}
  namespace: {{ p.namespace }}
  labels: {{ p.generators.manifest.default_config.labels }}
  annotations: {{ p.generators.manifest.default_config.annotations }}
spec: {{ pvc.spec }}
{% endfor %}

The Result

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-backup
  namespace: gitea
  labels: {'app.kubernetes.io/part-of': 'gitea', 'app.kubernetes.io/managed-by': 'kapitan'}
  annotations: {'manifests.kapicorp.com/generated': 'true'}
spec: {'storageClassName': 'standard', 'accessModes': ['ReadWriteOnce'], 'resources': {'requests': {'storage': '10Gi'}}}

Kadet generator for Cronjob

Depends on https://github.com/kapicorp/kapitan-reference/pull/46/files

It would be great to be able to set application type: cronjob, along with a schedule, and generate a https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#cronjob-v1beta1-batch type configuration.

The main implementation question I see with this concerns the fact that the Cronjob's spec.jobTemplate field is itself a jobSpec, so it would be nice to reuse this, but all the configuration setting is hardcoded to happen at the root.spec level (like here), so moving it down to the root.spec.jobTemplate.spec level seems a bit fiddly.
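To make the reuse idea concrete, a rough sketch (plain dicts, not the repo's kadet classes) of nesting an already-generated Job spec under spec.jobTemplate:

def job_to_cronjob(job_manifest, schedule):
    # Hypothetical: reuse the Job manifest the generator already produces and wrap
    # its spec under a CronJob's jobTemplate instead of rebuilding it at root.spec.
    return {
        "apiVersion": "batch/v1beta1",
        "kind": "CronJob",
        "metadata": job_manifest["metadata"],
        "spec": {
            "schedule": schedule,  # e.g. "0 */6 * * *"
            "jobTemplate": {"spec": job_manifest["spec"]},
        },
    }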

Liveness + Readiness Probe HealthCheck config

It'd be brilliant if we could specify different configurations for the liveness and readiness probes separately. In the current framework, any configuration applied to one healthcheck is applied to both, but I think it's a reasonable requirement to have different configurations, e.g. a separate http path for readiness and liveness.

My vision for this is that readiness and liveness become fields directly under healthcheck, with the configuration defined beneath them, rather than have them as members of an array, so

      healthcheck:
        readiness:
          type: http
          port: http
          path: /_health
          timeout_seconds: 3

for example

I've had a go at hacking around locally, but it's a bit unclear what to change given all the different places where the project structure is defined - schema.json, service_component, schemas.libsonnet, kap.libsonnet... any pointers? :)
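No pointer on the exact files from me, but on the parsing side it could look something like this sketch (field names follow the proposal above; everything else is made up):

def build_probe(cfg):
    # Build a single probe from its own config block.
    probe = {"timeoutSeconds": cfg.get("timeout_seconds", 3)}
    if cfg.get("type") == "http":
        probe["httpGet"] = {"path": cfg.get("path", "/"), "port": cfg.get("port", "http")}
    return probe

def apply_healthchecks(component, container):
    # Hypothetical: each probe gets its own block, so readiness and liveness can differ.
    healthcheck = component.get("healthcheck", {})
    if "readiness" in healthcheck:
        container["readinessProbe"] = build_probe(healthcheck["readiness"])
    if "liveness" in healthcheck:
        container["livenessProbe"] = build_probe(healthcheck["liveness"])
    return container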
