splunk / qbec

configure kubernetes objects on multiple clusters using jsonnet

Home Page: https://qbec.io

License: Apache License 2.0

Makefile 0.53% Go 99.28% HTML 0.06% Shell 0.13%
jsonnet kubernetes kubecfg ksonnet k8s-config hacktoberfest

qbec's Introduction

qbec


Qbec (pronounced like the Canadian province) is a CLI tool for creating Kubernetes objects on multiple clusters or namespaces, configured correctly for each target environment.

It is based on jsonnet and is similar to other tools in the same space like kubecfg and ksonnet.
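
As a rough illustration (a minimal sketch, not taken from the README; the file name and contents are hypothetical), a component is just a jsonnet file under components/ that evaluates to one or more Kubernetes objects, and qbec renders and applies it per environment:

// components/hello.jsonnet -- a minimal sketch of a qbec component.
// It evaluates to a single Kubernetes object; qbec attaches its own
// qbec.io/* labels and annotations when the object is applied.
{
  apiVersion: 'v1',
  kind: 'ConfigMap',
  metadata: { name: 'hello' },
  data: { greeting: 'hello, world' },
}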

For more info, read the docs

Installing

Use a prebuilt binary from the releases page for your operating system.

On macOS, you can install qbec using Homebrew:

$ brew tap splunk/tap
$ brew install qbec

Building from source

git clone git@github.com:splunk/qbec
cd qbec
make install  # installs lint tools etc.
make

Sign the CLA

Follow the steps at cla-assistant.

qbec's People

Contributors

abhide, dan1, dependabot[bot], e-zhang, gotwarlost, harsimranmaan, kalhanreddy, korroot, kvaps, michaelw, sj14, splkforrest, taitken-splunk, wurbanski


qbec's Issues

exec plugin: invalid apiVersion "client.authentication.k8s.io/v1beta1"

My $HOME/.kube/config:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: "<snip>"
    server: "<snip>"
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: oidc
  name: oidc@kubernetes
current-context: oidc@kubernetes
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - get-token
      - --oidc-issuer-url=<snip>
      - --oidc-client-id=kubernetes
      - --oidc-extra-scope=email
      - --oidc-extra-scope=groups
      command: kubelogin

qbec's output (for any subcommand communicating with the cluster):

setting cluster to kubernetes
setting context to oidc@kubernetes
✘ exec plugin: invalid apiVersion "client.authentication.k8s.io/v1beta1"

Maybe it's time to update client-go? The set of accepted exec-plugin apiVersions is baked into the vendored client-go, so an older dependency rejects newer kubeconfigs like this one.

Feature request: qbec.io/update-policy annotation

Hi, I was just thinking about how to protect important resources, e.g. PersistentVolumeClaims, from removal. In my case I want to be able to create such resources but never remove them under any circumstances.

My idea is to specify a special annotation on such resources:

 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   annotations:
     qbec.io/component: pvc
+    qbec.io/update-policy: DoNotRemove
   labels:
     qbec.io/application: myapp
     qbec.io/environment: devel
   name: myclaim
 spec:
   accessModes:
   - ReadWriteOnce
   resources:
     requests:
       storage: 8Gi
   storageClassName: myclass
   volumeMode: Filesystem

I think we can support a few more policies:

  • Normal - the standard policy, used by default if the annotation is not set
  • DoNotUpdate - might be useful for initial Jobs
  • DoNotRemove - might be useful for PVCs

On the client we can show a warning like:

[warn]: delete persistentvolumeclaim myclaim (source pvc): skipping due to DoNotRemove policy being set

and add skipped to the final stats:

stats:
  skipped:
  - persistentvolumeclaim myclaim (source pvc)

And add a --force flag to handle these resources too.

qbec.io/defaultNs does not respect the value from --force:k8s-namespace

The variable qbec.io/defaultNs always provides the value specified by spec.environments.<env name>.defaultNamespace in qbec.yaml and does not respect the override value from --force:k8s-namespace. The expected behavior, from the documentation:

qbec.io/defaultNs - the default namespace in use. This is typically picked from the environment definition, possibly changed for app tags, or the value forced from the command line using the --force:k8s-namespace option.
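
For context, components typically read this value through an external variable that qbec injects into the jsonnet VM, so a stale value propagates into every object namespaced this way. A minimal sketch (object names hypothetical):

// Sketch: consuming the default namespace inside a component.
// qbec injects values such as qbec.io/env and qbec.io/defaultNs as
// jsonnet external variables.
local defaultNs = std.extVar('qbec.io/defaultNs');
{
  apiVersion: 'v1',
  kind: 'ConfigMap',
  metadata: { name: 'example', namespace: defaultNs },
  data: {},
}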

QBEC fails on empty annotations

Example component:

apiVersion: v1
kind: Service
metadata:
  name: asd-stolon-proxy
  labels:
    app: stolon
    chart: stolon-1.1.2
    release: asd
    heritage: Tiller
  annotations:
spec:
  type: ClusterIP
  ports:
    - name: proxy
      port: 5432
      protocol: TCP
      targetPort: 5432
      
  selector:
    app: stolon
    release: asd
    component: stolon-proxy
# qbec show default
✘ /v1, Kind=Service, Name=asd-stolon-proxy: .metadata.annotations accessor error: <nil> is of the type <nil>, expected map[string]interface{}
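
A possible component-side workaround (a sketch, not an official fix): an empty annotations: key in YAML parses to null, so pruning null and empty fields before emitting the object avoids the accessor error. The import path below is hypothetical:

// Sketch: std.prune recursively drops null and empty fields, so the
// null `annotations` produced by the chart never reaches qbec.
local svc = import 'service-from-chart.json';
std.prune(svc)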

feature request: allow nested environments

Hi,

Ksonnet supported nested environments, e.g. us-west/staging, where us-west/staging inherits parameters from us-west. It would be nice to add this capability to qbec as well; a sketch of a possible emulation follows.
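
Until something like this exists, the inheritance can be emulated with plain jsonnet object composition in params.libsonnet. A sketch, with hypothetical environment and parameter names:

// Sketch: 'us-west-staging' inherits every parameter from 'us-west'
// and overrides only what differs.
local usWest = {
  components: {
    app: { replicas: 3, region: 'us-west' },
  },
};
{
  'us-west': usWest,
  'us-west-staging': usWest {
    components+: {
      app+: { replicas: 1 },
    },
  },
}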

Thank you!

Add oidc authorization support

Hi, when I try to run the qbec apply command, I get an error:

✘ No Auth Provider found for name "oidc"

version:

qbec version: 0.6.1
jsonnet version: v0.11.2
go version: 1.10.3
commit: a5dd69e

qbec shows it is applying changes on every run when nothing is really changing

$ qbec version
qbec version: 0.7.5
jsonnet version: v0.13.0
go version: 1.12.5
commit: e4d3f78
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.7", GitCommit:"4683545293d792934a7a7e12f2cc47d20b2dd01b", GitTreeState:"clean", BuildDate:"2019-06-06T01:46:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.8", GitCommit:"7d3d6f113e933ed1b44b78dff4baf649258415e5", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:16Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
$ qbec validate infra -c grafana
setting cluster to gke_myproject_europe-north1_infra
setting context to gke_myproject_europe-north1_infra
cluster metadata load took 209ms
1 components evaluated in 772ms
✔ configmaps grafana-dashboards -n default (source grafana) is valid
✔ serviceaccounts grafana -n default (source grafana) is valid
✔ secrets grafana-datasources -n default (source grafana) is valid
✔ deployments grafana -n default (source grafana) is valid
✔ services grafana -n default (source grafana) is valid
✔ persistentvolumeclaims grafana-storage -n default (source grafana) is valid
---
stats:
  valid: 6

command took 1.14s
$ qbec apply infra -n -c grafana
setting cluster to gke_myproject_europe-north1_infra
setting context to gke_myproject_europe-north1_infra
cluster metadata load took 210ms
1 components evaluated in 763ms
5 components evaluated in 715ms
[dry-run] sync deployments grafana -n default (source grafana)
kind: application/strategic-merge-patch+json
operation: update object
patch: |-
  {
      "spec": {
          "template": {
              "spec": {
                  "$setElementOrder/containers": [
                      {
                          "name": "grafana"
                      }
                  ],
                  "containers": [
                      {
                          "$setElementOrder/volumeMounts": [
                              {
                                  "mountPath": "/var/lib/grafana"
                              },
                              {
                                  "mountPath": "/etc/grafana/provisioning/datasources"
                              },
                              {
                                  "mountPath": "/etc/grafana/provisioning/dashboards"
                              }
                          ],
                          "name": "grafana",
                          "volumeMounts": [
                              {
                                  "mountPath": "/var/lib/grafana",
                                  "readOnly": false
                              },
                              {
                                  "mountPath": "/etc/grafana/provisioning/datasources",
                                  "readOnly": false
                              },
                              {
                                  "mountPath": "/etc/grafana/provisioning/dashboards",
                                  "readOnly": false
                              }
                          ]
                      }
                  ]
              }
          }
      }
  }
source: open API

server objects load took 1.451s
---
stats:
  same: 5
  updated:
  - deployments grafana -n default (source grafana)

** dry-run mode, nothing was actually changed **
command took 3.56s

And applying without -n shows it changing these fields.

This is the generated YAML:

$ qbec show infra  -c grafana -k deployment
1 components evaluated in 732ms
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  annotations:
    qbec.io/component: grafana
  labels:
    app: grafana
    qbec.io/application: infra
    qbec.io/environment: infra
  name: grafana
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - env:
        - name: GF_SERVER_ROOT_URL
          value: https://grafana.myproject.fi
        - name: GF_SERVER_DOMAIN
          value: grafana.myproject.fi
        image: grafana/grafana:6.2.1
        name: grafana
        ports:
        - containerPort: 3000
          name: http
        readinessProbe:
          httpGet:
            path: /api/health
            port: http
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
          readOnly: false
        - mountPath: /etc/grafana/provisioning/datasources
          name: grafana-datasources
          readOnly: false
        - mountPath: /etc/grafana/provisioning/dashboards
          name: grafana-dashboards
          readOnly: false
      nodeSelector:
        beta.kubernetes.io/os: linux
      securityContext:
        fsGroup: 472
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: grafana
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: grafana-storage
      - name: grafana-datasources
        secret:
          secretName: grafana-datasources
      - configMap:
          name: grafana-dashboards
        name: grafana-dashboards

command took 740ms

Actually, running kubectl apply -f on this YAML also shows on every run:

deployment.apps/grafana configured
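
One plausible explanation (not confirmed in this thread): every entry in the patch sets readOnly: false, which is the Kubernetes default for volumeMounts. The API server omits default-valued fields when serializing the live object, so the configured and live objects never converge and the same patch is recomputed on every run. Under that assumption, a component-side workaround is to leave such fields out entirely:

// Sketch: omit default-valued fields such as readOnly: false so the
// configured object matches what the server stores.
local mount(name, path) = { name: name, mountPath: path };
[
  mount('grafana-storage', '/var/lib/grafana'),
  mount('grafana-datasources', '/etc/grafana/provisioning/datasources'),
  mount('grafana-dashboards', '/etc/grafana/provisioning/dashboards'),
]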

Add an option to specify kubeconfig context instead of server

When creating and tearing down throwaway clusters (e.g. for testing cluster config) whose apiserver URLs contain IP addresses that differ each time, it's rather inconvenient to update server: in qbec.yaml that often. It can be scripted, of course, but you still end up with uncommitted changes in your git repo, etc. Maybe it should be possible to specify e.g. a context name instead, or something like that?

devise a framework for per-object tweaking of qbec behavior

The known use case today is to be able to control GC of transient objects like jobs and one-off pods.

The approach would likely be for the user to declare qbec-namespaced annotations on objects, which qbec can then respect.

Continue to document additional cases where this may be required in this ticket.

Figure out code coverage for qbec

As in:

  • Code coverage is computed for master as well as PRs
  • Code coverage up/down percentage is reported when a PR is submitted
  • Bonus: a way to have detailed code coverage reports clickable from PR/master

qbec validate fails to validate custom resources

andor@titude:~/work/kube$ qbec validate infra -c cert-manager
setting cluster to gke__europe-north1_infra
setting context to gke__europe-north1_infra
cluster metadata load took 303ms
✔ apiservices v1beta1.admission.certmanager.k8s.io (source cert-manager) is valid
✔ clusterroles cert-manager (source cert-manager) is valid
✔ clusterroles cert-manager-webhook:webhook-requester (source cert-manager) is valid
✔ clusterrolebindings cert-manager (source cert-manager) is valid
✔ clusterrolebindings cert-manager-cainjector (source cert-manager) is valid
✔ clusterroles cert-manager-cainjector (source cert-manager) is valid
✔ clusterrolebindings cert-manager-webhook:auth-delegator (source cert-manager) is valid
✔ customresourcedefinitions challenges.certmanager.k8s.io (source cert-manager) is valid
✔ customresourcedefinitions certificates.certmanager.k8s.io (source cert-manager) is valid
✔ clusterroles cert-manager-edit (source cert-manager) is valid
✔ customresourcedefinitions clusterissuers.certmanager.k8s.io (source cert-manager) is valid
✔ namespaces cert-manager (source cert-manager) is valid
? certificates cert-manager-webhook-ca -n cert-manager (source cert-manager): no schema found, cannot validate
? certificates cert-manager-webhook-webhook-tls -n cert-manager (source cert-manager): no schema found, cannot validate
✔ clusterroles cert-manager-view (source cert-manager) is valid
✔ customresourcedefinitions orders.certmanager.k8s.io (source cert-manager) is valid
✔ deployments cert-manager -n cert-manager (source cert-manager) is valid
✔ deployments cert-manager-cainjector -n cert-manager (source cert-manager) is valid
? issuers cert-manager-webhook-ca -n cert-manager (source cert-manager): no schema found, cannot validate
? issuers cert-manager-webhook-selfsign -n cert-manager (source cert-manager): no schema found, cannot validate
✔ validatingwebhookconfigurations cert-manager-webhook (source cert-manager) is valid
✔ deployments cert-manager-webhook -n cert-manager (source cert-manager) is valid
✔ serviceaccounts cert-manager -n cert-manager (source cert-manager) is valid
✔ serviceaccounts cert-manager-cainjector -n cert-manager (source cert-manager) is valid
✔ serviceaccounts cert-manager-webhook -n cert-manager (source cert-manager) is valid
✔ customresourcedefinitions issuers.certmanager.k8s.io (source cert-manager) is valid
✔ rolebindings cert-manager-webhook:webhook-authentication-reader -n kube-system (source cert-manager) is valid
✔ services cert-manager-webhook -n cert-manager (source cert-manager) is valid
---
stats:
  unknown:
  - certificates cert-manager-webhook-ca -n cert-manager (source cert-manager)
  - certificates cert-manager-webhook-webhook-tls -n cert-manager (source cert-manager)
  - issuers cert-manager-webhook-ca -n cert-manager (source cert-manager)
  - issuers cert-manager-webhook-selfsign -n cert-manager (source cert-manager)
  valid: 24

command took 490ms

Cannot specify name for Helm v3 releases

[helm template] cd /home/kvaps/git/infrastructure/deployments/prometheus-operator/components && helm template ../vendor/helm-charts/stable/prometheus-operator --name monitoring --namespace monitoring --values -
[helm template] cd /home/kvaps/git/infrastructure/deployments/prometheus-operator/components && helm template ../vendor/helm-charts/stable/prometheus-operator --name monitoring --namespace monitoring --values -
Error: unknown flag: --name
Error: unknown flag: --name
✘ evaluate 'cluster-resources': RUNTIME ERROR: run helm template command: exit status 1
        components/cluster-resources.jsonnet:(5:23)-(14:2)      thunk <renderedChart> from <$>
        components/cluster-resources.jsonnet:20:12-25   $

        During evaluation

evaluate 'prometheus-operator': RUNTIME ERROR: run helm template command: exit status 1
        components/prometheus-operator.jsonnet:(5:23)-(14:2)    thunk <renderedChart> from <$>
        components/prometheus-operator.jsonnet:21:12-25 $

        During evaluation
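
Helm 3 removed the --name flag: the release name became a positional argument, with --name-template as an alternative, so the command line shown above is only valid for Helm 2. That command is produced by qbec's expandHelmTemplate native function; a sketch of the calling side, with values elided and option names as assumed from qbec's helm support docs:

// Sketch: the jsonnet call that triggers the failing helm invocation.
local expandHelmTemplate = std.native('expandHelmTemplate');
expandHelmTemplate(
  '../vendor/helm-charts/stable/prometheus-operator',
  {},  // chart values elided
  {
    namespace: 'monitoring',
    name: 'monitoring',  // qbec maps this to --name, which Helm 3 rejects
    thisFile: std.thisFile,
  }
)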

unknown hook crd-install

Trying to base a component on the prometheus-operator helm chart, but it fails to create the CRDs because they use an unknown hook and are stripped from the output.

[helm template] cd /home/russell/projects/docker-configs/k8s-common/prometheus/components && helm template ../prometheus-operator --name-template prometheus --namespace prometheus --values -
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
1 components evaluated in 376ms
command took 480ms

Qbec apply label sync does not remove unspecified labels

When migrating from ks to qbec for managing deployment of kubernetes objects, it appears that old labels added by ks are still preserved after an apply.

In the output of qbec we see something like

sync prometheusrules myrules (source prometheusrules)
kind: application/merge-patch+json
operation: update object
patch: |-
{
    "metadata": {
        "annotations": {
            "qbec.io/component": "mycomponent",
            "qbec.io/last-applied": "..."
        },
        "labels": {
            "app": "myapp",
            "qbec.io/application": "myapp",
            "qbec.io/environment": "myenv",
            "version": "0.9.3"
        }
    }
}

in the corresponding object we still see the original labels that are no longer specified

  metadata:
    annotations:
      ksonnet.io/managed: '...'
      qbec.io/component: mycomponent
      qbec.io/last-applied: ..
    creationTimestamp: "2019-01-25T00:38:50Z"
    generation: 4
    labels:
      app: myapp
      app.kubernetes.io/deploy-manager: ksonnet
      ksonnet.io/component: mycomponent
      qbec.io/application: myapp
      qbec.io/environment: myenv

so it appears there are leftover annotations and labels from the previous deploy of this object that did not get removed.
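
This is consistent with how JSON merge patch (RFC 7386) works: a key is removed only when the patch explicitly sets it to null, and keys the patch does not mention are left untouched. A sketch of a patch that would actually delete the leftover ksonnet metadata, written in jsonnet/JSON syntax with the key names taken from the live object above:

// Sketch: an RFC 7386 merge patch must name each key it removes.
{
  metadata: {
    annotations: { 'ksonnet.io/managed': null },
    labels: {
      'app.kubernetes.io/deploy-manager': null,
      'ksonnet.io/component': null,
    },
  },
}

So to remove labels it no longer owns, qbec would have to remember which keys it applied previously (e.g. via its last-applied annotation) and emit explicit nulls for the ones that disappeared.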

Optional `helm dependency update` for helm charts

Some helm applications may not work without helm dependency update.
For example, stable/cert-manager will not work without it.

setting cluster to stage
setting context to stage
cluster metadata load took 9ms
[helm template] cd /tmp/cert-manager/components && helm template ../vendor/helm-charts/stable/cert-manager --name cert-manager --namespace kube-system --values -
Error: found in requirements.yaml, but missing in charts/ directory: webhook
✘ evaluate 'cert-manager': RUNTIME ERROR: run helm template command: exit status 1
	components/cert-manager.jsonnet:(5:1)-(14:2)	

After running

cd /tmp/cert-manager/vendor/helm-charts/stable/cert-manager && helm dependency update

everything works fine.

check for basic duplication of objects across components at show/ apply

If, for example, you copy x.jsonnet to y.jsonnet in the components folder, nothing ever tells you that objects have been duplicated. This leads to head-scratching behavior as one of the two objects "wins" in the update sequence.

There are two levels of dup checking:

  • check that no two objects have exactly the same [api-group, kind, namespace, name] attributes
  • an improved check needing server metadata that catches duplication across extensions/v1beta1 and apps/v1 for example

qbec show should at least implement the first check (and potentially the second on user request) whereas diff, apply, and friends should do the full version of the dup check.
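
A sketch of the first-level check, written in jsonnet purely for illustration (qbec itself is implemented in Go):

// Sketch: compute the [api-group, kind, namespace, name] key for each
// object and report any key that occurs more than once.
local objectKey(o) =
  local parts = std.split(o.apiVersion, '/');
  local group = if std.length(parts) > 1 then parts[0] else '';
  local ns = if std.objectHas(o.metadata, 'namespace') then o.metadata.namespace else '';
  std.join('/', [group, o.kind, ns, o.metadata.name]);

local duplicateKeys(objects) =
  local keys = [objectKey(o) for o in objects];
  std.set([k for k in keys if std.count(keys, k) > 1]);

duplicateKeys([
  { apiVersion: 'v1', kind: 'ConfigMap', metadata: { name: 'x' } },
  { apiVersion: 'v1', kind: 'ConfigMap', metadata: { name: 'x' } },
])  // evaluates to ['/ConfigMap//x']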

qbec show should be able to generate kubectl applyable output

Right now (0.7.5) qbec show generates output with two issues:

  1. DOS newlines;
  2. qbec-related data at the beginning and at the end of the output.
$ qbec show --colors=false infra -S -c grafana > stuff.yaml

$ file stuff.yaml
stuff.yaml: ASCII text, with very long lines, with CRLF line terminators

$ head -3 stuff.yaml 
1 components evaluated in 415ms
---
apiVersion: v1

$ tail -3 stuff.yaml
  namespace: default

command took 420ms

Duplicate objects when generating multiple objects of same kind with `generateName`

Hello,

I see the duplication check was recently added, which is a really nice feature. Would it make sense to keep it behind a flag, seeing as it breaks existing configs that relied on this "feature" not being present?

Should the case be supported where you have multiple objects of the same type with generateName being set?

For instance, you might have a component which emits a list of pods, which you want to apply always, and get new pods.

Currently that's not possible because the duplication check comes into effect.

Example component:

{
  pod1: {
    apiVersion: 'v1',
    kind: 'Pod',
    metadata: {
      generateName: 'pod1-',
    },
    spec: {},
  },
  pod2: {
    apiVersion: 'v1',
    kind: 'Pod',
    metadata: {
      generateName: 'pod2-',
    },
    spec: {},
  },
}

Thanks for qbec .. currently evaluating it to replace our own internal tool, which maps nicely to qbec's feature set and was also greatly inspired by ksonnet, without the craziness 😂

INTERNAL ERROR: (CRASH) Desugaring desugared object

To reproduce

Try to render the following component:

local secretEnvs = [
  {
    name: x.name,
    valueFrom: {
      secretKeyRef: { key: x.key, name: 'demo-deploy' },
    },
  }
  for x in [
    { name: 'S3_BUCKET', key: 's3Bucket' },
    { name: 'S3_URL', key: 's3URL' },
    { name: 'S3_ACCESS_KEY', key: 's3AccessKey' },
    { name: 'S3_SECRET_KEY', key: 's3SecretKey' },
  ]
];

[
  {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: {
      name: 'demo-deploy',
      labels: {
        app: 'demo-deploy',
      },
    },
    spec: {
      replicas: 1,
      selector: {
        matchLabels: {
          app: 'demo-deploy',
        },
      },
      template: {
        metadata: {
          labels: {
            app: 'demo-deploy',
          },
        },
        spec: {
          containers: [
            {
              name: 'main',
              image: 'nginx:stable',
              env: secretEnvs,
            },
          ],
        },
      },
    },
  },
]

jsonnet is working fine:

jsonnet --version
Jsonnet commandline interpreter v0.13.0

qbec doesn't:

qbec version
qbec version: 0.7.5
jsonnet version: v0.13.0
go version: 1.12.5
commit: e4d3f78

error:

✘ evaluate 'hello': INTERNAL ERROR: (CRASH) Desugaring desugared object
goroutine 7 [running]:
runtime/debug.Stack(0xc00075a6d8, 0x120f7e0, 0x15ba5d0)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x9d
github.com/google/go-jsonnet.(*VM).evaluateSnippet.func1(0xc00075baf0)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/vm.go:129 +0x60
panic(0x120f7e0, 0x15ba5d0)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/google/go-jsonnet.desugar(0xc00021c880, 0x0, 0x0, 0x0)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/desugarer.go:541 +0x8db
github.com/google/go-jsonnet.desugar(0xc000237590, 0x0, 0x0, 0x0)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/desugarer.go:358 +0xe3c
github.com/google/go-jsonnet.desugar(0xc00003b5a0, 0x0, 0x1606620, 0xc000478e60)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/desugarer.go:328 +0x1c54
github.com/google/go-jsonnet.desugar(0xc00003b5a0, 0x0, 0x1, 0x1b)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/desugarer.go:370 +0x24b8
github.com/google/go-jsonnet.desugarLocalBinds(0xc00003b590, 0x1, 0x1, 0x0, 0x5, 0xc0002fbee0)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/desugarer.go:295 +0xd7
github.com/google/go-jsonnet.desugar(0xc0003369a0, 0x0, 0xc000116480, 0x420)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/desugarer.go:504 +0xa85
github.com/google/go-jsonnet.desugarFile(...)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/desugarer.go:587
github.com/google/go-jsonnet.snippetToAST(0xc0002fbbe0, 0x18, 0xc000116480, 0x420, 0x40b899, 0x7f4d305d2008, 0xc0000c3841, 0xc000116480)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/vm.go:207 +0xb0
github.com/google/go-jsonnet.(*VM).evaluateSnippet(0xc0000c4f00, 0xc0002fbbe0, 0x18, 0xc000116480, 0x420, 0x0, 0x0, 0x0, 0x0, 0x0)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/vm.go:132 +0x9c
github.com/google/go-jsonnet.(*VM).EvaluateSnippet(0xc0000c4f00, 0xc0002fbbe0, 0x18, 0xc000116480, 0x420, 0x420, 0x0, 0x7, 0xc0002dec30)
        /Users/kanantheswaran/go/pkg/mod/github.com/google/[email protected]/vm.go:160 +0x69
github.com/splunk/qbec/internal/eval.evalComponent(0xc0003502b4, 0x4, 0x0, 0x0, 0x7ffef1d58b01, 0x7, 0xc0003502b8, 0x7, 0xc0002dec30, 0x0, ...)
        /Users/kanantheswaran/go/qbec/internal/eval/eval.go:203 +0x1ea
github.com/splunk/qbec/internal/eval.evalComponents.func1(0xc000293140, 0xc0000c37a0, 0xc0003502b4, 0x4, 0x0, 0x0, 0x7ffef1d58b01, 0x7, 0xc0003502b8, 0x7, ...)
        /Users/kanantheswaran/go/qbec/internal/eval/eval.go:257 +0x1cc
created by github.com/splunk/qbec/internal/eval.evalComponents
        /Users/kanantheswaran/go/qbec/internal/eval/eval.go:254 +0x2f0

Please report a bug here: https://github.com/google/go-jsonnet/issues

qbec is trying to remove read-only fields from Service and PVC

These objects were created with qbec show -S infra | kubectl apply -f -

andor@titude:~/work/kube$ qbec diff infra -c grafana
setting cluster to gke__europe-north1_infra
setting context to gke__europe-north1_infra
cluster metadata load took 475ms
--- live persistentvolumeclaims grafana-storage -n default (source grafana) (source: fallback - live object with some attributes removed)
+++ config persistentvolumeclaims grafana-storage -n default (source grafana)
@@ -2,14 +2,7 @@
 kind: PersistentVolumeClaim
 metadata:
   annotations:
-    kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"qbec.io/component":"grafana"},"labels":{"qbec.io/application":"infra","qbec.io/environment":"infra"},"name":"grafana-storage","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"2Gi"}}}}
-    pv.kubernetes.io/bind-completed: "yes"
-    pv.kubernetes.io/bound-by-controller: "yes"
     qbec.io/component: grafana
-    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
-  finalizers:
-  - kubernetes.io/pvc-protection
   labels:
     qbec.io/application: infra
     qbec.io/environment: infra
@@ -18,11 +11,7 @@
 spec:
   accessModes:
   - ReadWriteOnce
-  dataSource: null
   resources:
     requests:
       storage: 2Gi
-  storageClassName: standard
-  volumeMode: Filesystem
-  volumeName: pvc-a84ff09e-9266-11e9-ab85-42010aa60144
 


--- live services grafana -n default (source grafana) (source: fallback - live object with some attributes removed)
+++ config services grafana -n default (source grafana)
@@ -2,8 +2,6 @@
 kind: Service
 metadata:
   annotations:
-    kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"qbec.io/component":"grafana"},"labels":{"app":"grafana","qbec.io/application":"infra","qbec.io/environment":"infra"},"name":"grafana","namespace":"default"},"spec":{"ports":[{"name":"http","nodePort":30910,"port":3000,"targetPort":"http"}],"selector":{"app":"grafana"},"type":"NodePort"}}
     qbec.io/component: grafana
   labels:
     app: grafana
@@ -12,16 +10,12 @@
   name: grafana
   namespace: default
 spec:
-  clusterIP: 10.3.241.2
-  externalTrafficPolicy: Cluster
   ports:
   - name: http
     nodePort: 30910
     port: 3000
-    protocol: TCP
     targetPort: http
   selector:
     app: grafana
-  sessionAffinity: None
   type: NodePort
 


waiting for deletion list to be returned
server objects load took 1.915s
---
stats:
  changes:
  - persistentvolumeclaims grafana-storage -n default (source grafana)
  - services grafana -n default (source grafana)
  same: 4

✘ 2 object(s) different
command took 7.31s

And when I'm trying to apply the changes:

andor@titude:~/work/kube$ qbec apply --yes infra -c grafana 
setting cluster to gke__europe-north1_infra
setting context to gke__europe-north1_infra
cluster metadata load took 282ms

will synchronize 6 object(s)

✘ sync persistentvolumeclaims grafana-storage -n default (source grafana): PersistentVolumeClaim "grafana-storage" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
command took 6.51s

Versions:

andor@titude:~/work/kube$ qbec version
qbec version: 0.7.0
jsonnet version: v0.11.2
go version: 1.12.5
commit: a91b4a2
andor@titude:~/work/kube$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.7", GitCommit:"4683545293d792934a7a7e12f2cc47d20b2dd01b", GitTreeState:"clean", BuildDate:"2019-06-06T01:46:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.6-gke.13", GitCommit:"fcbc1d20b6bca1936c0317743055ac75aef608ce", GitTreeState:"clean", BuildDate:"2019-06-19T20:50:07Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}

Allow passing variables via TLA

Per the Jsonnet tutorial, there are two ways to pass values in from the outside: external variables and top-level arguments (TLAs). Unfortunately, qbec doesn't work with TLAs:

qbec show --vm:tla-str "image=xxxxx" -v 10 play1
Eval components:
local parseYaml = std.native('parseYaml');
local parseJson = std.native('parseJson');
{
  'gatewayrouter': import 'components/gatewayrouter.jsonnet'
}
✘ evaluate components: RUNTIME ERROR: couldn't manifest function in JSON output.
	During manifestation	

Per our discussion on Slack, the import statement is generated by qbec itself and cannot be changed to import the component and call it as a function.

I would like a mechanism to allow me to use TLAs in qbec
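
Until TLAs are supported, the working alternative is an external variable, which qbec does pass through to components. A minimal sketch, with hypothetical variable and component names:

// components/gatewayrouter.jsonnet -- consuming the value as an
// external variable instead of a TLA.
local image = std.extVar('image');
{
  apiVersion: 'v1',
  kind: 'Pod',
  metadata: { name: 'gatewayrouter' },
  spec: { containers: [{ name: 'main', image: image }] },
}

invoked as: qbec show --vm:ext-str image=xxxxx play1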

QBEC can't update CustomResource with validation on v1.15

Bug report

Environment

Kubernetes:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Qbec:

qbec version: 0.7.2
jsonnet version: v0.13.0
go version: 1.12.5
commit: ba28218

Steps to reproduce:

cd /tmp
qbec init memcached-operator
cd memcached-operator

cat > components/crd.jsonnet <<EOT
{
  apiVersion: 'apiextensions.k8s.io/v1beta1',
  kind: 'CustomResourceDefinition',
  metadata: { name: 'memcacheds.cache.example.com' },
  spec: {
    group: 'cache.example.com',
    names: {
      kind: 'Memcached',
      listKind: 'MemcachedList',
      plural: 'memcacheds',
      singular: 'memcached',
    },
    scope: 'Namespaced',
    subresources: { status: {} },
    validation: {
      openAPIV3Schema: {
        properties: {
          spec: {
            properties: {
              configure: { type: 'boolean' },
              size: {
                type: 'integer',
              },
            },
            required: ['size'],
            type: 'object',
          },
        },
        type: 'object',
      },
    },
    version: 'v1beta1',
    versions: [
      {
        name: 'v1beta1',
        served: true,
        storage: true,
      },
    ],
  },
}
EOT

cat > components/cr.jsonnet <<EOT
[
{
  apiVersion: "cache.example.com/v1beta1",
  kind: "Memcached",
  metadata: { name: "example-memcached" },
  spec: { size: 3 }
}
]
EOT

qbec apply default -c crd --yes
qbec apply default -c cr --yes
sed -i 's/size: 3/size: 2/' components/cr.jsonnet
qbec apply default -c cr --yes

Error:

setting cluster to stage
setting context to stage
cluster metadata load took 42ms

will synchronize 1 object(s)

✘ sync memcacheds example-memcached -n test (source cr): the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json
command took 260ms

decouple remote listing from GC

This is a refactor-only change to put us in a place to implement a fix for #29

Currently we do the following:

  • calculate local object list
  • call client.ListExtraObjects that lists all server objects and subtracts the local list from it (this is done in a background goroutine).
  • while above is being done in background, we diff/ apply local objects
  • then we use the output of list extra objects for deletes.

The set of local objects is needed for two reasons:

  • it is the set used to figure out how many namespaces to query, whether cluster objects should be queried etc.
  • it is used as the actual list to be subtracted from the remote list

The first use case of the local list is still valid, but we should no longer use it for the second use case when objects with generated names are involved. That is, when objects with generated names are present, their real names for the purposes of list subtraction are only known after apply has run.

We could simplify everything by only running remote queries after apply has run but this has significant performance impact because we are no longer able to parallelize remote listing with other expensive operations.

Therefore we need to make changes as follows:

  • Client interface has 2 methods, ListRemoteObjects (as opposed to ListExtraObjects) that returns the full list of objects from the server and SubtractObjects that subtracts a provided list from objects from the server's list. The subtraction has to be done by the remote package since it is the only one that understands metadata aliases etc. (extensions/v1beta vs apps/v1 and so on)

  • Start the query as usual using the local objects just to determine the remote query context

  • After mainline processing is done and remote objects have been retrieved, call SubtractObjects with the list of actual objects whose generated names have been resolved.

Once this is done we will be in a position to correctly implement GC for objects with generated names.

qbec env add / qbec env del

Need automation for adding and removing environments, including describing them in the environments directory and in the params.libsonnet and qbec.yaml files.

apply should show prompt in CI environments

When running the apply command in a CI environment, there is no prompt for the user to continue like there is on my local machine. Instead of

will synchronize 12 object(s)

Do you want to continue [y/n]: ^C
✘ Interrupt

I get

will synchronize 12 object(s)

✘ EOF

and then my CI job fails. Passing in the --yes flag to apply solves this problem, but it was not apparent what the problem was because the prompt was not displayed in the CI logs.

Add option to specify app-tag template

The app-tag works fine for CI/CD use cases, but sometimes teams have specific naming needs. E.g., on my team we use a user--resourceName tagging convention. While app-tag provides similar behaviour by adding the app-tag suffix, since it only supports a suffix it cannot be used for our use case. It would be nice to have a template option in qbec; something like app-tag-template: user-{app-tag}-{name} would help address such dynamic naming conventions.

Warning output can quickly obscure interactive prompts

I was following the awesome docs at https://qbec.io/userguide/tour/, and when I got to qbec apply default, it seemed like qbec was hanging.

After getting the source and running in a debugger, I saw that the app was sitting on a call to s, err := inst.Readline().

It turns out that the prompt text was in fact displayed, but then immediately obscured by many warnings like:

[warn] not authorized to list config.istio.io/v1alpha2, Kind=authorization, error ignored
[warn] not authorized to list config.istio.io/v1alpha2, Kind=prometheus, error ignored

Knowing this now, I can easily work around the issue by pressing y or n, even if Do you want to continue [y/n] isn't visible on the screen, or by using --yes, but this might be as confusing to other new users as it was to me.

initial create of a CRD and a custom resource for it can fail

Although qbec can handle dynamically created CRDs and custom resources based on them in a single run, the initial create of the custom resource can fail because there appears to be a small lag before the discovery interface can see the new resource.

We may need to wait after a CRD is created and ensure that it is discoverable before proceeding to apply other objects.

The test case that can be used to see this behavior is the same one that @kvaps documented in #51

Packaging applications

Hi, I have a few cool projects and I want to share them with the community.
I want to use qbec to provide the set of end components of my application.

The problem is that I don't know how to package my application for end users without depriving them of flexibility.
I want to allow them to use the same set of components and override their default parameters. At the same time, I want to make it easy to update my components.

At the moment, the best implementation I've seen is how hugo and hugo themes work.
E.g., each hugo theme repeats the hierarchy of the main site, but in its own folder.
So you can easily add a theme using a git submodule and immediately start using it.

I want something like this, but for qbec.
E.g., the user runs:

qbec init app
cd app
git init
git submodule add https://github.com/kvaps/some-qbec-application pkg/some-qbec-application

And now he can see all the components provided by the package:

qbec show default

After this, he can add his own new components in the usual way, just by placing them into the components directory of his project. This way he doesn't need to modify the parent repository, and he keeps the simplicity of upgrading and of overriding default parameters; see the sketch below.
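
A sketch of what that override step could look like on the consumer side (repository and file paths are hypothetical):

// components/some-app.jsonnet -- re-export a component from the
// vendored package and override its defaults locally.
(import '../pkg/some-qbec-application/components/app.libsonnet') {
  spec+: { replicas: 2 },
}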

What do you think about this? Maybe there is a more elegant way?

feature request: resolveDockerImage function

It would be nice to add an option for resolving docker images to their sha256 digest:

Example proposal:

{
   image: std.resolveDockerImage('ubuntu:18.04'),
}

will be translated to:

{
  image: 'docker.io/library/ubuntu@sha256:017eef0b616011647b269b5c65826e2e2ebddbe5d1f8c1e56b3599fb14fabec8',
}

Or we can use an external handler, e.g. skopeo:

  skopeo inspect docker://docker.io/ubuntu:18.04

github release flow is not filename-compatible with what was happening before

File names changed from qbec-<plat>-<arch>.tar.gz to qbec_<version>_<plat>_<arch>.tar.gz (.zip for windows)

Note the introduction of the version and underscores instead of dashes.

Checksum file name changed from sha256-checksums.txt to checksums.txt and refers to new-style file names.

Can both of these be fixed with appropriate options to goreleaser? Otherwise, we are making downloads of qbec dependent on the version number of its release.

I have manually fixed the above issues for v0.9.0

Deal with StorageClasses

StorageClass parameters can't be updated using the standard apply method; the object has to be removed and created again.

I'm not sure whether qbec should handle these remove/create actions for StorageClass:

--- live storageclasses linstor-1 (source storageclasses) (source: qbec annotation)
+++ config storageclasses linstor-1 (source storageclasses)
@@ -11,7 +11,6 @@
   ReplicasOnDifferent: side
   ReplicasOnSame: moonshot
   autoPlace: "2"
-  mountOpts: errors=remount-ro,discard
   storagePool: thindata
 provisioner: linstor.csi.linbit.com
 


waiting for deletion list to be returned
server objects load took 806ms
# qbec apply stage --yes
setting cluster to stage
setting context to stage
cluster metadata load took 21ms
1 components evaluated in 2ms

will synchronize 1 object(s)

1 components evaluated in 7ms
✘ sync storageclasses linstor-1 (source storageclasses): StorageClass.storage.k8s.io "linstor-1" is invalid: parameters: Forbidden: updates to parameters are forbidden.
command took 140ms

Add support for named contexts

Currently qbec matches environments in qbec.yaml to environments in the kube config using the server address. Our team uses minikube for local development and testing. Because minikube comes up on a potentially different IP address each time it starts, this behavior makes it difficult to use qbec with minikube without a variety of workarounds.

Ideally, qbec would support matching contexts by name in addition to the current behavior of matching by address.

don't try to GC default namespace

If someone accidentally declares the default namespace as one of the objects in a component and later removes it, qbec will try and delete it and fail, since k8s doesn't allow deleting the default namespace.

add --multi flag to jsonnet-qbec binary

This is a feature proposal.

I used jsonnet to render the kube-prometheus manifests. However, native jsonnet lacks the parseYaml function that I really need, so I started using the jsonnet-qbec tool, which is great, but now I miss the --multi argument that the native jsonnet binary has: it allows you to "Write multiple files to the directory, list files on stdout". It is very helpful with pipes when you have really large JSON output, something like:

jsonnet -J vendor --multi manifests "${1-example.jsonnet}" | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml; rm -f {}' -- {}
