microsoft / fabrikate

Making GitOps with Kubernetes easier one component at a time

License: MIT License

Go 97.02% Shell 1.77% Dockerfile 1.21%
kubernetes-cluster gitops cluster flux

fabrikate's Introduction

Fabrikate


Fabrikate helps make operating Kubernetes clusters with a GitOps workflow more productive. It allows you to write DRY resource definitions and configuration for multiple environments while leveraging the broad Helm chart ecosystem, capture higher level definitions into abstracted and shareable components, and enable a GitOps deployment workflow that both simplifies and makes deployments more auditable.

In particular, Fabrikate simplifies the frontend of the GitOps workflow: it takes a high level description of your deployment and a target environment configuration (e.g. qa or prod) and renders the Kubernetes resource manifests for that deployment utilizing templating tools like Helm. It is intended to run as part of a CI/CD pipeline such that every commit to your Fabrikate deployment definition triggers the generation of Kubernetes resource manifests, which an in-cluster GitOps pod like Weaveworks' Flux watches and reconciles with the current set of applied resource manifests in your Kubernetes cluster.
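As a rough sketch, a pipeline stage that uses Fabrikate might look like the following (the repo names are hypothetical; adapt them to your own definition repo and the manifest repo that Flux watches):

$ git clone https://github.com/myorg/my-fabrikate-definition    # hypothetical definition repo
$ cd my-fabrikate-definition
$ fab install                                                   # fetch remote components and charts
$ fab generate prod                                             # render manifests for the prod environment
$ cp -r generated/prod/ ../k8s-manifests/prod/                  # hypothetical manifest repo watched by Flux
$ cd ../k8s-manifests && git add . && git commit -m "Regenerate manifests" && git push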

Installation

You have a couple of options:

Official Release

Grab the latest release from the releases page, unzip it, and drop the fab binary somewhere in your $PATH.
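For example, on a Linux machine the steps look roughly like this (the asset name below is hypothetical; use the actual file name from the releases page for your platform):

$ curl -LO https://github.com/microsoft/fabrikate/releases/download/<version>/fab-<version>-linux-amd64.zip
$ unzip fab-<version>-linux-amd64.zip
$ sudo mv fab /usr/local/bin/    # or any other directory on your $PATH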

Building From Source

You have a couple of options to build from source:

Using go get:

Use go get to build and install the bleeding edge (i.e. develop) version into $GOPATH/bin:

(cd && GO111MODULE=on go get github.com/microsoft/fabrikate/cmd/fab@develop)

Cloning locally:

git clone https://github.com/microsoft/fabrikate
cd fabrikate
go build -o fab cmd/fab/main.go
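Either way, you can sanity check the resulting binary by printing its help text:

$ ./fab --help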

Getting Started

First, install the latest fab cli on your local machine from our releases, unzipping the appropriate binary and placing fab in your path. The fab cli tool, helm, and git are the only tools you need to have installed.

NOTE: Fabrikate supports Helm 3; do not use Helm 2.
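You can check which major version of Helm is on your path before continuing; Helm 3 reports a v3.x version string (the exact output below is just an example):

$ helm version --short
v3.6.3+gd506314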

Let's walk through building an example Fabrikate definition to see how it works in practice. First off, let's create a directory for our cluster definition:

$ mkdir mycluster
$ cd mycluster

The first thing I want to do is pull in a common set of observability and service mesh platforms so I can operate this cluster. My organization has settled on a cloud-native stack, and Fabrikate makes it easy to leverage reusable stacks of infrastructure like this:

$ fab add cloud-native --source https://github.com/microsoft/fabrikate-definitions --path definitions/fabrikate-cloud-native

Since our directory was empty, this creates a component.yaml file in this directory:

name: mycluster
subcomponents:
  - name: cloud-native
    type: component
    source: https://github.com/microsoft/fabrikate-definitions
    method: git
    path: definitions/fabrikate-cloud-native
    branch: master

A Fabrikate definition, like this one, always contains a component.yaml file in its root that defines how to generate the Kubernetes resource manifests for its directory tree scope.

The cloud-native component we added is a remote component backed by a git repo (the fabrikate-cloud-native definition in the fabrikate-definitions repo). Fabrikate definitions use remote definitions like this one to enable multiple deployments to reuse common components (like this cloud-native infrastructure stack) from a centrally updated location.

Looking inside this component at its own root component.yaml definition, you can see that it itself uses a set of remote components:

name: "cloud-native"
generator: "static"
path: "./manifests"
subcomponents:
  - name: "elasticsearch-fluentd-kibana"
    source: "../fabrikate-elasticsearch-fluentd-kibana"
  - name: "prometheus-grafana"
    source: "../fabrikate-prometheus-grafana"
  - name: "istio"
    source: "../fabrikate-istio"
  - name: "kured"
    source: "../fabrikate-kured"

Fabrikate recursively iterates component definitions, so as it processes this lower level component definition, it will in turn iterate the remote component definitions used in its implementation. Being able to mix in remote components like this makes Fabrikate deployments composable and reusable across deployments.

Let's look at the component definition for the elasticsearch-fluentd-kibana component:

{
  "name": "elasticsearch-fluentd-kibana",
  "generator": "static",
  "path": "./manifests",
  "subcomponents": [
    {
      "name": "elasticsearch",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/elasticsearch"
    },
    {
      "name": "elasticsearch-curator",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/elasticsearch-curator"
    },
    {
      "name": "fluentd-elasticsearch",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/fluentd-elasticsearch"
    },
    {
      "name": "kibana",
      "generator": "helm",
      "source": "https://github.com/helm/charts",
      "method": "git",
      "path": "stable/kibana"
    }
  ]
}

First, we see that components can be defined in JSON as well as YAML (as you prefer).

Second, we see that this component generates resource definitions. In particular, it will emit a set of static manifests from the path ./manifests and generate the set of resource manifests specified by the inlined Helm template definitions as it iterates your deployment definitions.

With generalized helm charts like the ones used here, it's often necessary to provide them with configuration values that vary by environment. This component provides a reasonable set of defaults for its subcomponents in config/common.yaml. Since this component is providing these four logging subsystems together as a "stack", or preconfigured whole, we can provide configuration to higher level parts based on this knowledge:

config:
subcomponents:
  elasticsearch:
    namespace: elasticsearch
    injectNamespace: true
    config:
      client:
        resources:
          limits:
            memory: "2048Mi"
  elasticsearch-curator:
    namespace: elasticsearch
    injectNamespace: true
    config:
      cronjob:
        successfulJobsHistoryLimit: 0
      configMaps:
        config_yml: |-
          ---
          client:
            hosts:
              - elasticsearch-client.elasticsearch.svc.cluster.local
            port: 9200
            use_ssl: False
  fluentd-elasticsearch:
    namespace: fluentd
    injectNamespace: true
    config:
      elasticsearch:
        host: "elasticsearch-client.elasticsearch.svc.cluster.local"
  kibana:
    namespace: kibana
    injectNamespace: true
    config:
      files:
        kibana.yml:
          elasticsearch.url: "http://elasticsearch-client.elasticsearch.svc.cluster.local:9200"

This common configuration, which applies to all environments, can be mixed with more specific configuration. For example, let's say that we were deploying this in Azure and wanted to utilize its managed-premium SSD storage class for Elasticsearch, but only in Azure deployments. We can build an azure configuration that does exactly that, and Fabrikate has a convenience command called set that makes it easy:

$ fab set --environment azure --subcomponent cloud-native.elasticsearch data.persistence.storageClass="managed-premium" master.persistence.storageClass="managed-premium"

This creates a file called config/azure.yaml that looks like this:

subcomponents:
  cloud-native:
    subcomponents:
      elasticsearch:
        config:
          data:
            persistence:
              storageClass: managed-premium
          master:
            persistence:
              storageClass: managed-premium

Naturally, an observability stack is just the base infrastructure we need, and our real goal is to deploy a set of microservices. Furthermore, let's assume that we want to be able to split the incoming traffic for these services between canary and stable tiers with Istio so that we can more safely launch new versions of the service.

There is a Fabrikate component for that as well called fabrikate-istio-service that we'll leverage to add this service, so let's do just that:

$ fab add simple-service --source https://github.com/microsoft/fabrikate-definitions --path definitions/fabrikate-istio

This component creates these traffic split services using the config applied to it. Let's create a prod config that does this for a prod cluster by creating config/prod.yaml and placing the following in it:

subcomponents:
  simple-service:
    namespace: services
    config:
      gateway: my-ingress.istio-system.svc.cluster.local
      service:
        dns: simple.mycompany.io
        name: simple-service
        port: 80
      configMap:
        PORT: 80
      tiers:
        canary:
          image: "timfpark/simple-service:441"
          replicas: 1
          weight: 10
          port: 80
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "1000m"
              memory: "512Mi"

        stable:
          image: "timfpark/simple-service:440"
          replicas: 3
          weight: 90
          port: 80
          resources:
            requests:
              cpu: "250m"
              memory: "256Mi"
            limits:
              cpu: "1000m"
              memory: "512Mi"

This defines a service that is exposed on the cluster via a particular gateway, DNS name, and port. It also defines a traffic split between two backend tiers: canary (10%) and stable (90%). Within these tiers, we also define the number of replicas and the resources they are allowed to use, along with the container that is deployed in them. Finally, it also defines a ConfigMap for the service, which passes along an environment variable to our app called PORT.

From here we could add definitions for all of our microservices in a similar manner, but in the interest of keeping this short, we'll just do one of the services here.

With this, we have a functionally complete Fabrikate definition for our deployment. Let's now see how we can use Fabrikate to generate resource manifests for it.

First, let's install the remote components and helm charts:

$ fab install

This installs all of the required components and charts locally and we can now generate the manifests for our deployment with:

$ fab generate prod azure

This will iterate through our deployment definition, collect configuration values from azure, prod, and common (in that priority order) and generate manifests as it descends breadth first. You can see the generated manifests in ./generated/prod-azure, which has the same logical directory structure as your deployment definition.
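For example, to get a quick feel for what was rendered, you can list and inspect the generated files (the second path below is illustrative; the exact file layout depends on your definition):

$ find generated/prod-azure -name '*.yaml'
$ less generated/prod-azure/cloud-native/elasticsearch-fluentd-kibana/elasticsearch.yaml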

Fabrikate is meant to be used as part of a CI/CD pipeline that commits the generated manifests to a repo so that they can be applied from a GitOps pod within the cluster like Flux, but if you have a Kubernetes cluster up and running you can also apply them directly with:

$ cd generated/prod-azure
$ kubectl apply --recursive -f .

This will cause a very large number of containers to spin up (which will take time to start completely as Kubernetes provisions persistent storage and downloads the container images), but after three or four minutes, you should see the full observability stack and microservices running in your cluster.
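You can watch the rollout progress across namespaces while this happens:

$ kubectl get pods --all-namespaces -w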

Documentation

We have complete details about how to use and contribute to Fabrikate in the project's documentation.

Community

Please join us on Slack for discussion and/or questions.

Bedrock

We maintain a sister project called Bedrock. Bedrock provides automation that makes operationalizing Kubernetes clusters with a GitOps deployment workflow easier: it automates a GitOps deployment model leveraging Flux and provides a CI/CD pipeline that automatically builds resource manifests from Fabrikate definitions.

fabrikate's People

Contributors

andrewdoing, bnookala, buzzfrog, edaena, evanlouie, haines, marcel-dias, michaelperel, microsoftopensource, msftgits, mtarng, runyontr, sarath-p, shubham1172, timfpark, yradsmikham


fabrikate's Issues

Fab install fails when attempting to add helm repo on a new client without running `helm init` first

The latest version of Fabrikate (v0.2.3+) will add helm repos and update helm chart dependencies successfully on an agent that already has helm installed and initialized (e.g. has run helm init). If the helm init command was never executed and the agent attempts to run any helm command, it will fail, causing fab install to fail when adding helm repos.

Fabrikate should perform a check on the helm CLI and ensure that it is initialized successfully before attempting to run helm commands.
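Until such a check exists, a possible workaround on a fresh Helm 2 agent is to initialize only the client side before running Fabrikate (this applies to Helm 2 only; Helm 3 removed helm init entirely):

$ helm init --client-only
$ fab install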

how would I attach an already created nginx-ingress-controller with external ip to kibana

I'm a newbie at this.
How would I attach an already created nginx-ingress-controller with external ip to kibana

[root@aks-bastion01 ~]# kubectl get service -l app=nginx-ingress -n acc
NAME                                    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-ingress-controller        LoadBalancer   10.2.0.70    4.4.8.123     80:32121/TCP,443:30899/TCP   45m
ingress-nginx-ingress-default-backend   ClusterIP      10.2.0.176   <none>        80/TCP

Fabrikate should be able to add subcomponents.

As a person in a devops role, it would make my job of developing a Fabrikate definition easier if I could easily add subcomponents to a component without having to manually add them myself.

For example, let's say I currently had the following component.yaml:

name: "prometheus-grafana"
generator: "static"
path: "./manifests"
subcomponents:
- name: "prometheus"
  type: "chart"
  source: "https://github.com/helm/charts"
  method: "git"
  path: "stable/prometheus"

and to complete it, I wanted to add the stable/grafana chart to it. With this command line feature, I could do the following:

$ fab add grafana --source https://github.com/helm/charts --path stable/grafana --type chart

which would add this remote component to component.yaml such that it looks like the following:

name: "prometheus-grafana"
generator: "static"
path: "./manifests"
subcomponents:
- name: "grafana"
  type: "chart"
  source: "https://github.com/helm/charts"
  method: "git"
  path: "stable/grafana"
- name: "prometheus"
  type: "chart"
  source: "https://github.com/helm/charts"
  method: "git"
  path: "stable/prometheus"

Note that fetching the source will utilize the git method by default.

Likewise, the user can also add Fabrikate components. For example, if they had the following component.yaml:

name: "cloud-native"
subcomponents:
  - name: "elasticsearch-fluentd-kibana"
    source: "https://github.com/timfpark/fabrikate-elasticsearch-fluentd-kibana"
    method: "git"
  - name: "istio"
    source: "https://github.com/timfpark/fabrikate-istio"
    method: "git"
  - name: "kured"
    source: "https://github.com/timfpark/fabrikate-kured"
    method: "git"
  - name: "jaeger"
    source: "https://github.com/bnookala/fabrikate-jaeger"
    method: "git"

And they wanted to add the prometheus-grafana component from the previous example to it, they could execute:

$ fab add prometheus-grafana --source https://github.com/timfpark/fabrikate-prometheus-grafana

And it would be added to their component.yaml:

name: "cloud-native"
subcomponents:
  - name: "elasticsearch-fluentd-kibana"
    source: "https://github.com/timfpark/fabrikate-elasticsearch-fluentd-kibana"
    method: "git"
  - name: "prometheus-grafana"
    source: "https://github.com/timfpark/fabrikate-prometheus-grafana"
    method: "git"
  - name: "istio"
    source: "https://github.com/timfpark/fabrikate-istio"
    method: "git"
  - name: "kured"
    source: "https://github.com/timfpark/fabrikate-kured"
    method: "git"
  - name: "jaeger"
    source: "https://github.com/bnookala/fabrikate-jaeger"
    method: "git"

Fails to generate cloud-native stack

Fabrikate: develop
Component: https://github.com/timfpark/fabrikate-cloud-native/ @master

When I run ./fab generate prod the following error is produced.

INFO[05-02-2019 15:38:13] 💾  Loading component.yaml
INFO[05-02-2019 15:38:13] 💾  Loading config/prod.yaml
INFO[05-02-2019 15:38:13] 💾  Loading config/common.yaml
INFO[05-02-2019 15:38:13] 💾  Loading components/elasticsearch-fluentd-kibana/component.json
INFO[05-02-2019 15:38:13] 💾  Loading components/elasticsearch-fluentd-kibana/config/common.yaml
INFO[05-02-2019 15:38:13] 🚚  generating component 'elasticsearch-fluentd-kibana' statically from path ./manifests
INFO[05-02-2019 15:38:13] 🚚  generating component 'elasticsearch' with helm with repo https://github.com/helm/charts
INFO[05-02-2019 15:38:13] 🚚  generating component 'elasticsearch-curator' with helm with repo https://github.com/helm/charts
INFO[05-02-2019 15:38:13] 🚚  generating component 'fluentd-elasticsearch' with helm with repo https://github.com/helm/charts
INFO[05-02-2019 15:38:13] 🚚  generating component 'kibana' with helm with repo https://github.com/helm/charts
INFO[05-02-2019 15:38:13] 💾  Loading components/prometheus-grafana/component.yaml
INFO[05-02-2019 15:38:13] 💾  Loading components/prometheus-grafana/config/common.yaml
INFO[05-02-2019 15:38:13] 🚚  generating component 'prometheus-grafana' statically from path ./manifests
INFO[05-02-2019 15:38:13] 🚚  generating component 'grafana' with helm with repo https://github.com/helm/charts
INFO[05-02-2019 15:38:13] 🚚  generating component 'prometheus' with helm with repo https://github.com/helm/charts
INFO[05-02-2019 15:38:13] 💾  Loading components/istio/component.yaml
INFO[05-02-2019 15:38:13] 💾  Loading components/istio/config/common.yaml
INFO[05-02-2019 15:38:13] 🚚  generating component 'istio' with helm with repo
INFO[05-02-2019 15:38:13] 🚚  generating component 'istio-namespace' statically from path ./manifests
INFO[05-02-2019 15:38:13] 💾  Loading components/kured/component.yaml
INFO[05-02-2019 15:38:13] 💾  Loading components/kured/config/common.yaml
INFO[05-02-2019 15:38:13] 🚚  generating component 'kured' with helm with repo https://github.com/helm/charts
INFO[05-02-2019 15:38:14] 💾  Loading components/jaeger/component.json
INFO[05-02-2019 15:38:14] 💾  Loading components/jaeger/config/prod.json
Error: failed to merge field `ComponentConfig.Subcomponents`: key 'jaeger': failed to merge field `ComponentConfig.Config`: key 'spark': Types do not match: map[interface {}]interface {}, map[string]interface {}
Usage:
  fab generate <env1> <env2> ... <envN> [flags]

Flags:
  -h, --help   help for generate

Global Flags:
      --verbose   Use verbose output logs

ERRO[05-02-2019 15:38:14] failed to merge field `ComponentConfig.Subcomponents`: key 'jaeger': failed to merge field `ComponentConfig.Config`: key 'spark': Types do not match: map[interface {}]interface {}, map[string]interface {}

Namespace creation error

After running k apply --recursive -f . on https://github.com/timfpark/fabrikate-cloud-native/ the errors below occur. They occur due to the cluster not completing the creation of the elasticsearch and grafana namespaces in time.

An operator can remedy this by manually re-applying the resources after a couple of seconds. What would the expected behaviour be in the context of Flux?

namespace/elasticsearch created
namespace/fluentd created
namespace/kibana created
configmap/elasticsearch created
serviceaccount/elasticsearch-client created
serviceaccount/elasticsearch-data created
serviceaccount/elasticsearch-master created
service/elasticsearch-client created
service/elasticsearch-discovery created
deployment.apps/elasticsearch-client created
statefulset.apps/elasticsearch-data created
statefulset.apps/elasticsearch-master created
configmap/fluentd-elasticsearch created
serviceaccount/fluentd-elasticsearch created
clusterrole.rbac.authorization.k8s.io/fluentd-elasticsearch created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-elasticsearch created
daemonset.apps/fluentd-elasticsearch created
configmap/kibana created
service/kibana created
deployment.apps/kibana created
podsecuritypolicy.extensions/grafana created
clusterrole.rbac.authorization.k8s.io/grafana-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/grafana-clusterrolebinding created
namespace/grafana created
namespace/prometheus created
configmap/prometheus-alertmanager created
configmap/prometheus-server created
persistentvolumeclaim/prometheus-alertmanager created
persistentvolumeclaim/prometheus-server created
serviceaccount/prometheus-alertmanager created
serviceaccount/prometheus-kube-state-metrics created
serviceaccount/prometheus-node-exporter created
serviceaccount/prometheus-pushgateway created
serviceaccount/prometheus-server created
clusterrole.rbac.authorization.k8s.io/prometheus-kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/prometheus-server created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-server created
service/prometheus-alertmanager created
service/prometheus-kube-state-metrics created
service/prometheus-node-exporter created
service/prometheus-pushgateway created
service/prometheus-server created
daemonset.extensions/prometheus-node-exporter created
deployment.extensions/prometheus-alertmanager created
deployment.extensions/prometheus-kube-state-metrics created
deployment.extensions/prometheus-pushgateway created
deployment.extensions/prometheus-server created
Error from server (NotFound): error when creating "elasticsearch-fluentd-kibana/elasticsearch-curator.yaml": namespaces "elasticsearch" not found
Error from server (NotFound): error when creating "elasticsearch-fluentd-kibana/elasticsearch-curator.yaml": namespaces "elasticsearch" not found
Error from server (NotFound): error when creating "prometheus-grafana/grafana.yaml": namespaces "grafana" not found
Error from server (NotFound): error when creating "prometheus-grafana/grafana.yaml": namespaces "grafana" not found
Error from server (NotFound): error when creating "prometheus-grafana/grafana.yaml": namespaces "grafana" not found
Error from server (NotFound): error when creating "prometheus-grafana/grafana.yaml": namespaces "grafana" not found
Error from server (NotFound): error when creating "prometheus-grafana/grafana.yaml": namespaces "grafana" not found
Error from server (NotFound): error when creating "prometheus-grafana/grafana.yaml": namespaces "grafana" not found
Error from server (NotFound): error when creating "prometheus-grafana/grafana.yaml": namespaces "grafana" not found
Error from server (NotFound): error when creating "prometheus-grafana/grafana.yaml": namespaces "grafana" not found
Error from server (NotFound): error when creating "prometheus-grafana/grafana.yaml": namespaces "grafana" not found

Scaffold environments from component.json file

It's somewhat tedious to have to guess, and often re-guess, the correct nesting structure of the yaml/json that is consumed by fab generate. If my component.json consumes stacks which are themselves composed of a number of components, then we end up with a deeply nested structure that looks like:

config:
subcomponents:
  my-cool-stack:
    config:
    subcomponents:
      component1:
        config:
           key: value
      component2:
         config:
           foo: bar

proposal: a fab scaffold my-new-environment command that will generate a yaml/json file within the "config" directory, enumerating the subcomponents defined by component.json, and automatically insert an empty subcomponent configuration into the yaml/json file.

  1. start with a component.json
name: "test-stack"
subcomponents:
- name: "elasticsearch-fluentd-kibana"
  source: "https://github.com/timfpark/fabrikate-elasticsearch-fluentd-kibana"
  method: "git"
- name: "prometheus-grafana"
  source: "https://github.com/timfpark/fabrikate-prometheus-grafana"
  method: "git"
  2. user runs fab scaffold cool-stack

  3. fabrikate generates an empty file, cool-stack.yaml in "generated", that contains the subcomponents (OT: should this iterate over subcomponents' subcomponents as well?):

config:
subcomponents:
  elasticsearch-fluentd-kibana:
    config: {}
    subcomponents: {}
  prometheus-grafana:
    config: {}
    subcomponents: {}

Verbose mode is always on

It is intended that fabrikate has a --verbose mode that turns on verbose debug logs. Currently, the cli tool is always logging in verbose mode, regardless of this flag.

Lock git references to specific version

As an operator, I want to lock my git references to a specific version to prevent the underlying resource from updating and breaking my deployment and so that my deployments are completely reproducible.

My proposal is to add a version field to both inlined references:

- name: "kafka"
  generator: "helm"
  repo: "https://github.com/helm/charts"
  version: "890e11b894d37cb25a8049d3d4fd87b4542d06f7"
  path: "incubator/kafka"  

and to remote components:

  - name: "istio"
    source: "https://github.com/evanlouie/fabrikate-istio"
    method: "git"
    version: "788ba8b8479061b861104355dfe16bdfcece9393"

that would lock them to this revision and Fabrikate would clone the underlying repo at that commit.
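Under the hood, pinning to a revision would amount to something like the following git operations (a sketch of the equivalent manual steps, not necessarily how Fabrikate would implement it):

$ git clone https://github.com/helm/charts components/kafka
$ cd components/kafka
$ git checkout 890e11b894d37cb25a8049d3d4fd87b4542d06f7    # pin the checkout to the requested revision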

Fabrikate should pull chart dependencies before generating

Found an issue in which the jaeger chart was failing to generate due to it having other chart dependencies. Fabrikate doesn't appear to pull dependencies currently, thus failing. I had to modify the helm generation code to reveal the error (it's currently swallowed):

ERRO[15-01-2019 18:19:23] helm template failed with: Error: found in requirements.yaml, but missing in charts/ directory: cassandra, elasticsearch

Add ability to specify a branch of a git source for a component.

As a component author, I would like to be able to specify a development branch of a component repo so that I can test changes before PRing them to the master branch.

In support of this, add a branch field to the component spec such that I can clone a particular source utilizing the git method at a particular version SHA (if so desired) on a particular branch.

Tests are not deterministic

The assertions in TestGenerateJSON and TestGenerateYAML fail on different clones of the repo because the ordering of the results is non-deterministic. Fix the tests to be tolerant of this non-determinism.

before-generate & after-generate hooks

As an operator, I want to have before-generate and after-generate hooks so that I can do necessary install/cleanup jobs before/after generating helm templates.

AC: support the following component:

name: istio-system
generator: static
path: "./manifests"
hooks:
  before-install:
    - wget https://github.com/istio/istio/releases/download/1.0.5/istio-1.0.5-linux.tar.gz
    - tar xvf istio-1.0.5-linux.tar.gz -C tmp
  after-install:
    - rm istio-1.0.5-linux.tar.gz
  before-generate:
    - echo "I DO SOMETHING"
  after-generate:
    - rm --rf tmp
subcomponents:
  - name: istio
    generator: helm
    path: "./tmp/istio-1.0.5/install/kubernetes/helm/istio"

Allow including settings from external files in config

As a developer I want to specify file names that contain config settings so that the main config file is more readable, maintainable and organized based on the type of settings.

Describe the solution you'd like:
Example: Prometheus alerts and rules make the config file really long and hard to manage. These settings could be specified in separate files, and in the main config file (e.g. prod.yaml) we could just indicate that we want to populate settings from those files.

fab generate should validate manifests

It's currently possible for fab generate to generate malformed yaml, which cannot be applied against a cluster (as a consequence of bad user input).

After generation, we can possibly use "kubectl --validate=true --dry-run --recursive -f ./generated/generated_environment" to perform validation against the output environment.
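Spelled out as a full command against the generated output (the environment directory name here is illustrative), the check would look something like:

$ kubectl apply --validate=true --dry-run --recursive -f ./generated/prod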

Tests need unexpected prep

Running the tests in ~/cmd requires that fab generate has been run previously in ~/test/fixtures/generate.

Fix this such that the repo can be cloned and the tests run directly.

Add support for Before and After hooks during installation

As an operator, I need to be able to hook into pre/post events during component installation so I can do basic orchestration for helm charts which do not belong in a helm repository.

Example:

{
  "name": "istio",
  "generator": "static",
  "path": "./manifests",
  "pre": "wget -q0- https://github.com/istio/istio/releases/download/1.0.5/istio-1.0.5-linux.tar.gz | tar xvz",
  "post": "rm -rf istio-1.0.5",
  "subcomponents": [
    {
      "name": "istio",
      "generator": "helm",
      "path": "./istio-1.0.5/install/kubernetes/helm/istio"
    }
  ]
}

Trace lineage for how config value got set.

As an operator, I want to be able to xray the config that would be utilized for my high level deployment such that I can see where the config value that was applied originated from.

Fabrikate swallowing error messages

When attempting to add new components via Fabrikate (e.g. Jaeger), any helm chart dependency failures are not displayed in logs. For example:

ERRO[29-01-2019 15:46:19] helm template failed with:

after investigating, the root cause of the issue was the following error:

jaeger git:(master) helm dependency update
Error: no repository definition for https://kubernetes-charts-incubator.storage.googleapis.com/, https://kubernetes-charts-incubator.storage.googleapis.com/. Please add them via 'helm repo add'

Fab should support adjusting config values

fab generate should support runtime parameters which override any config found in the merged configs of the HLD. This will help us update the image tag and repository when changes are made in ACR. Example: fab generate <env> -p <jsonpath>=<value> -p <jsonpath>=<value>

Validation fails when running `fab generate ...` in Azure DevOps Pipeline build

The new version of Fabrikate appears to include validation of emitted resource manifests after generation. However, this is a problem when running Fabrikate in an Azure Pipelines build, because the agent that spawns is unaware of AKS and thus validation will fail. This obviously does not fail when running Fabrikate locally.

The logs are showing the following error:

2019-02-07T23:17:22.5370522Z time="07-02-2019 23:17:22" level=error msg="helm template failed with: unable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch-curator.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch-curator.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch-fluentd-kibana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch-fluentd-kibana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch-fluentd-kibana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/fluentd-elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/fluentd-elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/fluentd-elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/fluentd-elasticsearch.yaml\": Get 
http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/fluentd-elasticsearch.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/kibana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/kibana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/elasticsearch-fluentd-kibana/kibana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/fabrikate-jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: 
dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/fabrikate-jaeger/jaeger.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get 
http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-adapter.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus-grafana.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize 
\"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\nunable to recognize \"generated/prod/infra/prometheus-grafana/prometheus.yaml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\n: output: \n" 2019-02-07T23:17:22.5385509Z Error: exit status 1

Could we add an argument for running fab generate... (e.g. --no-validation) that will skip the kubectl validation?

Support fab generation using private repos for subcomponents

Currently, fab generation operates normally when definition files are stored in public repos. However, in the case where definition files are stored in multiple private repos, fab generate will fail. Users should have the flexibility to use a personal access token set in an environment variable as part of the fabrikate definition file. This should provide users the ability to use multiple private repos from various git hosts (ADO, Github, etc.).

Example component.yaml:

{
    "name": "cloud-native-infra",
    "subcomponents": [
        {
            "name": "elasticsearch-fluentd-kibana",
            "source": "https://github.com/timfpark/fabrikate-elasticsearch-fluentd-kibana",
            "method": "git"
            "token": "$ACCESS_TOKEN"
        }
    ]
}

Add option to fab set to fail if setting a value would create a new setting.

Allow the option to fail the task if a new config pair would be created versus updating an existing value, by adding something like a fab set ... --newconfigfail switch. This would be useful when you're just updating an image tag value and want to ensure you don't accidentally add the wrong thing.

Multiple copies of Helm incubator and stable folders exist (one under each subcomponent directory) after fab install

fabrikate Version: 0.2.0

OS: Tried on macOS and on Windows 10 (Bash for Windows)

Steps to Replicate:

  • Followed the example on https://github.com/Microsoft/fabrikate
    ** Downloaded and unzipped 0.2.0 of fabrikate
    ** git clone https://github.com/Microsoft/fabrikate
    ** cd fabrikate/examples/getting-started
    ** $FABRIKATE_INSTALLATION_PATH/fab install

Observations:

  • output of ls in directory fabrikate/examples/getting-started/infra/components/elasticsearch-fluentd-kibana/helm_repos
$ ls examples/getting-started/infra/components/elasticsearch-fluentd-kibana/helm_repos

elasticsearch		elasticsearch-curator	fluentd-elasticsearch	kibana 
  • output of ls of incubator repos for elasticsearch (examples/getting-started/infra/components/elasticsearch-fluentd-kibana/helm_repos/elasticsearch/incubator/)
    $ ls -al examples/getting-started/infra/components/elasticsearch-fluentd-kibana/helm_repos/elasticsearch/incubator/
total 0
drwxr-xr-x  54 mani  staff  1728 Feb  8 12:51 .
drwxr-xr-x  16 mani  staff   512 Feb  8 12:51 ..
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 artifactory
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 aws-alb-ingress-controller
drwxr-xr-x   8 mani  staff   256 Feb  8 12:51 azuremonitor-containers
drwxr-xr-x   8 mani  staff   256 Feb  8 12:51 burrow
drwxr-xr-x   9 mani  staff   288 Feb  8 12:51 cassandra
drwxr-xr-x   8 mani  staff   256 Feb  8 12:51 chartmuseum
drwxr-xr-x   6 mani  staff   192 Feb  8 12:51 check-mk
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 common
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 couchdb
drwxr-xr-x  10 mani  staff   320 Feb  8 12:51 distribution
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 drone
drwxr-xr-x  10 mani  staff   320 Feb  8 12:51 druid
drwxr-xr-x  10 mani  staff   320 Feb  8 12:51 elastic-stack
drwxr-xr-x   9 mani  staff   288 Feb  8 12:51 elasticsearch
drwxr-xr-x   8 mani  staff   256 Feb  8 12:51 elasticsearch-curator
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 etcd
drwxr-xr-x   6 mani  staff   192 Feb  8 12:51 fluentd
drwxr-xr-x   6 mani  staff   192 Feb  8 12:51 fluentd-cloudwatch
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 fluentd-elasticsearch
drwxr-xr-x  10 mani  staff   320 Feb  8 12:51 gogs
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 goldfish
drwxr-xr-x   8 mani  staff   256 Feb  8 12:51 haproxy-ingress
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 istio
drwxr-xr-x   8 mani  staff   256 Feb  8 12:51 jaeger
drwxr-xr-x   9 mani  staff   288 Feb  8 12:51 jenkins-operator
drwxr-xr-x  10 mani  staff   320 Feb  8 12:51 kafka
drwxr-xr-x   8 mani  staff   256 Feb  8 12:51 keycloak
drwxr-xr-x   7 mani  staff   224 Feb  8 12:51 keycloak-proxy
.
.

  • output of ls of stable repos for elasticsearch (examples/getting-started/infra/components/elasticsearch-fluentd-kibana/helm_repos/elasticsearch/stable/)
$ ls -al examples/getting-started/infra/components/elasticsearch-fluentd-kibana/helm_repos/elasticsearch/stable/
total 0
drwxr-xr-x  258 mani  staff  8256 Feb  8 12:51 .
drwxr-xr-x   16 mani  staff   512 Feb  8 12:51 ..
drwxr-xr-x    7 mani  staff   224 Feb  8 12:51 acs-engine-autoscaler
drwxr-xr-x    7 mani  staff   224 Feb  8 12:51 aerospike
drwxr-xr-x   10 mani  staff   320 Feb  8 12:51 airflow
drwxr-xr-x   10 mani  staff   320 Feb  8 12:51 anchore-engine
drwxr-xr-x    7 mani  staff   224 Feb  8 12:51 apm-server
drwxr-xr-x    7 mani  staff   224 Feb  8 12:51 ark
drwxr-xr-x   10 mani  staff   320 Feb  8 12:51 artifactory
drwxr-xr-x   10 mani  staff   320 Feb  8 12:51 artifactory-ha
drwxr-xr-x   10 mani  staff   320 Feb  8 12:51 atlantis
drwxr-xr-x    7 mani  staff   224 Feb  8 12:51 auditbeat
drwxr-xr-x    7 mani  staff   224 Feb  8 12:51 aws-cluster-autoscaler
drwxr-xr-x    7 mani  staff   224 Feb  8 12:51 bitcoind
drwxr-xr-x   10 mani  staff   320 Feb  8 12:51 bookstack
drwxr-xr-x    8 mani  staff   256 Feb  8 12:51 buildkite
drwxr-xr-x   10 mani  staff   320 Feb  8 12:51 burrow
drwxr-xr-x    8 mani  staff   256 Feb  8 12:51 centrifugo
drwxr-xr-x    8 mani  staff   256 Feb  8 12:51 cerebro
drwxr-xr-x   12 mani  staff   384 Feb  8 12:51 cert-manager
drwxr-xr-x    8 mani  staff   256 Feb  8 12:51 chaoskube
.
.

** Similar incubator and stable helm charts (same ls output as above) exist for all the folders mentioned below:
examples/getting-started/infra/components/elasticsearch-fluentd-kibana/helm_repos/elasticsearch-curator/ , examples/getting-started/infra/components/elasticsearch-fluentd-kibana/helm_repos/fluentd-elasticsearch/ , and examples/getting-started/infra/components/elasticsearch-fluentd-kibana/helm_repos/kibana/ as well

** I wanted to know if having multiple copies of the entire incubator and stable helm charts under each of the subcomponent directories is expected?

Add ability to bundle tool hooks with Fabrikate components

As an operator, after utilizing a publicly available Fabrikate component like prometheus-grafana, I find myself writing scripts like this to be able to easily bring up the Grafana dashboard:

sleep 2 && open http://localhost:3000/d/4XuMd2Iiz/kubernetes-cluster-prometheus?orgId=1 &
kubectl --namespace grafana port-forward $(kubectl get pods -n grafana -o jsonpath="{.items[0].metadata.name}") 3000

It would be great if these sorts of scripts could ride along with the component itself for convenience.
Instead, as an operator, I want to be able to do something like this:

$ fab run grafana

where behind the scenes the author of a Fabrikate component has added a commands item to the component:

name: "prometheus-grafana"
generator: "static"
path: "./manifests"
subcomponents:
- name: "grafana"
  generator: "helm"
  repo: "https://github.com/helm/charts"
  path: "stable/grafana",
  commands:
    grafana:
    - sleep 1 && open http://localhost:3000/d/4XuMd2Iiz/kubernetes-cluster-prometheus?orgId=1 &
    - kubectl --namespace grafana port-forward $(kubectl get pods -n grafana -o jsonpath="{.items[0].metadata.name}") 3000
- name: "prometheus"
  generator: "helm"
  repo: "https://github.com/helm/charts"
  path: "stable/prometheus"

enabling Fabrikate to walk the component tree and find this grafana command and run each step one by one.

Add component model definition

As a developer, I want comprehensive documentation around the component model used in Fabrikate so I can effectively write Fabrikate definitions.

Part of epic #120.

fabrikate should be able to handle custom chart repositories

Some infrastructures (like istio) host their own chart repositories, and have subchart dependencies which pull from them. Stack maintainers should be able to specify custom chart repositories that their stack relies on so that helm dependency update or helm fetch can successfully be run.

Example component.yaml:

name: istio
generator: helm
repository:
    - istio.io
       https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
    - incubator
       https://kubernetes-charts-incubator.storage.googleapis.com/
…

Perhaps the repository can be added during fab install and dependency updates, and removed right after, to avoid cluttering the user's helm repository list.
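For reference, the manual step stack users have to run today before helm dependency update succeeds looks like this (the repository names and URLs are the ones from the example above):

$ helm repo add istio.io https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
$ helm repo update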

Add comprehensive documentation

This is an epic tracking issue to cover a range of issues in the documentation

  • How to get started contributing to the tool as a developer
  • Component object model documentation (including hooks)
  • Configuration file documentation
  • Documentation for each command in Fabrikate
  • Best practices in terms of how to structure your Fabrikate definition.

Fabrikate config merging algorithm does not produce intended configuration

When building the Cloud Native stack (https://github.com/timfpark/fabrikate-cloud-native/), fab is producing the wrong config through the process of merging, thus generating charts that deploy too many resources.

Example common.json from fabrikate-jaeger:

{
    "config": {},
    "subcomponents": {
        "jaeger": {
            "config": {
                "namespace": "jaeger"
            }
        }
    }
}

Example prod.json from fabrikate-jaeger:

{
    "config": {},
    "subcomponents": {
        "jaeger": {
            "config": {
                "provisionDataStore": {
                    "cassandra": false,
                    "elasticsearch": true
                },
                "storage": {
                    "type": "elasticsearch"
                },
                "elasticsearch": {
                    "rbac": {
                        "create": true
                    }
                },
                "spark": {
                    "enabled": true
                },
                "collector": {
                    "replicaCount": 5
                },
                "query": {
                    "replicaCount": 2
                }
            }
        }
    }
}

prod.yaml from cloud native stack:

  jaeger:
    config:
    subcomponents:
      jaeger:
        config:
          spark:
            enabled: true
            resources:
              limits:
                cpu: 500m
                memory: 512Mi
              requests:
                cpu: 256m
                memory: 128Mi
          provisionDataStore:
            cassandra: false
            elasticsearch: false
          storage:
            type: "elasticsearch"
            elasticsearch:
              scheme: "http"
              host: "elasticsearch-client.elasticsearch.svc.cluster.local"
              port: 9200
          collector:
            replicaCount: 3
            resources:
              limits:
                cpu: 1
                memory: 1Gi
              requests:
                cpu: 500m
                memory: 512Mi
          query:
            replicaCount: 2
            resources:
              limits:
                cpu: 500m
                memory: 512Mi
              requests:
                cpu: 256m
                memory: 128Mi
          agent:
            resources:
              limits:
                cpu: 500m
                memory: 512Mi
              requests:
                cpu: 256m
                memory: 128Mi

As you can see, the prod.yaml from the cloud native stack has a provisionDataStore section that disables provisioning of both cassandra and elasticsearch:

          provisionDataStore:
            cassandra: false
            elasticsearch: false

but the merged configuration retains the use of elasticsearch: true from fabrikate-jaeger's prod.json file. Output logs from fab generate prod --verbose in the cloud native stack:

INFO[01-02-2019 14:30:25] 💾  Loading components/jaeger/component.json
INFO[01-02-2019 14:30:25] 💾  Loading components/jaeger/config/prod.json
INFO[01-02-2019 14:30:25] 💾  Loading components/jaeger/config/common.json
INFO[01-02-2019 14:30:25] 🚚  generating component 'fabrikate-jaeger' statically from path ./manifests
DEBU[01-02-2019 14:30:25] Iterating component fabrikate-jaeger with config:
config: {}
subcomponents:
  jaeger:
    config:
      agent:
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 256m
            memory: 128Mi
      collector:
        replicaCount: 3
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
      elasticsearch:
        rbac:
          create: true
      namespace: jaeger
      provisionDataStore:
        cassandra: false
        elasticsearch: true
      query:
        replicaCount: 2
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 256m
            memory: 128Mi
      spark:
        enabled: true
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 256m
            memory: 128Mi
      storage:
        elasticsearch:
          host: elasticsearch-client.elasticsearch.svc.cluster.local
          port: 9200
          scheme: http
        type: elasticsearch
    subcomponents: {}
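The expectation here is that the deployment-level prod.yaml should win over the remote component's own prod.json. A minimal sketch of a deep merge with that precedence, using plain map[string]interface{} config trees; Fabrikate's real merge logic is more involved and, per this issue, is not currently producing this result:

package main

import "fmt"

// mergeConfig deep-merges override on top of base: values present in
// override win, and nested maps are merged recursively. This only sketches
// the precedence the issue expects (deployment-level config over the remote
// component's own defaults); it is not Fabrikate's implementation.
func mergeConfig(base, override map[string]interface{}) map[string]interface{} {
    out := map[string]interface{}{}
    for k, v := range base {
        out[k] = v
    }
    for k, v := range override {
        if ov, ok := v.(map[string]interface{}); ok {
            if bv, ok := out[k].(map[string]interface{}); ok {
                out[k] = mergeConfig(bv, ov)
                continue
            }
        }
        out[k] = v
    }
    return out
}

func main() {
    fromComponent := map[string]interface{}{
        "provisionDataStore": map[string]interface{}{"cassandra": false, "elasticsearch": true},
    }
    fromDeployment := map[string]interface{}{
        "provisionDataStore": map[string]interface{}{"elasticsearch": false},
    }
    // elasticsearch should come out false, matching the deployment's prod.yaml.
    fmt.Println(mergeConfig(fromComponent, fromDeployment))
}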

Add contribution documentation

Per epic issue #120, add documentation around how to contribute to the project:

  • How to build Fabrikate
  • How to run tests
  • How to run linter

Error on non-string namespace property

Context: fabrikate-ambassador component
The namespace is specified like this: https://github.com/helm/charts/blob/master/stable/ambassador/values.yaml

namespace:
  single: false
  # name: default

Steps:
In common.yaml file:

config:
subcomponents:
  ambassador: 
    config:
      replicaCount: 3
      namespace:
        name: ambassador

fab generate prod

Expected: deployment.yaml is generated

Seen:

INFO[25-02-2019 16:31:02] 💾  Loading config/common.yaml
INFO[25-02-2019 16:31:02] 🚚  generating component 'fabrikate-ambassador' statically from path ./manifests
INFO[25-02-2019 16:31:02] 🚚  generating component 'ambassador' with helm with repo https://github.com/helm/charts
panic: interface conversion: interface {} is map[string]interface {}, not string

goroutine 1 [running]:
github.com/Microsoft/fabrikate/generators.(*HelmGenerator).Generate(0xcab0a8, 0xc00012a3c0, 0x0, 0x870bc0, 0x1, 0xcab0a8)
	/Users/evlouie/go/src/github.com/Microsoft/fabrikate/generators/helm.go:89 +0xba1
github.com/Microsoft/fabrikate/core.(*Component).Generate(0xc00012a3c0, 0x9709a0, 0xcab0a8, 0x2, 0x2)
	/Users/evlouie/go/src/github.com/Microsoft/fabrikate/core/component.go:217 +0xdd
github.com/Microsoft/fabrikate/cmd.Generate.func1(0x8e0808, 0x2, 0xc00012a3c0, 0x2, 0x2)
	/Users/evlouie/go/src/github.com/Microsoft/fabrikate/cmd/generate.go:68 +0x69
github.com/Microsoft/fabrikate/core.IterateComponentTree(0x8e0808, 0x2, 0xc0000472c0, 0x1, 0x1, 0x90e090, 0x32, 0xcabea0, 0x0, 0x600fb2, ...)
	/Users/evlouie/go/src/github.com/Microsoft/fabrikate/core/component.go:299 +0xcb8
github.com/Microsoft/fabrikate/cmd.Generate(0x8e0808, 0x2, 0xc0000472c0, 0x1, 0x1, 0xc000137c01, 0x652d40, 0xc000137cb8, 0xc000137c88, 0x0, ...)
	/Users/evlouie/go/src/github.com/Microsoft/fabrikate/cmd/generate.go:58 +0x80
github.com/Microsoft/fabrikate/cmd.glob..func1(0xc84ba0, 0xc0000472c0, 0x1, 0x1, 0x0, 0x0)
	/Users/evlouie/go/src/github.com/Microsoft/fabrikate/cmd/generate.go:106 +0x125
github.com/spf13/cobra.(*Command).execute(0xc84ba0, 0xc000047270, 0x1, 0x1, 0xc84ba0, 0xc000047270)
	/Users/evlouie/go/src/github.com/spf13/cobra/command.go:762 +0x473
github.com/spf13/cobra.(*Command).ExecuteC(0xc85060, 0x5847c6, 0xc000052270, 0x90e198)
	/Users/evlouie/go/src/github.com/spf13/cobra/command.go:852 +0x2fd
github.com/spf13/cobra.(*Command).Execute(0xc85060, 0x5820d1, 0xc000052240)
	/Users/evlouie/go/src/github.com/spf13/cobra/command.go:800 +0x2b
github.com/Microsoft/fabrikate/cmd.Execute()
	/Users/evlouie/go/src/github.com/Microsoft/fabrikate/cmd/root.go:36 +0x2d
main.main()
	/Users/evlouie/go/src/github.com/Microsoft/fabrikate/main.go:15 +0x6c
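The panic is a blind type assertion of the namespace value to string. One defensive alternative (a sketch, not the actual fix) is a type switch that accepts either a string or a map and returns a useful error otherwise:

package main

import "fmt"

// namespaceFromConfig reads a namespace value that may be either a plain
// string or a nested map (as in the ambassador chart's namespace: {single:
// false} shape) instead of panicking on a failed type assertion.
func namespaceFromConfig(v interface{}) (string, error) {
    switch ns := v.(type) {
    case nil:
        return "", nil
    case string:
        return ns, nil
    case map[string]interface{}:
        // Look for a conventional `name` key; otherwise report a clear error.
        if name, ok := ns["name"].(string); ok {
            return name, nil
        }
        return "", fmt.Errorf("namespace config is a map without a string 'name' field: %v", ns)
    default:
        return "", fmt.Errorf("namespace config has unexpected type %T", v)
    }
}

func main() {
    for _, v := range []interface{}{
        "ambassador",
        map[string]interface{}{"name": "ambassador", "single": false},
        map[string]interface{}{"single": false},
    } {
        name, err := namespaceFromConfig(v)
        fmt.Println(name, err)
    }
}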

Handle fab set where 2+ matches exist in subpaths

If there is more than one tag in a subpath, fab set should detect this, fail and display useful information so the user can fix the command.

For instance, imageTag is present for multiple charts under the staging environment with a command like this:
fab set --environment staging imageTag=foo:121

Fabrikate might fail and return something like this in an error message:
imageTag exists at multiple paths: chart1.app1.imageTag, chart2.app2.imageTag. Specify a set path that matches only one item.

This command would succeed:
fab set --environment staging app1.imageTag=foo:121 app2.imageTag=bar:343
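A sketch of how the ambiguity check could work: collect every config path whose final segment matches the key being set, and fail when more than one matches. The findKeyPaths helper is hypothetical, not Fabrikate's code:

package main

import (
    "fmt"
    "strings"
)

// findKeyPaths walks a config tree and returns the dotted path of every
// leaf whose final segment matches key.
func findKeyPaths(config map[string]interface{}, key, prefix string) []string {
    var matches []string
    for k, v := range config {
        path := k
        if prefix != "" {
            path = prefix + "." + k
        }
        if child, ok := v.(map[string]interface{}); ok {
            matches = append(matches, findKeyPaths(child, key, path)...)
            continue
        }
        if k == key {
            matches = append(matches, path)
        }
    }
    return matches
}

func main() {
    staging := map[string]interface{}{
        "chart1": map[string]interface{}{"app1": map[string]interface{}{"imageTag": "old"}},
        "chart2": map[string]interface{}{"app2": map[string]interface{}{"imageTag": "old"}},
    }
    if matches := findKeyPaths(staging, "imageTag", ""); len(matches) > 1 {
        fmt.Printf("imageTag exists at multiple paths: %s. Specify a set path that matches only one item.\n",
            strings.Join(matches, ", "))
    }
}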

Support Component Templating

As an operator, I need to be able to inject runtime values into my component JSON so I can parameterize my components with dynamic values.

Example:

{
  "name": "istio",
  "generator": "static",
  "path": "./manifests",
  "pre": "wget -q0- https://github.com/istio/istio/releases/download/${version}/istio-${version}-linux.tar.gz | tar xvz",
  "subcomponents": [
    {
      "name": "istio",
      "generator": "helm",
      "path": "./istio-${version}/install/kubernetes/helm/istio"
    }
  ]
}
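A sketch of one possible substitution step, expanding ${name}-style placeholders in the raw component definition before it is parsed; the expandComponent helper and the source of the runtime values are assumptions, not an existing feature:

package main

import (
    "fmt"
    "os"
)

// expandComponent substitutes ${name}-style placeholders in a raw component
// definition from a map of runtime values, leaving unknown placeholders
// untouched so they can be reported later.
func expandComponent(raw string, values map[string]string) string {
    return os.Expand(raw, func(name string) string {
        if v, ok := values[name]; ok {
            return v
        }
        return "${" + name + "}"
    })
}

func main() {
    raw := `{"path": "./istio-${version}/install/kubernetes/helm/istio"}`
    fmt.Println(expandComponent(raw, map[string]string{"version": "1.0.6"}))
}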

Getting no matches for kind "APIResourceList" in version "v1" when kubectl apply --recursive -f .

I am getting the errors below when I run kubectl apply --recursive -f . on the generated manifests, but if I delete these files:

rm -rf ./fabrikate/examples/getting-started/generated/prod/.kube ./fabrikate/examples/getting-started/.kube

Everything works the second time around.

How do I stop these .kube directories and cache files from being generated, and use the one in my starting directory instead?

creating kubernetes namespace elasticsearch namespace/elasticsearch created configmap/elasticsearch-curator-config created (dry run) cronjob.batch/elasticsearch-curator created (dry run) namespace/elasticsearch configured (dry run) namespace/fluentd created (dry run) namespace/kibana created (dry run) configmap/elasticsearch created (dry run) serviceaccount/elasticsearch-client created (dry run) serviceaccount/elasticsearch-data created (dry run) serviceaccount/elasticsearch-master created (dry run) service/elasticsearch-client created (dry run) service/elasticsearch-discovery created (dry run) deployment.apps/elasticsearch-client created (dry run) statefulset.apps/elasticsearch-data created (dry run) statefulset.apps/elasticsearch-master created (dry run) configmap/fluentd-elasticsearch created (dry run) serviceaccount/fluentd-elasticsearch created (dry run) clusterrole.rbac.authorization.k8s.io/fluentd-elasticsearch created (dry run) clusterrolebinding.rbac.authorization.k8s.io/fluentd-elasticsearch created (dry run) daemonset.apps/fluentd-elasticsearch created (dry run) configmap/kibana created (dry run) service/kibana created (dry run) deployment.apps/kibana created (dry run) unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/admissionregistration.k8s.io/v1alpha1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/admissionregistration.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/apiextensions.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/apiregistration.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/apps/v1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/apps/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/apps/v1beta2/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/authentication.k8s.io/v1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/authentication.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/authorization.k8s.io/v1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/authorization.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in 
version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/autoscaling/v1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/autoscaling/v2beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/batch/v1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/batch/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/certificates.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/events.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/extensions/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/networking.k8s.io/v1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/policy/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/rbac.authorization.k8s.io/v1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/rbac.authorization.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" error validating ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/servergroups.json": error validating data: [ValidationError(APIGroupList.groups[0]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[1]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[2]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[3]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[4]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[5]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[6]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[7]): missing required field 
"serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[8]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[9]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[10]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[11]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[12]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[13]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[14]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[15]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup, ValidationError(APIGroupList.groups[16]): missing required field "serverAddressByClientCIDRs" in io.k8s.apimachinery.pkg.apis.meta.v1.APIGroup]; if you choose to ignore these errors, turn validation off with --validate=false unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/servicecatalog.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/storage.k8s.io/v1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/storage.k8s.io/v1beta1/serverresources.json": no matches for kind "APIResourceList" in version "v1" unable to recognize ".kube/cache/discovery/aksalfresc_aksalfrescoresou_484e84_e37dc3df.hcp.eastus2.azmk8s.io_443/v1/serverresources.json": no matches for kind "APIResourceList" in version "v1"

Wholesale coercion to strings causing issues when type expected

The current approach of coercing all values to strings causes errors when the downstream YAML expects a different type, as with this Grafana error:

t=2019-02-10T18:08:55+0000 lvl=eror msg="Server shutdown" logger=server reason="Service init failed: Datasource provisioning error: yaml: unmarshal errors:\n line 1: cannot unmarshal !!str 1 into int64"

Since the simplest possible approach isn't working, investigate a more involved one: coerce to the type that will win the merge (e.g. the existing type).
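A sketch of that type-preserving coercion, converting the incoming string to whatever type the existing value already has; illustrative only, not Fabrikate's code:

package main

import (
    "fmt"
    "strconv"
)

// coerceToExisting converts a string value to match the type of the value it
// is about to replace, so that e.g. an int stays an int after the merge.
func coerceToExisting(existing interface{}, incoming string) interface{} {
    switch existing.(type) {
    case int, int64:
        if n, err := strconv.ParseInt(incoming, 10, 64); err == nil {
            return n
        }
    case float64:
        if f, err := strconv.ParseFloat(incoming, 64); err == nil {
            return f
        }
    case bool:
        if b, err := strconv.ParseBool(incoming); err == nil {
            return b
        }
    }
    return incoming // fall back to the string itself
}

func main() {
    fmt.Println(coerceToExisting(3, "1"))        // stays numeric, not "1"
    fmt.Println(coerceToExisting(true, "false")) // stays a bool
    fmt.Println(coerceToExisting("x", "y"))      // stays a string
}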

Remote components should respect path

As a devops engineer on a development team, I want to be able to specify a path in a remote component spec such that the component is loaded from that path, so that I can combine a Fabrikate definition into a larger repo.

Install is slow, with repetitive git clones of the same helm repos

Fabrikate, as currently written, git clones the helm repo for a particular chart with --depth 1 (to fetch only the specific commit desired by the user), but this also pulls down all of the other charts in the repo's other paths, which are unneeded. As an operator or developer, I don't want to have to wait for slow CI/CD builds.

This issue tracks as-yet-undesigned work that would make this faster for developers and operators while still enabling functionality like differing locked versions in different components.
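One possible direction (purely illustrative, not a design) is to hand out a single clone per repository URL and reuse it across components; the sketch below deliberately ignores per-component branch and commit pinning, which a real design would still need to handle:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "sync"
)

// cloneCache hands out one local checkout per repository URL, so repeated
// components that reference the same chart repo share a single clone.
type cloneCache struct {
    mu    sync.Mutex
    dir   string
    paths map[string]string
}

func (c *cloneCache) get(repoURL string) (string, error) {
    c.mu.Lock()
    defer c.mu.Unlock()
    if path, ok := c.paths[repoURL]; ok {
        return path, nil
    }
    path := filepath.Join(c.dir, fmt.Sprintf("repo-%d", len(c.paths)))
    cmd := exec.Command("git", "clone", "--depth", "1", repoURL, path)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        return "", err
    }
    c.paths[repoURL] = path
    return path, nil
}

func main() {
    cache := &cloneCache{dir: os.TempDir(), paths: map[string]string{}}
    // Both lookups below resolve to the same clone on disk.
    p1, _ := cache.get("https://github.com/helm/charts")
    p2, _ := cache.get("https://github.com/helm/charts")
    fmt.Println(p1 == p2) // true
}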

Fabrikate allows namespaces to be added to Kubernetes objects that don't support namespaces

The Issue
Adding namespaces to the manifest YAML files of un-namespaced objects such as ClusterRole and ClusterRoleBinding means that when those objects are applied to a cluster, kubectl describe will not show the namespace defined in the manifest YAML.

How it affects users
Consequently, tools like kubediff that show differences between your running configuration and your version-controlled configuration will report an error.

If the manifest repo in a GitOps context is supposed to match the reality of what is in the cluster, then we shouldn't have (or allow) namespaces on Kubernetes objects that don't allow them.

To Reproduce:
An example of this issue is in the Istio Fabrikate repo (part of the Cloud Native Fabrikate stack). A namespace override is applied in the common.yaml file. This seems to append namespace: istio-system to all Kubernetes objects in the Istio Helm chart at the .metadata.namespace level.

For instance, the Istio Helm chart template for a ClusterRole kind has no namespace in the .metadata.* path; however, a metadata.namespace is created when namespace: istio-system is added at the .config.namespace path in common.yaml:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    helm.sh/hook: post-delete
    helm.sh/hook-delete-policy: hook-succeeded
    helm.sh/hook-weight: "2"
  labels:
    app: security
    chart: security-1.0.6
    heritage: Tiller
    release: istio
  name: istio-cleanup-secrets-istio-system
  namespace: istio-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istio-cleanup-secrets-istio-system
subjects:
- kind: ServiceAccount
  name: istio-cleanup-secrets-service-account
  namespace: istio-system

This could be an issue with the Fabrikate definition, not Fabrikate itself. It's unclear at this point.

Expected behavior:
When I remove .config.namespace from common.yaml, fab generate prod produces the output below with no .metadata.namespace value. Note that default becomes the value for the namespace.

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: istio-cleanup-secrets-default
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "2"
  labels:
    app: security
    chart: security-1.0.6
    heritage: Tiller
    release: istio
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istio-cleanup-secrets-default
subjects:
  - kind: ServiceAccount
    name: istio-cleanup-secrets-service-account
    namespace: default

Additional context:
A list of namespaced and un-namespaced objects in Kubernetes can be seen via the commands:
kubectl api-resources --namespaced=true
kubectl api-resources --namespaced=false
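Building on that distinction, a generator could consult the set of cluster-scoped kinds before injecting metadata.namespace. The hardcoded list below is illustrative and incomplete; a real implementation would discover it from the API server:

package main

import "fmt"

// clusterScopedKinds is an illustrative (not exhaustive) set of Kubernetes
// kinds that do not live in a namespace.
var clusterScopedKinds = map[string]bool{
    "Namespace":                true,
    "ClusterRole":              true,
    "ClusterRoleBinding":       true,
    "CustomResourceDefinition": true,
    "PersistentVolume":         true,
}

// shouldInjectNamespace sketches the check a generator could make before
// writing metadata.namespace onto a rendered manifest.
func shouldInjectNamespace(kind string) bool {
    return !clusterScopedKinds[kind]
}

func main() {
    fmt.Println(shouldInjectNamespace("Deployment"))         // true
    fmt.Println(shouldInjectNamespace("ClusterRoleBinding")) // false
}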

In closing
One or more of the following may be true:

  • The Istio Fabrikate definition is incorrectly configured for Fabrikate
  • Fabrikate has an issue
  • Fabrikate should not care about Kubernetes specific schemas
  • The Istio Helm chart has some sort of issue

I'm using version 0.3.0 of Fabrikate

Example output from Kubediff
Below is output from a modified version of Kubediff (that exposes more logging). Kubediff compares the local istio.yaml file (from a manifest repo) against what is on the cluster:

$ ./kubediff ../manifest-sample/cloud-native/istio/istio.yaml
File:	../manifest-sample/cloud-native/istio/istio.yaml
Checking ConfigMap.v1. 'istio-galley-configuration'
Checking ConfigMap.v1. 'istio-statsd-prom-bridge'
Checking ConfigMap.v1. 'istio-security-custom-resources'
Checking ConfigMap.v1. 'istio'
Checking ConfigMap.v1. 'istio-sidecar-injector'
Checking ServiceAccount.v1. 'istio-galley-service-account'
Checking ServiceAccount.v1. 'istio-egressgateway-service-account'
Checking ServiceAccount.v1. 'istio-ingressgateway-service-account'
Checking ServiceAccount.v1. 'istio-mixer-service-account'
Checking ServiceAccount.v1. 'istio-pilot-service-account'
Checking ServiceAccount.v1. 'istio-cleanup-secrets-service-account'
Checking ClusterRole.v1beta1.rbac.authorization.k8s.io  'istio-cleanup-secrets-istio-system'
diff found...
## istio-system/istio-cleanup-secrets-istio-system (ClusterRole.v1beta1.rbac.authorization.k8s.io)

.metadata: 'namespace' missing
Checking ClusterRoleBinding.v1beta1.rbac.authorization.k8s.io 'istio-cleanup-secrets-istio-system'
diff found...
## istio-system/istio-cleanup-secrets-istio-system (ClusterRoleBinding.v1beta1.rbac.authorization.k8s.io)

.metadata: 'namespace' missing
Checking Job.v1.batch 'istio-cleanup-secrets'
Checking ServiceAccount.v1. 'istio-security-post-install-account'
Checking ClusterRole.v1beta1.rbac.authorization.k8s.io 'istio-security-post-install-istio-system'
diff found...
...

Do not require an environment during generation

We always use common as the ultimate fallback configuration. Given this, we do not need to require specifying a config environment, and we should be able to simply run fab generate when environment-specific config is not required.

fab generate fails validation on CRDs

INFO[12-02-2019 15:42:16] 🔬  validating generated manifests in path generated/prod
ERRO[12-02-2019 15:42:18] validating generated manifests failed with: unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "stdio" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "logentry" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "logentry" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "prometheus" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "kubernetesenv" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "kubernetes" in version "config.istio.io/v1alpha2"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3"
unable to recognize "generated/prod/istio/istio.yaml": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3"

I noticed this issue when running fab generate prod on the cloud native stack (https://github.com/timfpark/fabrikate-cloud-native/).
