dekorateio / dekorate

Tools for generating Kubernetes related manifests.

License: Apache License 2.0

kubernetes openshift knative tekton java

dekorate's Introduction


Features

Experimental features

  • Register hooks for triggering builds and deployment

Rationale

There are tons of tools out there for scaffolding / generating Kubernetes manifests. Sooner or later these manifests will require customization, and handcrafting is not an appealing option. Using external tools is often too generic. Using build tool extensions and adding configuration via xml, groovy etc. is a step forward, but still not optimal.

Annotation processing has quite a few advantages over external tools or build tool extensions:

  • Configuration is validated by the compiler.
  • Leverages tools like the IDE for writing type safe config (checking, completion etc).
  • Works with all build tools.
  • Can "react" to annotations provided by the framework.

Hello World

This section provides examples on how to get started based on the framework you are using.

NOTE: All examples in this README use the version that corresponds to the target branch. On github master that is the latest 2.x release.

Hello Spring Boot

Add the following dependency to your project:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>kubernetes-spring-starter</artifactId>
  <version>4.1.3</version>
</dependency>

That's all! Next time you perform a build, using something like:

mvn clean package

The generated manifests can be found under target/classes/META-INF/dekorate.


related examples

Hello Quarkus

Add the following dependency to your project:

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-kubernetes</artifactId>
  <version>1.0.0.Final</version>
</dependency>

That's all! Next time you perform a build, using something like:

mvn clean package

The generated manifests can be found under target/kubernetes. Note: Quarkus uses its own dekorate-based Kubernetes extension (see more at Quarkus).


Hello Thorntail

Add the following dependency to your project:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>thorntail-spring-starter</artifactId>
  <version>4.1.3</version>
</dependency>

That's all! Next time you perform a build, using something like:

mvn clean package

The generated manifests can be found under target/classes/META-INF/dekorate.


related examples

Hello Generic Java Application

Add the following dependency to your project:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>kubernetes-annotations</artifactId>
  <version>4.1.3</version>
</dependency>

Then add the @Dekorate annotation to one of your Java source files.

package org.acme;

import io.dekorate.annotation.Dekorate;

@Dekorate
public class Application {
}

Note: It doesn't have to be the Main class. Next time you perform a build, using something like:

mvn clean package

The generated manifests can be found under target/classes/META-INF/dekorate.


related examples

Usage

To start using this project you just need to add one of the provided dependencies to your project. For known frameworks like Spring Boot, Quarkus, or Thorntail that's enough. For generic Java projects, you also need to add an annotation that expresses your intent to enable dekorate.

This annotation can be either @Dekorate or a more specialized one, which also gives us access to more specific configuration options. Further configuration is feasible using:

  • Java annotations
  • Configuration properties (application.properties)
  • Both

A complete reference of the supported properties can be found in the configuration options guide.

Kubernetes

@KubernetesApplication is a more specialized form of @Dekorate. It can be added to your project like:

import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication
public class Main {

    public static void main(String[] args) {
      //Your application code goes here.
    }
}

When the project gets compiled, the annotation will trigger the generation of a Deployment in both json and yml that will end up under 'target/classes/META-INF/dekorate'.

The annotation comes with a lot of parameters, which can be used to customize the Deployment and/or trigger the generation of additional resources, like Service and Ingress.

Adding the kubernetes annotation processor to the classpath

This module can be added to the project using:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>kubernetes-annotations</artifactId>
  <version>4.1.3</version>
</dependency>

Name and Version

So where does the generated Deployment get its name, docker image etc. from?

Everything can be customized via annotation parameters, application configuration and system properties. On top of that, lightweight integration with build tools is provided in order to reduce duplication.

Note that part-of, name and version are part of multiple annotations / configuration groups etc.

When a single application configuration is found and no explicit image configuration value has been used for group, name & version, the values from the application configuration will be used.

For example:

@KubernetesApplication(name="my-app")
@DockerBuild(registry="quay.io")
public class Main {
}

In the example above, docker is configured with no explicit value for name. In this case the name from @KubernetesApplication(name="my-app") will be used.

The same applies when property configuration is used:

dekorate.kubernetes.name=my-app
dekorate.docker.registry=quay.io

Note: Application configuration part-of corresponds to image configuration group.

Lightweight build tool integration

Lightweight integration with build tools refers to reading information from the build tool config without bringing the build tool itself into the classpath. The information read from the build tool is limited to:

  • name / artifactId
  • version
  • output file

For example in the case of maven it refers to parsing the pom.xml with DOM in order to fetch the artifactId and version.
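That kind of DOM lookup can be sketched as follows. This is a minimal illustration, not dekorate's actual reader; the inline pom string stands in for a real pom.xml, and nested occurrences of the same tag (e.g. dependency versions) are skipped by checking the parent node:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class PomInfo {

  // Returns the text of a direct child of <project> (artifactId, version, ...).
  static String projectElement(Element project, String tag) {
    NodeList nodes = project.getElementsByTagName(tag);
    for (int i = 0; i < nodes.getLength(); i++) {
      if (nodes.item(i).getParentNode() == project) { // skip nested occurrences
        return nodes.item(i).getTextContent().trim();
      }
    }
    return null;
  }

  // Parses a pom and returns "artifactId:version".
  public static String nameAndVersion(String pomXml) {
    try {
      Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
          .parse(new ByteArrayInputStream(pomXml.getBytes(StandardCharsets.UTF_8)));
      Element project = doc.getDocumentElement();
      return projectElement(project, "artifactId") + ":" + projectElement(project, "version");
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    String pom = "<project><artifactId>my-app</artifactId><version>1.1.0.Final</version>"
        + "<dependencies><dependency><version>9.9</version></dependency></dependencies></project>";
    System.out.println(nameAndVersion(pom)); // my-app:1.1.0.Final
  }
}
```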

Supported build tools:

  • maven
  • gradle
  • sbt
  • bazel

For all other build tools, the name and version need to be provided via application.properties:

dekorate.kubernetes.name=my-app
dekorate.kubernetes.version=1.1.0.Final

or the core annotations:

@KubernetesApplication(name = "my-app", version="1.1.0.Final")
public class Main {
}

or

@OpenshiftApplication(name = "my-app", version="1.1.0.Final")
public class Main {
}

and so on...

The information read from the build tool is added to all resources as labels (name, version). They are also used to name images, containers, deployments, services etc.

For example for a gradle app, with the following gradle.properties:

name = my-gradle-app
version = 1.0.0

The following deployment will be generated:

apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "my-gradle-app"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: "my-gradle-app"
      app.kubernetes.io/version: "1.0.0"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: "my-gradle-app"
        app.kubernetes.io/version: "1.0.0"
    spec:
      containers:
      - env:
        - name: "KUBERNETES_NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: "metadata.namespace"
        image: "default/my-gradle-app:1.0.0"
        imagePullPolicy: "IfNotPresent"
        name: "my-gradle-app"

The output file name may be used in certain cases to set the value of JAVA_APP_JAR, an environment variable that points to the built jar.

Adding extra ports and exposing them as services

To add extra ports to the container, you can add one or more @Port into your @KubernetesApplication:

import io.dekorate.kubernetes.annotation.Port;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication(ports = @Port(name = "web", containerPort = 8080))
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}

This will trigger the addition of a container port to the Deployment and will also trigger the generation of a Service resource.

Everything that can be defined using annotations, can also be defined using application.properties. To add a port using application.properties:

dekorate.kubernetes.ports[0].name=web
dekorate.kubernetes.ports[0].container-port=8080

NOTE: This doesn't need to be done explicitly; if the application framework is detected and supported, ports can be extracted from there (see below).

IMPORTANT: When mixing annotations and application.properties, the latter will always take precedence, overriding values that were defined using annotations. This allows users to define the configuration using annotations and externalize it to application.properties.

REMINDER: A complete reference on all the supported properties can be found in the configuration options guide.

Adding container environment variables

To add extra environment variables to the container, you can add one or more @Env into your @KubernetesApplication:

import io.dekorate.kubernetes.annotation.Env;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication(envVars = @Env(name = "key1", value = "var1"))
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}

Additional options are provided for adding environment variables from fields, config maps and secrets.

To add environment variables using application.properties:

dekorate.kubernetes.env-vars[0].name=key1
dekorate.kubernetes.env-vars[0].value=value1

Adding environment variables from ConfigMap

To add an environment variable that points to a ConfigMap property, you need to specify the configmap using the configmap property in the @Env annotation. The configmap key will be specified by the value property. So, in this case value has the meaning of value from key.

import io.dekorate.kubernetes.annotation.Env;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication(envVars = @Env(name = "key1", configmap="my-config", value = "key1"))
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}

To add an environment variable referencing a config map using application.properties:

dekorate.kubernetes.env-vars[0].name=key1
dekorate.kubernetes.env-vars[0].value=key1
dekorate.kubernetes.env-vars[0].config-map=my-config

Adding environment variables from Secrets

To add an environment variable that points to a Secret property, you need to specify the secret using the secret property in the @Env annotation. The secret key will be specified by the value property. So, in this case value has the meaning of value from key.

import io.dekorate.kubernetes.annotation.Env;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication(envVars = @Env(name = "key1", secret="my-secret", value = "key1"))
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}

To add an environment variable referencing a secret using application.properties:

dekorate.kubernetes.env-vars[0].name=key1
dekorate.kubernetes.env-vars[0].value=key1
dekorate.kubernetes.env-vars[0].secret=my-secret

Working with volumes and mounts

To define volumes and mounts for your application, you can use something like:

import io.dekorate.kubernetes.annotation.Mount;
import io.dekorate.kubernetes.annotation.PersistentVolumeClaimVolume;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication(pvcVolumes = @PersistentVolumeClaimVolume(volumeName = "mysql-volume", claimName = "mysql-pvc"),
  mounts = @Mount(name = "mysql-volume", path = "/var/lib/mysql")
)
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}

To define the same volume and mount via application.properties:

dekorate.kubernetes.pvc-volumes[0].volume-name=mysql-volume
dekorate.kubernetes.pvc-volumes[0].claim-name=mysql-pvc
dekorate.kubernetes.mounts[0].name=mysql-volume
dekorate.kubernetes.mounts[0].path=/var/lib/mysql

Currently, the supported annotations for specifying volumes are:

  • @PersistentVolumeClaimVolume
  • @SecretVolume
  • @ConfigMapVolume
  • @AwsElasticBlockStoreVolume
  • @AzureDiskVolume
  • @AzureFileVolume

Vcs Options

Most of the generated resources contain the kubernetes recommended annotations for specifying things like:

  • vcs url
  • commit id

These are extracted from the project's .git/config file (currently only git is supported). Out of the box, the url of the origin remote will be used verbatim.
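The lookup involved can be sketched as a simple scan of the INI-style config for the url of a remote. This is a simplified illustration; dekorate's actual parsing may differ:

```java
import java.util.Arrays;
import java.util.List;

public class GitConfig {

  // Scans INI-style .git/config lines for the url of the given remote.
  public static String remoteUrl(List<String> lines, String remote) {
    boolean inSection = false;
    for (String raw : lines) {
      String line = raw.trim();
      if (line.startsWith("[")) {
        // Track whether we are inside the [remote "<name>"] section.
        inSection = line.equals("[remote \"" + remote + "\"]");
      } else if (inSection && line.startsWith("url")) {
        return line.substring(line.indexOf('=') + 1).trim();
      }
    }
    return null;
  }

  public static void main(String[] args) {
    List<String> config = Arrays.asList(
        "[core]",
        "\trepositoryformatversion = 0",
        "[remote \"origin\"]",
        "\turl = git@github.com:dekorateio/dekorate.git",
        "\tfetch = +refs/heads/*:refs/remotes/origin/*");
    System.out.println(remoteUrl(config, "origin")); // git@github.com:dekorateio/dekorate.git
  }
}
```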

Specifying remote

In some cases users may prefer to use another remote. This can be done with the @VcsOptions annotation:

import io.dekorate.options.annotation.VcsOptions;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication
@VcsOptions(remote="myfork")
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}

In the example above myfork will be used as the remote. So, generated resources will be annotated with the url of the myfork remote.

For users that prefer using application.properties:

dekorate.vcs.remote=myfork
Converting vcs urls to https

The vcs related annotations are mostly used by tools. For public repositories it's often simpler for tools to access the repository anonymously. This is possible when using git over https, but not when using git over ssh. So, there are cases where users would rather develop using git+ssh but have third-party tools use https instead. To force dekorate to convert vcs urls to https, one can use the httpsPreferred parameter of @VcsOptions. Or using properties:

dekorate.vcs.https-preferred=true
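As a rough illustration of the kind of rewrite httpsPreferred implies (this is not dekorate's exact implementation; stripping the .git suffix is an assumption here):

```java
public class VcsUrls {

  // Converts scp-style and ssh git urls to https equivalents.
  public static String toHttps(String url) {
    String result = url;
    if (result.startsWith("git@")) {
      // git@host:org/repo.git -> https://host/org/repo.git
      result = "https://" + result.substring("git@".length()).replaceFirst(":", "/");
    } else if (result.startsWith("ssh://git@")) {
      result = "https://" + result.substring("ssh://git@".length());
    }
    // Dropping the .git suffix is an assumption made for this sketch.
    return result.endsWith(".git") ? result.substring(0, result.length() - 4) : result;
  }

  public static void main(String[] args) {
    System.out.println(toHttps("git@github.com:dekorateio/dekorate.git"));
    // https://github.com/dekorateio/dekorate
  }
}
```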

Jvm Options

It's common to pass JVM options to the application container in the manifests using the JAVA_OPTS or JAVA_OPTIONS environment variable. This is error prone, as it's difficult to remember all options by heart, and the worst part is that you usually don't realize a mistake until it's TOO late.

Dekorate provides a way to manage those options using the @JvmOptions annotation, which is included in the options-annotations module.

import io.dekorate.options.annotation.JvmOptions;
import io.dekorate.options.annotation.GarbageCollector;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication
@JvmOptions(server=true, xmx=1024, preferIpv4Stack=true, gc=GarbageCollector.SerialGC)
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}

or via application.properties:

dekorate.jvm.server=true
dekorate.jvm.xmx=1024
dekorate.jvm.prefer-ipv4-stack=true
dekorate.jvm.gc=GarbageCollector.SerialGC
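The configuration above roughly corresponds to a JAVA_OPTS value like the one assembled below. Flag spellings follow standard HotSpot options; this is a sketch of the translation, not dekorate's actual generator:

```java
import java.util.ArrayList;
import java.util.List;

public class JvmOptionsDemo {

  // Assembles a JAVA_OPTS-style string from the option values.
  public static String javaOpts(boolean server, int xmxMb, boolean preferIpv4, String gc) {
    List<String> opts = new ArrayList<>();
    if (server) {
      opts.add("-server");
    }
    if (xmxMb > 0) {
      opts.add("-Xmx" + xmxMb + "m");
    }
    if (preferIpv4) {
      opts.add("-Djava.net.preferIPv4Stack=true");
    }
    if (gc != null) {
      opts.add("-XX:+Use" + gc); // e.g. SerialGC -> -XX:+UseSerialGC
    }
    return String.join(" ", opts);
  }

  public static void main(String[] args) {
    System.out.println(javaOpts(true, 1024, true, "SerialGC"));
    // -server -Xmx1024m -Djava.net.preferIPv4Stack=true -XX:+UseSerialGC
  }
}
```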

This module can be added to the project using:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>option-annotations</artifactId>
  <version>4.1.3</version>
</dependency>

Note: The module is included in all starters.

Container Resources

Kubernetes allows setting rules about container resources:

  • Request CPU: The amount of CPU the container needs.
  • Request Memory: The amount of memory the container needs.
  • Limit CPU: The maximum amount of CPU the container will get.
  • Limit Memory: The maximum amount of memory the container will get.

More information: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers

Dekorate supports these options for both the application container and / or any of the side car containers.

Application Container resources

Using annotations

There are parameters available for @KubernetesApplication, @KnativeApplication and @OpenshiftApplication.

Using the @KubernetesApplication one could set the resources like:

import io.dekorate.kubernetes.annotation.ResourceRequirements;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication(requestResources=@ResourceRequirements(memory="64Mi", cpu="1m"), limitResources=@ResourceRequirements(memory="256Mi", cpu="5m"))
public class Main {
}

In the same spirit it works for @KnativeApplication and @OpenshiftApplication.

Using properties

Users that prefer to configure dekorate using property configuration can use the following options:

dekorate.kubernetes.request-resources.cpu=1m
dekorate.kubernetes.request-resources.memory=64Mi
dekorate.kubernetes.limit-resources.cpu=5m
dekorate.kubernetes.limit-resources.memory=256Mi

It works in a similar manner for openshift:

dekorate.openshift.request-resources.cpu=1m
dekorate.openshift.request-resources.memory=64Mi
dekorate.openshift.limit-resources.cpu=5m
dekorate.openshift.limit-resources.memory=256Mi

Init Containers

If for any reason the application requires the use of init containers, they can be easily defined using the initContainers property, as demonstrated below.

import io.dekorate.kubernetes.annotation.Container;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication(initContainers = @Container(image="foo/bar:latest", command="foo"))
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}

or via application.properties:

dekorate.kubernetes.init-containers[0].image=foo/bar:latest
dekorate.kubernetes.init-containers[0].command=foo

The @Container supports the following fields:

  • Image
  • Image Pull Policy
  • Commands
  • Arguments
  • Environment Variables
  • Mounts
  • Probes

Sidecars

Similarly to init containers, support for sidecars is also provided using the sidecars property. For example:

import io.dekorate.kubernetes.annotation.Container;
import io.dekorate.kubernetes.annotation.KubernetesApplication;

@KubernetesApplication(sidecars = @Container(image="jaegertracing/jaeger-agent",
                                             args="--collector.host-port=jaeger-collector.jaeger-infra.svc:14267"))
public class Main {

  public static void main(String[] args) {
    //Your code goes here
  }
}

or via application.properties:

dekorate.kubernetes.sidecars[0].image=jaegertracing/jaeger-agent
dekorate.kubernetes.sidecars[0].args=--collector.host-port=jaeger-collector.jaeger-infra.svc:14267

As in the case of init containers the @Container supports the following fields:

  • Image
  • Image Pull Policy
  • Commands
  • Arguments
  • Environment Variables
  • Mounts
  • Probes

Adding the kubernetes annotation processor to the classpath

This module can be added to the project using:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>kubernetes-annotations</artifactId>
  <version>4.1.3</version>
</dependency>

OpenShift

@OpenshiftApplication works exactly like @KubernetesApplication, but will generate resources in a file named openshift.yml / openshift.json instead. Also, instead of creating a Deployment it will create a DeploymentConfig.

NOTE: A project can use both @KubernetesApplication and @OpenshiftApplication. If both the kubernetes and OpenShift annotation processors are present both kubernetes and OpenShift resources will be generated.

Adding the OpenShift annotation processor to the classpath

This module can be added to the project using:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>openshift-annotations</artifactId>
  <version>4.1.3</version>
</dependency>

Integrating with S2i

Out of the box resources for s2i will be generated.

  • ImageStream
    • builder
    • target
  • BuildConfig

Here's an example:

import io.dekorate.openshift.annotation.OpenshiftApplication;

@OpenshiftApplication(name = "doc-example")
public class Main {

    public static void main(String[] args) {
      //Your code goes here
    }
}

The same can be expressed via application.properties:

dekorate.openshift.name=doc-example

IMPORTANT: All examples of application.properties demonstrated in the Kubernetes section can be applied here, by replacing the prefix dekorate.kubernetes with dekorate.openshift.

The generated BuildConfig will be a binary config. The actual build can be triggered from the command line with something like:

oc start-build doc-example --from-dir=./target --follow

NOTE: In the example above we explicitly set a name for our application and referenced that name from the cli. If the name was implicitly created, the user would have to figure the name out before triggering the build. This could be done either with oc get bc or by knowing the conventions used to read names from build tool config (e.g. for maven, the artifactId).

related examples

Tekton

Dekorate supports generating tekton pipelines. Since Dekorate knows how your project is built, packaged into containers and deployed, converting that knowledge into a pipeline comes naturally.

When the tekton module is added to the project:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>tekton-annotations</artifactId>
  <version>4.1.3</version>
</dependency>

Two sets of resources will be generated, each representing a different configuration style the user can choose from:

  • Pipeline based
    • tekton-pipeline.yml
    • tekton-pipeline-run.yml
    • tekton-pipeline.json
    • tekton-pipeline-run.json
  • Task based
    • tekton-task.yml
    • tekton-task-run.yml
    • tekton-task.json
    • tekton-task-run.json

Pipeline

This set of resources contains:

  • Pipeline
  • PipelineResource (git, output image)
  • PipelineRun
  • Task (build, package and push, deploy)
  • RBAC resources

These are the building blocks of a Tekton pipeline that grabs your project from scm, builds and containerizes the project (in cluster) and finally deploys it.

Task

This set of resources provides the same functionality as above, but everything is collapsed into a single task (for usability reasons). In detail it contains:

  • PipelineResource (git, output image)
  • Task
  • TaskRun
  • RBAC resources

Pipeline vs Task

If unsure which style to pick, note that the task style has fewer configuration requirements and is thus easier to begin with. The pipeline style is easier to slice and dice once you are more comfortable with tekton.

Regardless of the choice, Dekorate provides a rich set of configuration options to make using tekton as easy as it gets.

Tekton Configuration

Git Resource

The generated tasks and pipelines assume the project is under version control, and more specifically git. So, in order to run the pipeline or the task, a PipelineResource of type git is required. If the project is added to git, the resource will be generated for you. If for any reason the use of an external resource is preferred, then it needs to be configured, like:

dekorate.tekton.external-git-pipeline-resource=<<the name of the resource goes here>>
Builder Image

Both the pipeline and the task based resources include steps that perform a build of the project. Dekorate tries to identify a suitable builder image for the project. Selection is based on the build tool, jdk version, jdk flavor and build tool version (in that order). At the moment only maven and gradle are supported.

You can customize the build task by specifying:

  • custom builder image: dekorate.tekton.builder-image
  • custom build command: dekorate.tekton.builder-command
  • custom build arguments: dekorate.tekton.builder-arguments
Configuring a Workspace PVC

One of the main differences between the two styles of configuration is that Pipelines require a PersistentVolumeClaim in order to share the workspace between Tasks. On the contrary, when all steps are part of a single big fat Task (which is backed by a Pod), an EmptyDir volume will suffice.

Out of the box, for the pipeline style resources a PersistentVolumeClaim named after the application will be generated and used.

The generated pvc can be customized using the following properties:

  • dekorate.tekton.source-workspace-claim.size (defaults to 1Gi)
  • dekorate.tekton.source-workspace-claim.storage-class (defaults to standard)

The option to provide an existing pvc (by name) instead of generating one is also provided, using dekorate.tekton.source-workspace-claim.

Configuring the Docker registry for Tekton

The generated Pipeline / Task includes steps for building a container image and pushing it to a registry.

The registry can be configured using dekorate.docker.registry as is done for the rest of the resources.

For the push to succeed credentials for the registry are required. The user is able to:

  • Provide own Secret with registry credentials
  • Provide username and password
  • Upload local .docker/config.json

To provide an existing secret for the job (e.g. my-secret):

dekorate.tekton.image-builder-secret=my-secret

To provide username and password:

dekorate.tekton.registry-username=myusername
dekorate.tekton.registry-password=mypassword

If none of the above is provided and a .docker/config.json exists, it can be used if explicitly requested:

dekorate.tekton.use-local-docker-config-json=true

Knative

Dekorate also supports generating manifests for knative. To make use of this feature you need to add:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>knative-annotations</artifactId>
  <version>4.1.3</version>
</dependency>

This module provides the @KnativeApplication annotation, which works exactly like @KubernetesApplication, but will generate resources in a file named knative.yml / knative.json instead. Also, instead of creating a Deployment it will create a knative serving Service.

Cluster local services

Knative exposes services out of the box. You can use the @KnativeApplication(expose=false) or the property dekorate.knative.expose set to false, in order to mark a service as cluster local.

Autoscaling

Dekorate provides access to both revision and global autoscaling configuration (see Knative Autoscaling).

Global autoscaling configuration is supported via configmaps (KnativeServing is not supported yet).

Class

To set the autoscaler class for the target revision:

dekorate.knative.revision-auto-scaling.autoscaler-class=hpa

The allowed values are:

  • hpa: Horizontal Pod Autoscaler
  • kpa: Knative Pod Autoscaler (default)

In the same spirit the global autoscaler class can be set using:

dekorate.knative.global-auto-scaling.autoscaler-class=hpa
Metric

To select the autoscaling metric:

dekorate.knative.revision-auto-scaling.metric=rps

The allowed values are:

  • concurrency: Concurrency (default)
  • rps: Requests per second
  • cpu: CPU (requires hpa revision autoscaler class).
Target

Metric specifies the metric kind. To specify the target value the autoscaler should aim to maintain, the target can be used:

dekorate.knative.revision-auto-scaling.target=100

There is no option to set a generic global target. Instead specific keys per metric kind are provided. See below:

Requests per second

To set the requests per second:

dekorate.knative.global-auto-scaling.requests-per-second=100
Target utilization

To set the target utilization:

dekorate.knative.global-auto-scaling.target-utilization-percentage=100

Framework integration

Framework integration modules are provided so that dekorate is able to detect framework annotations and adapt to the framework (e.g. expose ports).

The frameworks supported so far:

  • Spring Boot
  • Quarkus
  • Thorntail

Spring Boot

With spring boot, we suggest you start with one of the provided starters:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>kubernetes-spring-starter</artifactId>
  <version>4.1.3</version>
</dependency>

Or if you are on OpenShift:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>openshift-spring-starter</artifactId>
  <version>4.1.3</version>
</dependency>

Automatic configuration

For Spring Boot applications, dekorate will automatically detect known annotations and align the generated manifests accordingly.

Exposing services

Dekorate tunes the generated manifest based on the presence of web annotations in the project:

  • Automatic service expose
  • Application path detection

When known web annotations are available in the project, dekorate will automatically detect them and expose the http port as a Service. That service will also be exposed as an Ingress or Route (in the case of OpenShift) if the expose option is set to true.

Kubernetes
@KubernetesApplication(expose=true)

An alternative way of configuration is via application properties:

dekorate.kubernetes.ingress.expose=true
Openshift
@OpenshiftApplication(route=@Route(expose=true))

An alternative way of configuration is via application properties:

dekorate.openshift.route.expose=true

There are cases where the Ingress or Route host needs to be customized. This is done using the host parameter either via annotation or property configuration.

Kubernetes
@KubernetesApplication(expose=true, host="foo.bar.com")

An alternative way of configuration is via application properties:

dekorate.kubernetes.ingress.expose=true
dekorate.kubernetes.ingress.host=foo.bar.com
Openshift
@OpenshiftApplication(route = @Route(expose=true, host="foo.bar.com"))

An alternative way of configuration is via application properties:

dekorate.openshift.route.expose=true
dekorate.openshift.route.host=foo.bar.com
RequestMapping

When one RequestMapping annotation is added on a Controller, or multiple RequestMapping annotations that share a common path are added on multiple Controller classes, dekorate will detect the shortest common path and configure it so that it's available on the exposed Ingress or Route.
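The shortest-common-path detection can be sketched as a segment-wise common prefix over the controller paths. This is an illustration of the idea, not dekorate's actual code:

```java
public class CommonPath {

  // Returns the longest segment-wise common prefix of the given paths.
  public static String commonPath(String... paths) {
    if (paths.length == 0) {
      return "/";
    }
    String[] base = paths[0].split("/");
    int common = base.length;
    for (String path : paths) {
      String[] segments = path.split("/");
      int i = 0;
      while (i < common && i < segments.length && segments[i].equals(base[i])) {
        i++;
      }
      common = i;
    }
    StringBuilder sb = new StringBuilder();
    // Segment 0 is the empty string before the leading '/', so start at 1.
    for (int i = 1; i < common; i++) {
      sb.append('/').append(base[i]);
    }
    return sb.length() == 0 ? "/" : sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(commonPath("/api/v1/users", "/api/v1/orders")); // /api/v1
  }
}
```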

Annotation less configuration

It is possible to completely bypass annotations by utilizing already-existing, framework-specific metadata. This mode is currently only supported for Spring Boot applications (i.e. at least one project class is annotated with @SpringBootApplication).

So, for Spring Boot applications, all you need to do is add one of the starters (io.dekorate:kubernetes-spring-starter or io.dekorate:openshift-spring-starter) to the classpath. No need to specify an additional annotation. This provides the fastest way to get started using dekorate with Spring Boot.

To customize the generated manifests, you can add dekorate properties to your application.yml or application.properties descriptors, or even use annotations along with application.yml / application.properties. Note, though, that if you define dekorate properties, the annotation configuration will be replaced by the one specified using properties.

Dekorate looks for supported configuration as follows in increasing order of priority, meaning any configuration found in an application descriptor will override any existing annotation-specified configuration:

  1. Annotations
  2. application.properties
  3. application.yaml
  4. application.yml

Then, it will use the properties file depending on the active Dekorate dependencies in use. For example, if we're using the dependency io.dekorate:kubernetes-annotations, then:

  1. application-kubernetes.properties
  2. application-kubernetes.yaml
  3. application-kubernetes.yml

Note that only the openshift, kubernetes and knative modules provide additional properties files.

Then, for Spring Boot applications, it will also take into account the Spring property spring.profiles.active if set:

  1. application-${spring.profiles.active}.properties
  2. application-${spring.profiles.active}.yaml
  3. application-${spring.profiles.active}.yml

Finally, if the Dekorate profile property dekorate.options.properties-profile is set:

  1. application-${dekorate.options.properties-profile}.properties
  2. application-${dekorate.options.properties-profile}.yaml
  3. application-${dekorate.options.properties-profile}.yml

It's important to repeat that the override happens by fully replacing any lower-priority configuration, not by merging the existing and higher-priority values. This means that if you choose to override the annotation-specified configuration, you need to repeat all the desired configuration in the annotation-less configuration.
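For instance, reusing the labels configuration shown later in this document: if an annotation defines labels and application.properties also defines labels, only the list from the properties file survives — nothing from the annotation is merged in:

```properties
# Replaces the whole labels list from the annotation; nothing is merged.
dekorate.kubernetes.labels[0].key=foo
dekorate.kubernetes.labels[0].value=bar
```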

Here's the full list of supported configuration options. Pay special attention to the paths of these properties: they match the annotation properties, not what ends up in the manifest. In other words, the annotation-less configuration follows the model defined by the annotations, so the properties (or YAML file) provide values for the annotation fields, even where that differs from the resulting manifest layout. Always refer to the configuration options guide if in doubt.

Generated resources when not using annotations

When no annotations are used, the kind of resources to be generated is determined by the dekorate artifacts found in the classpath.

File                | Required Dependency
kubernetes.json/yml | io.dekorate:kubernetes-annotations
openshift.json/yml  | io.dekorate:openshift-annotations

Note that the starter modules for kubernetes and openshift transitively add kubernetes-annotations and openshift-annotations respectively.

Quarkus

Quarkus provides a rich set of extensions, including one for kubernetes. The kubernetes extension internally uses dekorate for generating and customizing manifests.

The extension can be added to any quarkus project:

mvn quarkus:add-extension -Dextensions="io.quarkus:quarkus-kubernetes"

After the project compilation the generated manifests will be available under: target/kubernetes/.

At the moment this extension handles ports, health checks etc, with zero configuration from the user side.

It's important to note that, by design, this extension will NOT use the dekorate annotations for customizing the generated manifests.

For more information please check: the extension docs.

Thorntail

With Thorntail, it is recommended to add a dependency on one of the provided starters:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>kubernetes-thorntail-starter</artifactId>
  <version>4.1.3</version>
  <scope>provided</scope>
</dependency>

Or, if you use OpenShift:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>openshift-thorntail-starter</artifactId>
  <version>4.1.3</version>
  <scope>provided</scope>
</dependency>

Then, you can use the annotations described above, such as @KubernetesApplication, @OpenShiftApplication, etc.

Note that the Thorntail annotation processor reads the thorntail.http.port configuration from the usual project-defaults.yml. It doesn't read any other project-*.yml profiles.
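As a minimal illustration, the port would be picked up from a project-defaults.yml like the following (8080 is just an example value; the file typically lives under src/main/resources):

```yaml
# project-defaults.yml
thorntail:
  http:
    port: 8080
```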

Experimental features

Apart from the core feature, which is resource generation, there are a couple of experimental features that add to the developer experience.

These features have to do with things like building, deploying and testing.

Building and Deploying?

Dekorate does not generate Dockerfiles, nor does it provide internal support for performing docker or s2i builds. It does, however, allow the user to hook external tools (e.g. docker or oc) to trigger container image builds at the end of compilation.

So, at the moment, the following hooks are provided as experimental features:

  • docker build hook (requires docker binary, triggered with -Ddekorate.build=true)
  • docker push hook (requires docker binary, triggered with -Ddekorate.push=true)
  • OpenShift s2i build hook (requires oc binary, triggered with -Ddekorate.deploy=true)
  • KiND docker images loading hook (requires kind, triggered with -Ddekorate.kind.autoload=true)

Docker build hook

This hook will just trigger a docker build, using an existing Dockerfile at the root of the project. It will not generate or customize the docker build in any way.

To enable the docker build hook you need:

  • a Dockerfile in the project/module root
  • the docker binary configured to point to the docker daemon of your kubernetes environment.

To trigger the hook, you need to pass -Ddekorate.build=true as an argument to the build, for example:

mvn clean install -Ddekorate.build=true

or if you are using gradle:

gradle build -Ddekorate.build=true   

When push is enabled, the registry can be specified as part of the annotation, or via system properties. Here's an example via annotation configuration:

@DockerBuild(registry="quay.io")
public class Main {
}

Here's how it can be done via build properties (system properties):

mvn clean install -Ddekorate.docker.registry=quay.io -Ddekorate.push=true    

Note: Dekorate will NOT push images on its own. It will delegate to the docker binary. So the user needs to make sure beforehand they are logged in and have taken all necessary actions for a docker push to work.

S2i build hook

This hook will just trigger an s2i binary build, passing the output folder as an input to the build.

To enable the s2i build hook you need:

  • the openshift-annotations module (already included in all OpenShift starter modules)
  • the oc binary configured to point to your OpenShift environment.

Finally, to trigger the hook, you need to pass -Ddekorate.build=true as an argument to the build, for example:

mvn clean install -Ddekorate.build=true

or if you are using gradle:

gradle build -Ddekorate.build=true  

Jib build hook

This hook will just trigger a jib build in order to perform a container build.

In order to use it, one needs to add the jib-annotations dependency.

<dependency>
    <groupId>io.dekorate</groupId>
    <artifactId>jib-annotations</artifactId>
    <version>4.1.3</version>
</dependency>

Without the need of any additional configuration, one can trigger the hook by passing -Ddekorate.build=true as an argument to the build, for example:

mvn clean install -Ddekorate.build=true

or if you are using gradle:

gradle build -Ddekorate.build=true

Jib modes

At the moment Jib allows you to create and push images in two different ways:

  • using the docker daemon
  • dockerless

Performing a build through the docker daemon is currently slightly safer, and is thus used as the default option. You can easily switch to dockerless mode by setting @JibBuild(dockerBuild=false) or, if using properties configuration, dekorate.jib.docker-build=false.

In dockerless mode, an openjdk-8 image is used as the base image. The image can be changed through the from property of the @JibBuild annotation, or dekorate.jib.from when using property configuration.
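Put together, switching to dockerless mode with a custom base image could look like this in a properties file (the image name here is just an example):

```properties
dekorate.jib.docker-build=false
dekorate.jib.from=adoptopenjdk/openjdk11
```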

related examples

Junit5 extensions

Dekorate provides two junit5 extensions for:

  • Kubernetes
  • OpenShift

These extensions are dekorate aware and can read generated resources and configuration, in order to manage end to end tests for the annotated applications.

Features

  • Environment conditions
  • Container builds
  • Apply generated manifests to test environment
  • Inject test with:
    • client
    • application pod

Kubernetes extension for JUnit5

The kubernetes extension can be used by adding the following dependency:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>kubernetes-junit</artifactId>
  <version>4.1.3</version>
</dependency>

This dependency gives access to @KubernetesIntegrationTest which is what enables the extension for your tests.

By adding the annotation to your test class the following things will happen:

  1. The extension will check if a kubernetes cluster is available (if not tests will be skipped).
  2. If @DockerBuild is present in the project, a docker build will be triggered.
  3. All generated manifests will be applied.
  4. Will wait until applied resources are ready.
  5. Dependencies will be injected (e.g. KubernetesClient, Pod etc)
  6. Test will run
  7. Applied resources will be removed.

Dependency injection

Supported items for injection:

  • KubernetesClient
  • Pod (the application pod)
  • KubernetesList (the list with all generated resources)

To inject one of these, you need a field in the code annotated with @Inject.

For example:

@Inject
KubernetesClient client;

When injecting a Pod, we usually need to specify the pod name. Since the pod name is not known in advance, we can use the deployment name instead. If the deployment is named hello-world, then you can do something like:

@Inject
@Named("hello-world")
Pod pod;

Note: It is highly recommended to also add maven-failsafe-plugin configuration so that integration tests only run in the integration-test phase. This is important since in the test phase the application is not packaged. Here's an example of how you can configure the project:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>${version.maven-failsafe-plugin}</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
      <phase>integration-test</phase>
      <configuration>
        <includes>
          <include>**/*IT.class</include>
        </includes>
      </configuration>
    </execution>
  </executions>
</plugin>

related examples

OpenShift extension for JUnit5

Similarly to the kubernetes junit extension, you can use the extension for OpenShift by adding @OpenshiftIntegrationTest. To use it you need to add:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>openshift-junit</artifactId>
  <version>4.1.3</version>
</dependency>

By adding the annotation to your test class the following things will happen:

  1. The extension will check if a kubernetes cluster is available (if not tests will be skipped).
  2. A docker build will be triggered.
  3. All generated manifests will be applied.
  4. Will wait until applied resources are ready.
  5. Dependencies will be injected (e.g. KubernetesClient, Pod etc)
  6. Test will run
  7. Applied resources will be removed.

related examples

Configuration externalization

It is often desired to externalize configuration in configuration files, instead of hard coding things inside annotations.

Dekorate provides the ability to externalize configuration to configuration files (properties or yml). This can be done to either override the configuration values provided by annotations, or to use dekorate without annotations.

For supported frameworks, this is done out of the box, as long as the corresponding framework jar is present. The frameworks supporting this feature are:

  • spring boot
  • thorntail

For these frameworks, the use of annotations is optional, as everything may be configured via configuration files. Each annotation may be expressed using properties or yaml using the following steps.

  • Each annotation property is expressed using a key/value pair.
  • All keys start with the dekorate.<annotation kind>. prefix, where annotation kind is the annotation class name in lowercase, stripped of the Application suffix.
  • The remaining part of key is the annotation property name.
  • For nesting properties the key is also nested following the previous rule.
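The naming rules above can be sketched with a tiny illustrative helper (this is not part of the dekorate API, just a demonstration of how the prefix is derived):

```java
// Sketch: derive the dekorate property prefix from an annotation's simple
// class name, following the rules above (illustrative only).
public class PrefixDemo {

    public static String prefixFor(String annotationSimpleName) {
        // Strip the "Application" suffix if present, then lowercase.
        String kind = annotationSimpleName.endsWith("Application")
            ? annotationSimpleName.substring(0,
                  annotationSimpleName.length() - "Application".length())
            : annotationSimpleName;
        return "dekorate." + kind.toLowerCase() + ".";
    }

    public static void main(String[] args) {
        System.out.println(prefixFor("KubernetesApplication")); // dekorate.kubernetes.
        System.out.println(prefixFor("OpenshiftApplication"));  // dekorate.openshift.
    }
}
```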

For all other frameworks or generic java applications, this can be done with the use of the @Dekorate annotation. The presence of this annotation will trigger the dekorate processes. Dekorate will then look for application.properties or application.yml resources. If present, they will be loaded. If not, the default configuration will be used.

Examples:

The following annotation configuration:

@KubernetesApplication(labels=@Label(key="foo", value="bar"))
public class Main {
}

Can be expressed using properties:

dekorate.kubernetes.labels[0].key=foo
dekorate.kubernetes.labels[0].value=bar

or using yaml:

dekorate:
  kubernetes:
    labels:
      - key: foo
        value: bar

In the examples above, dekorate is the prefix that we use to namespace the dekorate configuration. kubernetes defines the annotation kind (it's @KubernetesApplication in lower case and stripped of the Application suffix). labels, key and value are the property names; since Label is nested under @KubernetesApplication, so are the properties.

The exact same example for OpenShift (where @OpenshiftApplication is used instead) would be:

@OpenshiftApplication(labels=@Label(key="foo", value="bar"))
public class Main {
}

Can be expressed using properties:

dekorate.openshift.labels[0].key=foo
dekorate.openshift.labels[0].value=bar

or using yaml:

dekorate:
  openshift:
    labels:
      - key: foo
        value: bar

Spring Boot

For spring boot, dekorate will look for configuration under:

  • application.properties
  • application.yml
  • application.yaml

Also, it will look for the same files under the kubernetes profile:

  • application-kubernetes.properties
  • application-kubernetes.yml
  • application-kubernetes.yaml

Vert.x & generic Java

For generic java, if the @Dekorate annotation is present, dekorate will look for configuration under:

  • application.properties
  • application.yml

These files can be overridden using the configFiles property on the @Dekorate annotation.

For example:

A generic java application annotated with @Dekorate:

    import io.dekorate.annotation.Dekorate;
    
    @Dekorate
    public class Main {
        //do stuff
    }

During compilation kubernetes, OpenShift or both resources will be generated (depending on what dekorate jars are present in the classpath). These resources can be customized using properties:

dekorate.openshift.labels[0].key=foo
dekorate.openshift.labels[0].value=bar

or using yaml:

dekorate:
  openshift:
    labels:
      - key: foo
        value: bar
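The default files can also be redirected: the configFiles property on @Dekorate, mentioned above, lets you point dekorate at a custom file. A sketch, assuming configFiles accepts an array of resource names (the file name here is hypothetical):

```java
import io.dekorate.annotation.Dekorate;

// Load dekorate configuration from a custom file instead of the default
// application.properties / application.yml (hypothetical file name).
@Dekorate(configFiles = {"dekorate-config.properties"})
public class Main {
    //do stuff
}
```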

related examples

Testing Multi-Module projects

The Dekorate testing framework supports multi-module projects either using the OpenShift JUnit 5 extension or using the Kubernetes JUnit 5 extension.

A multi-module project consists of multiple modules, all using Dekorate to generate the cluster manifests, plus a tests module that runs the integration tests:

multi-module-parent
└───module-1
└───module-2
└───tests

In the tests module, we can now specify the location of the additional modules via the field additionalModules which is part of the @OpenshiftIntegrationTest and @KubernetesIntegrationTest annotations:

@OpenshiftIntegrationTest(additionalModules = { "../module-1", "../module-2" })
class SpringBootForMultipleAppsOnOpenshiftIT {

  @Inject
  private KubernetesClient client;

  @Inject
  @Named("module-1")
  Pod podForModuleOne;

  @Inject
  @Named("module-2")
  Pod podForModuleTwo;

  // ...
}

Doing so, the test framework will locate the Dekorate manifests that have been previously generated to build and deploy the application for each integration test.

related examples

Prometheus annotations

The prometheus annotation processor provides annotations for generating prometheus related resources. In particular, it can generate ServiceMonitor resources, which are used by the Prometheus Operator to configure prometheus to collect metrics from the target application.

This is done with the use of @EnableServiceMonitor annotation.

Here's an example:

import io.dekorate.kubernetes.annotation.KubernetesApplication;
import io.dekorate.prometheus.annotation.EnableServiceMonitor;

@KubernetesApplication
@EnableServiceMonitor(port = "http", path="/prometheus", interval=20)
public class Main {
    public static void main(String[] args) {
      //Your code goes here
    }
}

The annotation processor will automatically configure the required selector and generate the ServiceMonitor. Note: Some framework integration modules may further decorate the ServiceMonitor with framework specific configuration. For example, the Spring Boot module will decorate the monitor with the Spring Boot specific path, which is /actuator/prometheus.

related examples

Jaeger annotations

The jaeger annotation processor provides annotations for injecting the jaeger-agent into the application pod.

Most of the work is done with the use of the @EnableJaegerAgent annotation.

Using the Jaeger Operator

When the jaeger operator is available, you can set the operatorEnabled property to true. The annotation processor will automatically add the required annotations to the generated deployment, so that the jaeger operator can inject the jaeger-agent.

Here's an example:

import io.dekorate.kubernetes.annotation.KubernetesApplication;
import io.dekorate.jaeger.annotation.EnableJaegerAgent;

@KubernetesApplication
@EnableJaegerAgent(operatorEnabled = true)
public class Main {
    public static void main(String[] args) {
      //Your code goes here
    }
}
Manually injecting the agent sidecar

For cases where the operator is not present, you can use @EnableJaegerAgent to manually configure the sidecar.

import io.dekorate.kubernetes.annotation.KubernetesApplication;
import io.dekorate.jaeger.annotation.EnableJaegerAgent;

@KubernetesApplication
@EnableJaegerAgent
public class Main {
    public static void main(String[] args) {
      //Your code goes here
    }
}

related examples

ServiceBinding CRD

The Service Binding Operator enables application developers to bind services that are backed by Kubernetes operators to an application deployed in kubernetes, without having to perform manual configuration. Dekorate supports the generation of the ServiceBinding CR. The generation is triggered by annotating one of your classes with the @ServiceBinding annotation and by adding the dependency below to the project. When the project gets compiled, the annotation will trigger the generation of a ServiceBinding CR in both json and yml formats under target/classes/META-INF/dekorate. The name of the ServiceBinding CR is the applicationName + "-binding"; for example, if the application name is sample-app, the binding name would be sample-app-binding.

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>servicebinding-annotations</artifactId>
</dependency>

Here is a simple example of using ServiceBinding annotations in a SpringBoot application.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import io.dekorate.servicebinding.annotation.Service;
import io.dekorate.servicebinding.annotation.ServiceBinding;
import io.dekorate.servicebinding.annotation.BindingPath;
@ServiceBinding(
  services = {
    @Service(group = "postgresql.dev", name = "demo-database", kind = "Database", version = "v1alpha1", id = "postgresDB") })
@SpringBootApplication
public class Main {
  public static void main(String[] args) {
    SpringApplication.run(Main.class, args);
  }
}

If you want to configure the ServiceBinding CR using system properties, you can do it in application.properties. The ServiceBinding CR can be customized either via annotation parameters or via system properties. The parameter values provided via annotations can be overridden by configuring the ServiceBinding CR in application.properties.

dekorate.servicebinding.services[0].name=demo-database
dekorate.servicebinding.services[0].group=postgresql.dev
dekorate.servicebinding.services[0].kind=Database
dekorate.servicebinding.services[0].id=postgresDB

Generated ServiceBinding CR would look something like this:

apiVersion: operators.coreos.com/v1beta1
kind: ServiceBinding
metadata:
  name: servicebinding-binding-example
spec:
  application:
    group: apps
    resource: Deployment
    name: servicebinding-example
    version: v1
  services:
  - group: postgresql.dev
    kind: Database
    name: demo-database
    version: v1alpha1
    id: postgresDB
  detectBindingResources: false
  bindAsFiles: false

If the application's bindingPath needs to be configured, the @BindingPath annotation can be used directly under the @ServiceBinding annotation. For example:

@ServiceBinding(
  bindingPath = @BindingPath(containerPath="spec.template.spec.containers"),
  services = {
    @Service(group = "postgresql.dev", name = "demo-database", kind = "Database", version = "v1alpha1", id = "postgresDB") }, envVarPrefix = "postgresql")
@SpringBootApplication

Note: ServiceBinding annotations are already usable, though still highly experimental. The Service Binding operator is still in flux and may change in the near future.

Cert-Manager

Dekorate supports generating X.509 certificates with the help of the Certificate and Issuer CRD resources handled by Cert-Manager. When these CRD resources are deployed on the cluster, Cert-Manager will process them in order to populate a Secret containing, for example, a CA certificate, private key, server certificate, or Java keystores.

To let Dekorate generate the certificate and issuer resources, simply declare the following dependency in your pom file:

<dependency>
  <groupId>io.dekorate</groupId>
  <artifactId>certmanager-annotations</artifactId>
</dependency>

And provide the certificate configuration. The minimal information that Dekorate needs is:

  • secretName : the name of the Kubernetes Secret resource that will include the Cert-Manager generated files.
  • the Issuer that represents the certificate authority (CA). See all the supported options in the Issuer section.

To know more about how to use the Cert-Manager extension, please go to the Cert-Manager Dekorate documentation.

related examples

External generator integration

No matter how good a generator/scaffolding tool is, it's often desirable to handcraft part of the output. Other times it might be desirable to combine different tools (e.g. to generate the manifests using fmp but customize them via dekorate annotations).

Whatever the reason, dekorate supports working on existing resources and decorating them based on the provided annotation configuration. This is as simple as letting dekorate know where to read the existing manifests and where to store the generated ones, by adding the @GeneratorOptions annotation.

Integration with Fabric8 Maven Plugin.

The fabric8-maven-plugin can be used to package applications for kubernetes and OpenShift. It also supports generating manifests. A user might choose to build images using fmp, but customize them using dekorate annotations instead of xml.

An example could be to expose an additional port:

This can be done by configuring dekorate to read the fmp-generated manifests from META-INF/fabric8, which is where fmp stores them, and save them back there once decoration is finished.

@GeneratorOptions(inputPath = "META-INF/fabric8", outputPath = "META-INF/fabric8")
@KubernetesApplication(port = @Port(name="srv", containerPort=8181))
public class Main {
   ... 
}

related examples

Debugging and Logging

To control how verbose the dekorate output is going to be, you can set the log level threshold using the io.dekorate.log.level system property.

Allowed values are:

  • OFF
  • ERROR
  • WARN
  • INFO (default)
  • DEBUG

Explicit configuration of annotation processors

By default, Dekorate doesn't require any specific configuration of its annotation processors. However, it is possible to manually define the annotation processors if required.

In the maven pom.xml configure the annotation processor path in the maven compiler plugin settings.

The example below configures the Mapstruct, Lombok and Dekorate annotation processors

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>${maven-compiler-plugin.version}</version>
                <configuration>
                    <annotationProcessorPaths>
                        <path>
                            <groupId>org.mapstruct</groupId>
                            <artifactId>mapstruct-processor</artifactId>
                            <version>${mapstruct.version}</version>
                        </path>
                        <path>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                            <version>${lombok.version}</version>
                        </path>
                        <path>
                            <groupId>io.dekorate</groupId>
                            <artifactId>kubernetes-annotations</artifactId>
                            <version>4.1.3</version>
                        </path>
                    </annotationProcessorPaths>
                </configuration>
            </plugin> 

Using the bom

Dekorate provides a bom, that offers dependency management for dekorate artifacts.

The bom can be imported like:

    <dependencyManagement>
        <dependencies>
            <dependency>
               <groupId>io.dekorate</groupId>
               <artifactId>dekorate-bom</artifactId>
               <version>4.1.3</version>
               <type>pom</type>
               <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

Using with downstream BOMs

In case the dekorate bom is imported by a downstream project (e.g. snowdrop) and it's required to override the bom version, all you need to do is import the dekorate bom with the version of your choice first.

Versions and Branches

The current version of dekorate is 2.0.0.

What's changed in 2.x

Most of the changes that happened in 2.x are internal and related to the maintenance of the project.

New features

  • Configurable logging threshold
  • Git options
  • Inferring image configuration from application config
  • JaxRS support (without requiring Thorntail)
  • Integration testing framework improvements (detailed diagnostics on error)
  • Updated to kubernetes-client and model v5.1.1

Annotation naming

  • EnableDockerBuild -> DockerBuild
  • EnableS2iBuild -> S2iBuild
  • EnableJibBuild -> JibBuild

Dropped modules

The following features were dropped:

  • service catalog
  • halkyon
  • application crd
  • crd generator (functionality moved to the fabric8 kubernetes-client).
  • dependencies uberjar

Dropped dependencies shadowed uber jar

Earlier versions of dekorate used a shaded uberjar containing all dependencies. As of 2.0.0, the dependencies uberjar is no more. Downstream projects using dekorate as a library will need to switch from io.dekorate.deps.xxx to the original packages.

Component naming

Earlier versions of dekorate used names for its core components that were too generic. So, in 2.0.0 the names changed to be more descriptive. Naming changes:

  • Generator -> ConfigGenerator
  • Handler -> ManifestGenerator

Branches

All dekorate development takes place on the master branch. From that branch current releases are created. Bug fixes for older releases are done through their corresponding branches.

  • master (active development, pull requests should point here)
  • 1.0.x
  • 0.15.x

Pull request guidelines

All pull requests should target the main branch and from there things are backported to where it makes sense.

Frequently asked questions

How do I tell dekorate to use a custom image name?

By default the image name used is ${group}/${name}:${version}, as extracted from the project / environment or explicitly configured by the user. If you don't want to tinker with those properties, you can:

Using annotations

Add @DockerBuild(image="foo/bar:baz") to your main class or whatever class you use to configure dekorate. If instead of docker you are using jib or s2i, you can use @JibBuild(image="foo/bar:baz") or @S2iBuild(image="foo/bar:baz") respectively.

Using properties

Add the following to your application.properties

dekorate.docker.image=foo/bar:baz

Using yaml

Add the following to your application.yaml

dekorate:
  docker:
    image: foo/bar:baz

related examples

Want to get involved?

By all means please do! We love contributions! Docs, Bug fixes, New features ... everything is important!

Make sure you take a look at contributor guidelines. Also, it can be useful to have a look at the dekorate design.

        at io.ap4k.deps.jackson.databind.introspect.POJOPropertiesCollector.getPropertyMap(POJOPropertiesCollector.java:248)
        at io.ap4k.deps.jackson.databind.introspect.POJOPropertiesCollector.getProperties(POJOPropertiesCollector.java:155)
        at io.ap4k.deps.jackson.databind.introspect.BasicBeanDescription._properties(BasicBeanDescription.java:142)
        at io.ap4k.deps.jackson.databind.introspect.BasicBeanDescription.findProperties(BasicBeanDescription.java:217)
        at io.ap4k.deps.jackson.databind.deser.BasicDeserializerFactory._findCreatorsFromProperties(BasicDeserializerFactory.java:330)
        at io.ap4k.deps.jackson.databind.deser.BasicDeserializerFactory._constructDefaultValueInstantiator(BasicDeserializerFactory.java:312)
        at io.ap4k.deps.jackson.databind.deser.BasicDeserializerFactory.findValueInstantiator(BasicDeserializerFactory.java:252)
        at io.ap4k.deps.jackson.databind.deser.BeanDeserializerFactory.buildBeanDeserializer(BeanDeserializerFactory.java:221)
        at io.ap4k.deps.jackson.databind.deser.BeanDeserializerFactory.createBeanDeserializer(BeanDeserializerFactory.java:143)
        at io.ap4k.deps.jackson.databind.deser.DeserializerCache._createDeserializer2(DeserializerCache.java:406)
        at io.ap4k.deps.jackson.databind.deser.DeserializerCache._createDeserializer(DeserializerCache.java:352)
        at io.ap4k.deps.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:264)
        at io.ap4k.deps.jackson.databind.deser.DeserializerCache._createAndCacheValueDeserializer(DeserializerCache.java:244)
        at io.ap4k.deps.jackson.databind.deser.DeserializerCache.findValueDeserializer(DeserializerCache.java:142)
        at io.ap4k.deps.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:477)
        at io.ap4k.deps.jackson.databind.ObjectMapper._findRootDeserializer(ObjectMapper.java:3908)
        at io.ap4k.deps.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3803)
        at io.ap4k.deps.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2874)
        at io.ap4k.deps.kubernetes.client.utils.Serialization.unmarshal(Serialization.java:235)
        at io.ap4k.deps.kubernetes.client.utils.Serialization.unmarshal(Serialization.java:190)
        at io.ap4k.deps.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:383)
        at io.ap4k.deps.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:360)
        at io.ap4k.deps.openshift.client.dsl.internal.BuildConfigOperationsImpl.fromInputStream(BuildConfigOperationsImpl.java:274)
        at io.ap4k.deps.openshift.client.dsl.internal.BuildConfigOperationsImpl.fromFile(BuildConfigOperationsImpl.java:231)
        at io.ap4k.deps.openshift.client.dsl.internal.BuildConfigOperationsImpl.fromFile(BuildConfigOperationsImpl.java:68)
        at io.ap4k.openshift.hook.JavaBuildHook.run(JavaBuildHook.java:70)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: io.ap4k.deps.openshift.api.model.BuildSpec
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 37 more

Prevent duplicate resources.

Currently, we rely heavily on visitors to perform any sort of config and model updates.

This is powerful, but if we are not careful it is possible to register a visitor twice, leading to duplicate entries in the generated resources.

We need to prevent duplicates, but we also need a better way to track which visitor is registered from where.
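One possible shape for such a guard (a sketch only; `VisitorRegistry` and the key-based deduplication are hypothetical, not ap4k's actual API) is to key each visitor by a stable identifier and record who registered it first:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: deduplicate visitors by a stable key so that
// registering the same visitor twice has no effect, while remembering
// who registered it first to make debugging easier.
public class VisitorRegistry {

    public interface Visitor {
        String key();            // stable identifier, e.g. class name + target
        void visit(Object resource);
    }

    private final Map<String, Visitor> visitors = new LinkedHashMap<>();
    private final Map<String, String> registeredBy = new LinkedHashMap<>();

    /** Returns true if the visitor was newly registered, false for a duplicate. */
    public boolean register(Visitor v, String source) {
        if (visitors.containsKey(v.key())) {
            return false; // duplicate: ignore, first registration wins
        }
        visitors.put(v.key(), v);
        registeredBy.put(v.key(), source);
        return true;
    }

    public int size() {
        return visitors.size();
    }

    public String registeredBy(String key) {
        return registeredBy.get(key);
    }
}
```

With this shape, a second registration of the same visitor is a no-op and the origin of every registered visitor can be reported.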

It should be possible to customize existing resources

User-provided or external-tool-provided resources should be picked up by ap4k and customized based on the annotations found.

For example, a user should be able to generate resources using fmp (fabric8:resources) and have them customized using ap4k annotations.

Wrong apiVersion added

Description of the issue

When we generate the resource for a Component, the apiVersion included is apiVersion: "v1beta1" and not the apiVersion of the CRD, which here is component.k8s.io/v1alpha1

Generated

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "v1beta1"
  kind: "Component"
  metadata:
    annotations: {}
    labels: {}
    name: ""
  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: false
    image: []
    env:
    - name: "key1"
      value: "val1"
    - name: "key1"
      value: "val1"
    feature: []
    link: []
    service: []

Bug: OpenshiftConfigBuilder produces erroneous string values containing "null"

If one tries to do something like:

    final OpenshiftConfigBuilder openshiftConfigBuilder = OpenshiftConfigAdapter.newBuilder(new HashMap());
    System.out.println(openshiftConfigBuilder.build().getGroup());

the result of the print statement will be the string "null" instead of a proper null or an empty string.
This prevents ApplyProjectInfo from properly applying the project info when the corresponding values haven't been set on OpenshiftApplication.
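A defensive workaround for this behavior (a sketch; the helper class and method names are illustrative, not ap4k's actual API) is to treat the literal string "null" the same as an unset value:

```java
// Sketch of a defensive check for the behavior described above: treat both
// null and the literal string "null" (as produced by String.valueOf(null))
// as "not set". Names here are illustrative, not ap4k's actual API.
public class Strings {

    public static boolean isNullOrEmpty(String value) {
        return value == null || value.isEmpty() || "null".equals(value);
    }

    /** Returns the configured value, or the fallback when the value is effectively unset. */
    public static String orDefault(String value, String fallback) {
        return isNullOrEmpty(value) ? fallback : value;
    }
}
```

A check like this would let ApplyProjectInfo fall back to the project info whenever the annotation value was never set.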

No resource type found for:v1beta1#Component at [Source: N/A; line: -1, column: -1]

Description

When we test this class : https://github.com/ap4k/ap4k/blob/issue-20/examples/component-example/src/test/java/io/ap4k/examples/ComponentSpringBootExampleTest.java
then we got this error

Error

io.ap4k.Ap4kException: No resource type found for:v1beta1#Component
 at [Source: N/A; line: -1, column: -1] (through reference chain: io.ap4k.deps.kubernetes.api.model.KubernetesList["items"]->java.util.ArrayList[0])

	at io.ap4k.Ap4kException.launderThrowable(Ap4kException.java:26)
	at io.ap4k.Ap4kException.launderThrowable(Ap4kException.java:16)
	at io.ap4k.utils.Serialization.unmarshal(Serialization.java:97)
	at io.ap4k.utils.Serialization.unmarshal(Serialization.java:72)
	at io.ap4k.utils.Serialization.unmarshal(Serialization.java:61)
	at io.ap4k.examples.ComponentSpringBootExampleTest.shouldContainComponent(ComponentSpringBootExampleTest.java:14)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:436)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:170)
	at org.junit.jupiter.engine.execution.ThrowableCollector.execute(ThrowableCollector.java:40)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:166)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:113)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:58)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$3(HierarchicalTestExecutor.java:112)
	at org.junit.platform.engine.support.hierarchical.SingleTestExecutor.executeSafely(SingleTestExecutor.java:66)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.executeRecursively(HierarchicalTestExecutor.java:108)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.execute(HierarchicalTestExecutor.java:79)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$2(HierarchicalTestExecutor.java:120)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
	at java.util.Iterator.forEachRemaining(Iterator.java:116)
	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$3(HierarchicalTestExecutor.java:120)
	at org.junit.platform.engine.support.hierarchical.SingleTestExecutor.executeSafely(SingleTestExecutor.java:66)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.executeRecursively(HierarchicalTestExecutor.java:108)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.execute(HierarchicalTestExecutor.java:79)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$2(HierarchicalTestExecutor.java:120)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
	at java.util.Iterator.forEachRemaining(Iterator.java:116)
	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.lambda$executeRecursively$3(HierarchicalTestExecutor.java:120)
	at org.junit.platform.engine.support.hierarchical.SingleTestExecutor.executeSafely(SingleTestExecutor.java:66)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.executeRecursively(HierarchicalTestExecutor.java:108)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor$NodeExecutor.execute(HierarchicalTestExecutor.java:79)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:55)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:43)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:170)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:154)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:90)
	at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:74)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: io.ap4k.deps.jackson.databind.JsonMappingException: No resource type found for:v1beta1#Component
 at [Source: N/A; line: -1, column: -1] (through reference chain: io.ap4k.deps.kubernetes.api.model.KubernetesList["items"]->java.util.ArrayList[0])
	at io.ap4k.deps.jackson.databind.JsonMappingException.from(JsonMappingException.java:255)
	at io.ap4k.deps.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:982)
	at io.ap4k.deps.kubernetes.internal.KubernetesDeserializer.deserialize(KubernetesDeserializer.java:78)
	at io.ap4k.deps.kubernetes.internal.KubernetesDeserializer.deserialize(KubernetesDeserializer.java:32)
	at io.ap4k.deps.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:277)
	at io.ap4k.deps.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:249)
	at io.ap4k.deps.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:26)
	at io.ap4k.deps.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:490)
	at io.ap4k.deps.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:95)
	at io.ap4k.deps.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:260)
	at io.ap4k.deps.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:125)
	at io.ap4k.deps.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3779)
	at io.ap4k.deps.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2050)
	at io.ap4k.deps.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:2547)
	at io.ap4k.deps.kubernetes.internal.KubernetesDeserializer.deserialize(KubernetesDeserializer.java:80)
	at io.ap4k.deps.kubernetes.internal.KubernetesDeserializer.deserialize(KubernetesDeserializer.java:32)
	at io.ap4k.deps.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1578)
	at io.ap4k.deps.jackson.databind.ObjectReader.readValue(ObjectReader.java:1166)
	at io.ap4k.utils.Serialization.unmarshal(Serialization.java:95)

Add visitor to enrich the Component created with the runtime's type

Add a visitor to the ComponentGenerator in order to enrich the Component created with the runtime's type (Spring Boot, Eclipse Vert.x, Thorntail, ...).

The runtime field is used by the Operator to pick up the corresponding s2i image, which is used to configure the DeploymentConfig/ImageStream accordingly.

[feature] Able to enable/disable to create k8s resources

Feature

As a user I want the possibility to enable/disable the generation of k8s yaml/json files when custom resources are used, as in that case it makes more sense to install only the custom resources such as Istio, Component CRD, ...

Link - Env annotation appears twice within yaml generated

Class annotated

@SpringBootApplication
@CompositeApplication(
        name = "fruit-client-sb",
        links = @Link(
                  name = "Env var to be injected within the target component -> fruit-backend",
                  targetcomponentname = "fruit-client-sb",
                  kind = "Env",
                  ref = "",
                  envVars = @Env(
                          name  = "OPENSHIFT_ENDPOINT_BACKEND",
                          value = " http://fruit-backend-sb:8080/api/fruits"
                  )
))

Yaml generated

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "component.k8s.io/v1alpha1"
  kind: "Component"
  metadata:
    name: "fruit-client-sb"
  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: false
    link:
    - kind: "Env"
      name: "Env var to be injected within the target component -> fruit-backend"
      targetComponentName: "fruit-client-sb"
      envs:
      - name: "OPENSHIFT_ENDPOINT_BACKEND"
        value: " http://fruit-backend-sb:8080/api/fruits"
      ref: ""
    - kind: "Env"
      name: "Env var to be injected within the target component -> fruit-backend"
      targetComponentName: "fruit-client-sb"
      envs:
      - name: "OPENSHIFT_ENDPOINT_BACKEND"
        value: " http://fruit-backend-sb:8080/api/fruits"
      ref: ""

The same issue occurs with this config (note the duplicated env and link entries):

@CompositeApplication(
        name = "fruit-backend-sb",
        exposeService = true,
        envVars = @Env(
                name = "SPRING_PROFILES_ACTIVE",
                value = "openshift-catalog"),
        links = @Link(
                name = "Secret to be injected as EnvVar using Service's secret",
                targetcomponentname = "fruit-backend-sb",
                kind = "Secret",
                ref = "postgresql-db"))
@ServiceCatalog(
   instances = @ServiceCatalogInstance(
        name = "postgresql-db",
        serviceClass = "dh-postgresql-apb",
        servicePlan = "dev",
        bindingSecret = "postgresql-db",
        parameters = {
                @Parameter(key = "postgresql_user", value = "luke"),
                @Parameter(key = "postgresql_password", value = "secret"),
                @Parameter(key = "postgresql_database", value = "my_data"),
                @Parameter(key = "postgresql_version", value = "9.6")
        }
   )
)

Result

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "component.k8s.io/v1alpha1"
  kind: "Component"
  metadata:
    name: "fruit-backend-sb"
  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: true
    env:
    - name: "SPRING_PROFILES_ACTIVE"
      value: "openshift-catalog"
    - name: "SPRING_PROFILES_ACTIVE"
      value: "openshift-catalog"
    link:
    - kind: "Secret"
      name: "Secret to be injected as EnvVar using Service's secret"
      targetComponentName: "fruit-backend-sb"
      ref: "postgresql-db"
    - kind: "Secret"
      name: "Secret to be injected as EnvVar using Service's secret"
      targetComponentName: "fruit-backend-sb"
      ref: "postgresql-db"

Regression affecting hooks

The hooks are fed the resource path incorrectly.

This renders the OcBuildHook completely useless at the moment.

Error "java.lang.IllegalStateException: Could not read child element: parent" is generated for a maven module

Issue

The following error is generated when we compile a maven module that has no parent pom:

Error

Caused by: java.lang.IllegalStateException: Could not read child element: parent
    at io.ap4k.project.MavenInfo.lambda$getElement$6 (MavenInfo.java:145)
    at java.util.Optional.orElseThrow (Optional.java:290)
    at io.ap4k.project.MavenInfo.getElement (MavenInfo.java:145)
    at io.ap4k.project.MavenInfo.getParentVersion (MavenInfo.java:109)
    at io.ap4k.project.MavenInfo.getVersion (MavenInfo.java:89)
    at io.ap4k.project.MavenInfo.<init> (MavenInfo.java:45)
    at io.ap4k.project.MavenInfoReader.getInfo (MavenInfoReader.java:45)
    at io.ap4k.project.MavenInfoReader.getInfo (MavenInfoReader.java:29)
    at io.ap4k.project.FileProjectFactory.lambda$getProjectInfo$2 (FileProjectFactory.java:78)
    at java.util.Optional.map (Optional.java:215)
    at io.ap4k.project.FileProjectFactory.getProjectInfo (FileProjectFactory.java:78)
    at io.ap4k.project.FileProjectFactory.createInternal (FileProjectFactory.java:59)
    at io.ap4k.project.FileProjectFactory.create (FileProjectFactory.java:48)
    at io.ap4k.project.AptProjectFactory.createInternal (AptProjectFactory.java:52)
    at io.ap4k.project.AptProjectFactory.create (AptProjectFactory.java:42)
    at io.ap4k.processor.AbstractAnnotationProcessor.init (AbstractAnnotationProcessor.java:54)

Component API Operator doesn't like empty []

Description

The component API operator fails to process a yaml resource containing empty lists ([]):

E1127 18:10:36.560007       1 reflector.go:205] github.com/snowdrop/component-operator/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: 
Failed to list *v1alpha1.Component: v1alpha1.ComponentList.Items: []v1alpha1.Component: 
v1alpha1.Component.Spec: v1alpha1.ComponentSpec.Link: readObjectStart: expect { or n, but found
 [, error found in #10 byte of ...|],"link":[],"runtime|..., bigger context ...|oseService":false,"feature":
[],"image":[],"link":[],"runtime":"spring-boot","service":[]}}],"kind":"|...

Resource used

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "component.k8s.io/v1alpha1"
  kind: "Component"
  metadata:
    labels: {}
    name: "hello-app"
  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: false
    image: []
    feature: []
    link: []
    service: []

Should be

I manually removed the [] from the list fields (image, feature, link, service):

  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: false
    image:
    feature:
    link:
    service:

WDYT @iocanel
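Since ap4k serializes the model with (shaded) Jackson, one possible fix is to configure the serializer to omit empty collections (Jackson's `JsonInclude.Include.NON_EMPTY`). Conceptually the filter does something like the following stdlib-only sketch (`EmptyFieldFilter` is illustrative, not part of ap4k):

```java
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual sketch of what an "omit empty" serialization filter does:
// drop spec entries whose value is an empty collection or an empty map.
// In practice, Jackson can do this via JsonInclude.Include.NON_EMPTY.
public class EmptyFieldFilter {

    public static Map<String, Object> withoutEmptyFields(Map<String, Object> spec) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : spec.entrySet()) {
            Object v = e.getValue();
            if (v instanceof Collection && ((Collection<?>) v).isEmpty()) continue;
            if (v instanceof Map && ((Map<?, ?>) v).isEmpty()) continue;
            out.put(e.getKey(), v);
        }
        return out;
    }
}
```

Applied to the spec above, this would drop image, feature, link, and service entirely instead of emitting [].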

[Feature] Able to generate a resource item instead of a list

Feature's request

As a user, I would like the possibility to get a generated resource created as a single item and not as a list.

What is currently populated

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "v1beta1"
  kind: "Component"
  metadata:
    annotations: {}
    labels: {}
    name: ""
  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: false
    image: []
    env:
    - name: "key1"
      value: "val1"
    - name: "key1"
      value: "val1"
    feature: []
    link: []
    service: []

What I want

apiVersion: component.k8s.io/v1alpha1
kind: "Component"
metadata:
  annotations: {}
  labels: {}
  name: ""
spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: false
    image: []
    env:
    - name: "key1"
      value: "val1"
    feature: []
    link: []
    service: []

build failing for the master branch, missing kubernetes config files

Hello Team,
When I run mvn install on the master branch, the build fails: some files from the kubernetes/config path are missing. Maybe under Windows the generated sources are not fully generated.
ap4k/core/src/main/java/io/ap4k/kubernetes/config/
The missing files are:
import io.ap4k.kubernetes.config.Annotation;
import io.ap4k.kubernetes.config.AwsElasticBlockStoreVolume;
import io.ap4k.kubernetes.config.AzureDiskVolume;
import io.ap4k.kubernetes.config.AzureFileVolume;
import io.ap4k.kubernetes.config.ConfigMapVolume;
import io.ap4k.kubernetes.config.Env;
import io.ap4k.kubernetes.config.KubernetesConfig;
import io.ap4k.kubernetes.config.Label;
import io.ap4k.kubernetes.config.Mount;
import io.ap4k.kubernetes.config.PersistentVolumeClaimVolume;
import io.ap4k.kubernetes.config.Port;
import io.ap4k.kubernetes.config.SecretVolume;

Ensure that container decorators are only applied to the target container.

Currently, ap4k doesn't always check which is the target deployment or the target container. So there might be cases where an option is eagerly applied to more deployments/containers than desired (this can happen when editing external resources, or when using sidecars or init containers).

We need to make sure that each decorator is applied only where it should be.
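A sketch of the intended guard (the type names are illustrative stand-ins for ap4k's actual decorator and model types): the decorator mutates a container only when its name matches the configured target:

```java
// Sketch: a decorator that checks the target container's name before
// applying any change, so sidecars and init containers are left untouched.
// TargetedDecorator and Container are hypothetical, not ap4k's real types.
public class TargetedDecorator {

    public static class Container {
        public final String name;
        public String image;

        public Container(String name, String image) {
            this.name = name;
            this.image = image;
        }
    }

    private final String targetContainer;
    private final String newImage;

    public TargetedDecorator(String targetContainer, String newImage) {
        this.targetContainer = targetContainer;
        this.newImage = newImage;
    }

    /** Only mutates the container when it is the intended target. */
    public void apply(Container container) {
        if (targetContainer.equals(container.name)) {
            container.image = newImage;
        }
    }
}
```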

Spring Boot framework integration modules should consult Spring's PropertySource(s)

Currently, some things are hard-coded for Spring (e.g. port number, prometheus path, etc.).

We could possibly create a CompositePropertySource that would combine PropertiesPropertySource with YamlPropertySource in order to read configuration.

The tricky part here is how to read those files. The annotation processor facilities, do have everything required, but we'll need to keep those abstracted if possible. A similar case is here: https://github.com/ap4k/ap4k/blob/d7c904d18277545cd80df4496b6c6bf91d5cdd5f/annotations/openshift-annotations/src/main/java/io/ap4k/openshift/generator/S2iBuildGenerator.java#L77 (getOutputDirectory() is abstracted in order to avoid leaking apt specific stuff into the generator.)

Note: I am not 100% sold on reusing spring boot own classes. Currently, we have no dependency on spring boot, and reusing these classes would mean that we will introduce one (which I am not really fond of). This needs some thought.
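A minimal sketch of such a composite lookup (stdlib only, under the assumptions above: reading the files through the annotation-processing environment is abstracted away, and a YAML source would be merged the same way once parsed into a flat map):

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

// Sketch of a composite configuration lookup across several sources, where
// the first source that defines a key wins (mirroring an ordered chain of
// Spring PropertySources). Class and method names are illustrative.
public class CompositeConfig {

    private final Map<String, String> merged = new LinkedHashMap<>();

    /** Later sources do NOT override earlier ones: first registration wins. */
    public void addSource(Map<String, String> source) {
        for (Map.Entry<String, String> e : source.entrySet()) {
            merged.putIfAbsent(e.getKey(), e.getValue());
        }
    }

    public String get(String key, String fallback) {
        return merged.getOrDefault(key, fallback);
    }

    /** Parses application.properties-style content into a flat map. */
    public static Map<String, String> fromProperties(String content) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(content));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for an in-memory reader
        }
        Map<String, String> map = new LinkedHashMap<>();
        for (String name : props.stringPropertyNames()) {
            map.put(name, props.getProperty(name));
        }
        return map;
    }
}
```

The ap4k defaults (port, prometheus path, etc.) would simply be the last, lowest-priority source in the chain.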

exposeService value is always equal to false

Description of the issue

The exposeService spec field value is always equal to false, even if we set it to true within the annotation:

@CompositeApplication(name = "hello-spring-boot", exposeService = true)

Generated File

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "v1beta1"
  kind: "Component"
  metadata:
    annotations: {}
    labels: {}
    name: "hello-spring-boot"
  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: false
    image: []
    env: []
    feature: []
    link: []
    service: []

[feature] Move the @Link annotation to a separate maven module

Feature request

Move the @Link annotation to a separate maven module. Why: as a link represents a relation which exists virtually between a component and an endpoint or service, it should be usable as a standalone annotation, like the ServiceCatalog one, to describe such metadata to be injected within the target Component (= DeploymentConfig).

Is CompositeApplication the most appropriate wording to be used ?

Is @CompositeApplication the most appropriate wording to be used? To be honest, I'm not sure.
We can keep it for the moment, but the information it includes is next used by the controller/operator deployed on openshift to generate the following resources, which are mandatory for every microservice deployment: DeploymentConfig, Service, Route (optional).

A better name might be @ComponentAnnotation or, even better, @MicroserviceAnnotation.

WDYT @iocanel ?

Questions: KubeApplication annotation is mandatory, ...

KubernetesApplication mandatory

I must add @KubernetesApplication in order to generate the resource files for the @CompositeApplication annotation. Can we avoid that @iocanel?

List of k8s

Why do you generate a list of k8s items for the Component, @iocanel, and not a single Component object?

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "v1beta1"
  kind: "Component"
  metadata:
    annotations: {}
    labels: {}
    name: "fruit-client-sb"
  spec:
    deploymentMode: "innerloop"
    exposeService: false
    image: []
    env: []
    feature: []
    link: []
    service: []

Do not output empty fields

Can we avoid including empty fields within the generated resource files?
E.g.: image: [], labels: {}

Gradle spring boot error : Invalid relative name: META-INF\ap4k\kubernetes.json

Hello, I've made a Gradle project with Spring Boot and I've added the gradle dependencies plus the kubernetes Annotations.

I receive this error:
Invalid relative name: META-INF\ap4k\kubernetes.json

I've also tried with @GeneratorOptions(outputPath = "myfile"), but I get the same error.
The problem seems to be in AbstractAnnotationProcessor:
protected void write(String group, KubernetesList list) {
....
FileObject json = processingEnv.getFiler().createResource(StandardLocation.CLASS_OUTPUT, PACKAGE, project.getResourceOutputPath() + File.separatorChar + String.format(FILENAME, group, JSON));
It seems that the relativeName argument of createResource() does not accept platform separators; maybe try removing the separator and keeping a simple filename.
Thanks.
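The diagnosis above matches the Filer contract: relative names passed to createResource must use '/' regardless of platform, so joining path segments with File.separatorChar breaks on Windows (where it is '\'). A sketch of a fix (ResourcePaths is a hypothetical helper, not ap4k's actual code):

```java
// The Filer API expects relative names to use '/' regardless of platform,
// so building the path with File.separatorChar breaks on Windows. Sketch of
// the fix: always join with '/' and normalize any stray backslashes before
// passing the name to createResource().
public class ResourcePaths {

    public static String relativeName(String outputPath, String filename) {
        String joined = outputPath + "/" + filename;
        // Defensively normalize any platform separators that slipped in.
        return joined.replace('\\', '/');
    }
}
```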

My app is simple:
@KubernetesApplication
@SpringBootApplication
public class KubernetesDemo2Application {
........

and the Gradle dependencies:
...
dependencies {
implementation('org.springframework.boot:spring-boot-starter-web')
testImplementation('org.springframework.boot:spring-boot-starter-test')
annotationProcessor("io.ap4k:kubernetes-annotations:${ap4kVersion}")
compileOnly("io.ap4k:kubernetes-annotations:${ap4kVersion}")

annotationProcessor("io.ap4k:ap4k-core:${ap4kVersion}")
compileOnly("io.ap4k:ap4k-core:${ap4kVersion}")

// compile("io.ap4k:ap4k-spring-boot:${ap4kVersion}")
}

...
For the build I've used Gradle 4.1 and 5.1. Same issue...

Thank you very much!

Distinguish between config and model visitors

At the moment we are using two kinds of visitors:

  • config
  • model

We should use a different interface, package, etc. for each kind, and the API should protect us from using the wrong kind in the wrong place.
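A sketch of this type-level separation (interface and method names are illustrative): with distinct interfaces, passing a model visitor where a config visitor is expected becomes a compile error rather than a runtime surprise:

```java
import java.util.List;
import java.util.Map;

// Sketch: separate the two visitor kinds at the type level so the compiler
// rejects mixing them up. Names are illustrative, not ap4k's actual API.
public class Visitors {

    public interface ConfigVisitor {
        void visitConfig(Map<String, Object> config);
    }

    public interface ModelVisitor {
        void visitModel(Object resource);
    }

    // Accepts only config visitors; passing a ModelVisitor here fails to compile.
    public static void applyConfig(Map<String, Object> config, List<ConfigVisitor> visitors) {
        for (ConfigVisitor v : visitors) {
            v.visitConfig(config);
        }
    }
}
```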

Pipeline pattern to generate the resources

Feature proposition

As the different annotations will generate K8s or custom K8s resources, it could be interesting to also support a pipeline/steps pattern able to install different resources or perform different actions.
Such a pattern is used, for example, by the Syndesis and Camel K projects, and by the Component operator.

Benefits :

  • Decouple the process to generate the resources from the annotation
  • Better document what is produced (E.g k8s annotation -> generate Deployment, Service, route, ...)
  • Reuse pipelines between annotations

Step interface

https://github.com/snowdrop/component-operator/blob/master/pkg/pipeline/step.go#L26
// Action --
type Step interface {

	// a user friendly name for the action
	Name() string

	// returns true if the action can handle the integration
	CanHandle(component *v1alpha1.Component) bool

	// executes the handling function
	Handle(component *v1alpha1.Component, client *client.Client, namespace string) error
}

Pipeline

https://github.com/snowdrop/component-operator/blob/master/pkg/controller/component/handler.go#L74-L83
		innerLoopSteps: []pipeline.Step{
			innerloop.NewInstallStep(),
		},
		serviceCatalogSteps: []pipeline.Step{
			servicecatalog.NewServiceInstanceStep(),
		},
		linkSteps: []pipeline.Step{
			link.NewLinkStep(),
		},
	}
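For reference, the same Step/pipeline shape in Java might look like the following (a sketch only; the type names are illustrative, not an ap4k API):

```java
import java.util.List;

// Java analog of the Go Step interface above: a pipeline runs, in order,
// each step that declares it can handle the component.
public class Pipeline {

    public interface Step {
        String name();                      // a user-friendly name for the step
        boolean canHandle(Object component); // true if the step applies
        void handle(Object component);       // performs the step's action
    }

    private final List<Step> steps;

    public Pipeline(List<Step> steps) {
        this.steps = steps;
    }

    /** Runs every applicable step in order; returns how many were executed. */
    public int run(Object component) {
        int executed = 0;
        for (Step step : steps) {
            if (step.canHandle(component)) {
                step.handle(component);
                executed++;
            }
        }
        return executed;
    }
}
```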

WDYT @iocanel ?

Component CRD spec supports ENV vars but @CompositeApplication does not populate them

The Component CRD spec supports ENV vars, but @CompositeApplication does not populate them. Even if we specify such info

@CompositeApplication(envVars = @Env(name = "key1", value = "val1"))

the env list in the generated yaml file is empty:

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "v1beta1"
  kind: "Component"
  metadata:
    annotations: {}
    labels: {}
    name: "component-example"
  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: false
    image: []
    env: []
    feature: []
    link: []
    service: []

Allow prometheus annotations to also configure spring boot management endpoints.

Currently, in order to use prometheus with spring boot, the user has to manually set:

management.endpoints.enabled-by-default=true
management.endpoints.web.exposure.include=health,info,metrics,prometheus

IMHO, adding @EnableServiceMonitor on top of the main class is good enough to express the intention to expose the prometheus endpoint. Thus, the manual configuration should be made optional.

So, what ways do we have in order to pass this configuration to the application?

  1. Environment variables.
  2. Use a configmap with override properties and mount it to the application pod.
  3. Have the apt processor modify the actual application.properties (or yaml).

While option 3 seems to be the simplest, I want to keep the project as decoupled as possible from apt.
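Option 1 could be sketched as a decorator that injects the equivalent environment variables, relying on Spring Boot's relaxed binding (MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE maps to management.endpoints.web.exposure.include). The ContainerSpec type and the decorator name below are simplified stand-ins, not the actual ap4k model:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for a container spec carrying env vars.
class ContainerSpec {
  final Map<String, String> env = new LinkedHashMap<>();
}

// Hypothetical decorator translating the Spring Boot management
// properties into env vars via relaxed binding.
class ApplyPrometheusConfig {
  void visit(ContainerSpec container) {
    container.env.put("MANAGEMENT_ENDPOINTS_ENABLED_BY_DEFAULT", "true");
    container.env.put("MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE",
        "health,info,metrics,prometheus");
  }
}
```

The nice property of the env-var route is that it overrides application.properties at runtime without the apt processor ever touching the application's own files.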

Separate annotations from processors

Gradle, and possibly other tools, treats artifacts containing annotation processors specially (e.g. excludes them from the compile path). For gradle to reason about this, it is required that the annotation and the processor live in different artifacts and that the processor is explicitly declared as a processor.

The best way to achieve this without introducing too much noise is to use the shade plugin to split the artifacts and add them to the reactor with different classifiers.

Introduce and use a Logger abstraction

From @cmoulliard: "As the process is relatively complex and needs different steps, as described in the sequence diagram, it is very important to be able to log the following info at debug level at the end of the process: annotations discovered -> config created -> configurators applied -> k8s model / custom resource definition model / openshift model created -> decorators applied, to let users understand how the rules have been applied."
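A minimal sketch of such a Logger abstraction (interface, level names, and the stdout implementation are all illustrative assumptions, not an existing ap4k API):

```java
// Hypothetical logger abstraction for tracing the processing steps.
interface Logger {
  void debug(String message);
  void info(String message);
  void error(String message);
}

// A trivial implementation writing to standard output, so the steps
// (annotations discovered, config created, configurators applied,
// decorators applied, ...) can be traced when debug is enabled.
class StdoutLogger implements Logger {

  private final boolean debugEnabled;

  StdoutLogger(boolean debugEnabled) {
    this.debugEnabled = debugEnabled;
  }

  public void debug(String message) {
    if (debugEnabled) {
      System.out.println("[DEBUG] " + message);
    }
  }

  public void info(String message) {
    System.out.println("[INFO] " + message);
  }

  public void error(String message) {
    System.out.println("[ERROR] " + message);
  }
}
```

An abstraction like this also keeps the core decoupled from any particular logging framework, which matches the project's general aim of minimal dependencies.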

Runtime's field is empty if we generate env vars

Steps to reproduce

Run mvn compile on the component-example project

package io.ap4k.examples.component;

import io.ap4k.annotation.Env;
import io.ap4k.annotation.KubernetesApplication;
import io.ap4k.component.annotation.CompositeApplication;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@KubernetesApplication
@CompositeApplication(envVars = @Env(name = "key1", value = "val1"))
@SpringBootApplication
public class Main {

  public static void main(String[] args) {
    SpringApplication.run(Main.class, args);
  }

}

File generated

As you can see, the generated file contains the env var (duplicated, even), but the runtime field is missing:

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "v1beta1"
  kind: "Component"
  metadata:
    annotations: {}
    labels: {}
    name: "component-example"
  spec:
    deploymentMode: "innerloop"
    exposeService: false
    image: []
    env:
    - name: "key1"
      value: "val1"
    - name: "key1"
      value: "val1"
    feature: []
    link: []
    service: []

Project can't be compiled as ap4k deps has not been released for 1.0.2

The project can't be compiled because the ap4k dependencies have not been released for 1.0.2:

[ERROR] Failed to execute goal on project ap4k-core: Could not resolve dependencies for project io.ap4k:ap4k-core:jar:1.0-SNAPSHOT: Failure to find io.ap4k:ap4k-dependencies:jar:1.0.2 in http://repo1.maven.org/maven2/ was cached in the local repository, resolution will not be reattempted until the update interval of Maven Central has elapsed or updates are forced -> [Help 1]
[ERROR] 

Explicit resource groups do not work as expected.

Currently, it's possible to define groups that only accept resources that have been explicitly specified.

For example: we don't want to add resources to component.yml unless explicitly specified.

At the moment this doesn't work as expected, and if the generators are not registered in the
ideal order the functionality breaks.

Finalize annotation names

There have been talks about the annotation names and how descriptive they are.

For example:

  • KubernetesApplication
  • OpenshiftApplication

don't quite describe how they are used.

  • SourceToImage
  • DockerBuild

Neither describe nor provide the necessary context.

This issue is about discussing naming styles and alternatives:

  1. Spring Boot style
  • KubernetesApplication
  • OpenshiftApplication

It might not describe the intention of generating resources, but then again I don't feel that it has to. sundrio, immutables, lombok and other projects that generate code or perform bytecode manipulation don't use names that reflect that. Instead they use names that try to describe the annotated class in some way.

In the same spirit we could follow the Spring Boot theme and also go with:

  • EnableS2iBuild
  • EnableDockerBuild

For the service catalog I can't find an equivalent. But it could go like:

  • WithServiceCatalog(instances = @ServiceInstance(...))

  2. A different take on Spring Boot style
  • KubernetesResourceGenerationEnabled
  • OpenshiftResourceGenerationEnabled

which is TOO long IMHO.

  3. Descriptive names
  • GenerateKubernetesResources
  • GenerateOpenshiftResources

  4. Annotation combo:
  • @generate(kubernetes = @KubernetesResources(...))
  • @generate(openshift = @OpenshiftResources(...))

or a variation that doesn't have coupling with openshift, servicecatalog etc:

@generate({@KubernetesResources(...), @ServiceCatalogResource()})

It's not as simple as I would want.

Can't compile Component annotation project

Description

If we change a field of the ComponentSpec class

package io.ap4k.component.model;

import io.ap4k.deps.jackson.annotation.JsonInclude;
import io.ap4k.deps.jackson.annotation.JsonPropertyOrder;
import io.ap4k.deps.kubernetes.api.model.Doneable;
import io.sundr.builder.annotations.Buildable;
import io.sundr.builder.annotations.Inline;

import javax.annotation.Generated;

/**
 *
 *
 */
@JsonInclude(JsonInclude.Include.NON_NULL)
@Generated("org.jsonschema2pojo")
@JsonPropertyOrder({
    "name",
    "type",
    "packagingMode",
    "deploymentMode",
    "runtime",
    "version",
    "exposeService",
    "cpu",
    "strorage",
    "image",
    "env",
    "feature",
    "link"
})
@Buildable(editableEnabled = false, validationEnabled = false, generateBuilderPackage = false, builderPackage = "io.ap4k.deps.kubernetes.api.builder", inline = @Inline(type = Doneable.class, prefix = "Doneable", value = "done"))
public class ComponentSpec {

  private String name;
  private String packagingMode;
  private String type;
  private DeploymentType deployment; // deploymentMode -> deployment

Next, when we re-compile with mvn clean compile, we get errors such as:

[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.619 s
[INFO] Finished at: 2018-11-28T10:29:50+01:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile (default-compile) on project component-annotations: Compilation failure: Compilation failure: 
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddServiceInstanceToComponent.java:[19,30] error: cannot find symbol
[ERROR]  package io.ap4k.component.model
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddServiceInstanceToComponent.java:[27,61] error: cannot find symbol
[ERROR]  class ComponentSpecBuilder
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddServiceInstanceToComponent.java:[36,20] error: cannot find symbol
[ERROR]  class AddServiceInstanceToComponent
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddServiceInstanceToComponent.java:[49,29] error: cannot find symbol
[ERROR]  class AddServiceInstanceToComponent
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddRuntimeToComponent.java:[19,30] error: cannot find symbol
[ERROR]  package io.ap4k.component.model
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddRuntimeToComponent.java:[24,53] error: cannot find symbol
[ERROR]  class ComponentSpecBuilder
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddRuntimeToComponent.java:[33,20] error: cannot find symbol
[ERROR]  class AddRuntimeToComponent
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/processor/CompositeAnnotationProcessor.java:[23,30] error: package io.ap4k.component.adapt does not exist
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/processor/CompositeAnnotationProcessor.java:[25,31] error: package io.ap4k.component.config does not exist
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/processor/CompositeAnnotationProcessor.java:[26,31] error: package io.ap4k.component.config does not exist
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/processor/CompositeAnnotationProcessor.java:[39,78] error: cannot find symbol
[ERROR]  class CompositeConfig
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/processor/CompositeAnnotationProcessor.java:[57,31] error: cannot find symbol
[ERROR]  class CompositeAnnotationProcessor
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddEnvToComponent.java:[19,30] error: cannot find symbol
[ERROR]  package io.ap4k.component.model
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddEnvToComponent.java:[25,49] error: cannot find symbol
[ERROR]  class ComponentSpecBuilder
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/decorator/AddEnvToComponent.java:[34,20] error: cannot find symbol
[ERROR]  class AddEnvToComponent
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/ComponentHandler.java:[21,31] error: package io.ap4k.component.config does not exist
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/ComponentHandler.java:[22,31] error: package io.ap4k.component.config does not exist
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/ComponentHandler.java:[26,30] error: cannot find symbol
[ERROR]  package io.ap4k.component.model
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/ComponentHandler.java:[33,49] error: cannot find symbol
[ERROR]  class CompositeConfig
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/ComponentHandler.java:[49,21] error: cannot find symbol
[ERROR]  class ComponentHandler
[ERROR] /Users/dabou/Code/snowdrop/ap4k/ap4k-annotations/annotations/component-annotations/src/main/java/io/ap4k/component/ComponentHandler.java:[75,36] error: cannot find symbol
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[feature] Adopt plural words for link, service, env, parameter array

Description

Currently, arrays of Link, Service, and Env are generated using the singular form as the yaml/json key:

    env:
    - name: "SPRING_PROFILES_ACTIVE"
      value: "openshift-catalog"
    link:
    - kind: "Secret"
      name: "Secret to be injected as EnvVar using Service's secret"
      targetComponentName: "fruit-backend-sb"
      ref: "postgresql-db"
    service:
    - name: "postgresql-db"
      class: "dh-postgresql-apb"
      plan: "dev"
      secretName: "postgresql-db"
      parameters:

Proposal

I propose that we use the plural form to name the links, services, and envs arrays, as is the case for most of the k8s arrays (and as also supported by the component operator).

See the Snowdrop security K8s example: https://github.com/snowdrop/spring-boot-http-secured-booster/blob/master/.openshiftio/service.sso.yaml

The generated component should then be:

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "component.k8s.io/v1alpha1"
  kind: "Component"
  metadata:
    name: "fruit-backend-sb"
  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: true
    envs:
    - name: "SPRING_PROFILES_ACTIVE"
      value: "openshift-catalog"
    links:
    - kind: "Secret"
      name: "Secret to be injected as EnvVar using Service's secret"
      targetComponentName: "fruit-backend-sb"
      ref: "postgresql-db"
    services:
    - name: "postgresql-db"
      class: "dh-postgresql-apb"
      plan: "dev"
      secretName: "postgresql-db"
      parameters:
      - name: "postgresql_user"
        value: "luke"
      - name: "postgresql_password"
        value: "secret"
      - name: "postgresql_database"
        value: "my_data"
      - name: "postgresql_version"
        value: "9.6"
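If the spec model keeps its singular Java field names, the rename could also be done purely at serialization time (e.g. with Jackson's @JsonProperty("envs") on the field). The pluralization rule itself is trivial; a minimal sketch, where the helper class and method names are illustrative assumptions only:

```java
// Illustrative-only helper that pluralizes the serialized name of an
// array/list field (env -> envs, link -> links, service -> services).
class Plurals {
  static String pluralize(String fieldName) {
    // naive English rule; sufficient for env, link, service, parameter
    if (fieldName.endsWith("s")) {
      return fieldName + "es";
    }
    return fieldName + "s";
  }
}
```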

Does this make sense, @iocanel @gytis @metacosm @geoand?

No parameter added to the yaml file generated

GAVs used

        <!-- To generate CRD -->
        <dependency>
            <groupId>io.ap4k</groupId>
            <artifactId>ap4k-core</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>io.ap4k</groupId>
            <artifactId>component-annotations</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>io.ap4k</groupId>
            <artifactId>servicecatalog-annotations</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>io.ap4k</groupId>
            <artifactId>ap4k-spring-boot</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>

Description

The following java class, annotated with @ServiceCatalog, @ServiceCatalogInstance and @Parameter, does not include the parameters in the generated yml file:

package com.example.demo;

import io.ap4k.component.annotation.CompositeApplication;
import io.ap4k.kubernetes.annotation.Env;
import io.ap4k.servicecatalog.annotation.Parameter;
import io.ap4k.servicecatalog.annotation.ServiceCatalog;
import io.ap4k.servicecatalog.annotation.ServiceCatalogInstance;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@CompositeApplication(
        exposeService = true, 
        envVars = @Env(name = "SPRING_PROFILES_ACTIVE", value = "openshift-catalog"))
@ServiceCatalog(
   instances = @ServiceCatalogInstance(
        name = "postgresql-db",
        serviceClass = "dh-postgresql-apb",
        servicePlan = "dev",
        bindingSecret = "postgresql-db",
        parameters = {
                @Parameter(key = "postgresql_user", value = "luke"),
                @Parameter(key = "postgresql_password", value = "secret"),
                @Parameter(key = "postgresql_database", value = "my_data"),
                @Parameter(key = "postgresql_version", value = "9.6")
        }
   )
)

Yaml file generated

---
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "component.k8s.io/v1alpha1"
  kind: "Component"
  metadata:
    annotations: {}
    labels: {}
    name: ""
  spec:
    deploymentMode: "innerloop"
    runtime: "spring-boot"
    exposeService: true
    image: []
    env:
    - name: "SPRING_PROFILES_ACTIVE"
      value: "openshift-catalog"
    - name: "SPRING_PROFILES_ACTIVE"
      value: "openshift-catalog"
    feature: []
    link: []
    service:
    - name: "postgresql-db"
      class: "dh-postgresql-apb"
      plan: "dev"
      secretName: "postgresql-db"
      parameters: []
