NGINX Load Balancer Operator

About

This repository contains a Kubernetes Operator for use with Kubernetes or Red Hat OpenShift Container Platform (OCP). The operator is built using the Operator SDK from Red Hat and can manage NGINX Plus load balancers running outside the container platform.

The Operator is designed to work with NGINX Controller to provide an application-centric deployment model. NGINX Controller provides a declarative, RBAC-enabled API that lets you link roles and environments to NGINX instances ("Gateways").

Multiple environments and RBAC users can deploy to the same Gateways. The controller builds a best-practice NGINX configuration that incorporates all services and pushes it out to NGINX whenever the configuration changes. This allows multiple "projects" or "namespaces" to deploy configuration to the same or different NGINX instances.

There is a complete example walk-through in the examples folder.

How does it work?

The operator configures an external NGINX instance (via controller) to Load Balance onto a Kubernetes Service.

If the service is configured with the NodePort ServiceType, then the external Load Balancer will use the Kubernetes/OCP node IPs with the assigned port. If the ServiceType is not NodePort then the configuration will try to reach the pods directly.
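
For illustration, a minimal NodePort Service of the kind the operator can target might look like the sketch below (names and ports are illustrative, not taken from this repository):

apiVersion: v1
kind: Service
metadata:
  name: my-service          # referenced from the Component resource
  namespace: my-app
spec:
  type: NodePort            # the external load balancer uses <node-ip>:<assigned nodePort>
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080      # with a non-NodePort type, the operator targets pod IPs instead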

The Operator includes several Custom Resources which it watches for configuration. The CRs largely tie into the Controller's application-centric view. After the operator is deployed, it starts watching for these CRs.

There are currently five Custom Resources watched by the operator, and it also watches Deployment.apps.

When a watched resource is modified, a reconciliation process runs using the playbook specified in the watches.yaml file.
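
For orientation, an entry in watches.yaml for an Ansible-based operator maps a group/version/kind to a playbook. A minimal sketch follows; the playbook path is an assumption, so check the repository's own watches.yaml for the actual mapping:

- version: v1alpha1
  group: lb.nginx.com
  kind: Component
  playbook: /opt/ansible/playbook.yml   # assumed path inside the operator image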

Table 1. CRDs

CRD                          Description
lb.nginx.com_controllers     Controller configuration: the FQDN, auth details, and the environment to use.
lb.nginx.com_applications    The Application as represented in NGINX Controller.
lb.nginx.com_gateways        A service gateway: the NGINX instances onto which components will be deployed.
lb.nginx.com_components      The Component, which is deployed into an Application on a Gateway.
lb.nginx.com_certificates    A TLS certificate for use in a Gateway or Component.

The Controller resource needs to be configured first because it holds the connection information for NGINX Controller: the user_email, fqdn, environment, and a reference to a Secret resource holding the user_password.

All of the other resources require a Controller resource. The Application and Certificate resources have no other dependencies. The Gateway may require a Certificate if the service is using TLS, and the Component links all other resources together to form the "Application Endpoint".

The Kubernetes Service which makes up the workgroup (pool or upstream, if you prefer) should be named in the Component resource under the spec.ingressService setting. The Service can be in a different namespace to the CRs if the Operator is running with ClusterScope; if that is the case, set spec.ingressNamespace to match.

The ClusterScope is useful if you want the NGINX instance to load balance onto a shared Router or Ingress Controller. See Operator Scopes below.
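
As a sketch (using the field names shown in the Component examples later in this README), a Component pointing at a shared Service in another namespace might carry:

spec:
  ingressType: Service
  ingressName: shared-ingress-controller   # hypothetical Service name
  ingressNamespace: nginx-ingress          # namespace where that Service lives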

Desired State

The controller uses a declarative API, and the configuration is supplied in a desiredState object.

You can supply a desiredState in the Gateway, Component, and Certificate resources, and that state will be sent to the controller, with the following caveats:

  • Gateway: The desiredState is used as is to create a new gateway object. The gateway will be named namespace.name in the controller.

  • Certificate: The desiredState can be taken from the resource directly or merged with data from a Kubernetes Secret. Any fields set in the optional Secret take precedence over those given in desiredState.

  • Component: The desiredState can enable any controller functionality, but the ingress gateway will be overwritten by a link to the Gateway set on the resource, and the backend workload group URIs list will be populated from the Service named in spec.ingress.

Deployment Scaling

If you want the Operator to update the workload groups in your Component when your Deployment is scaled, add a label called deployment-hash to the Component metadata and set it to the SHA-224 hash of the Deployment name and namespace joined with a dot (deploymentName.deploymentNamespace). For example:

$ echo -n "router-default.openshift-ingress" | sha224sum
dbb9f04a45371603a573de5160fa7fd32fa765194be63a7f70b066db  -

and include that hash in your component resource.

apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
  namespace: my-app
  labels:
    deployment-hash: dbb9f04a45371603a573de5160fa7fd32fa765194be63a7f70b066db

Usage with Ingress Controller

This operator can be used in conjunction with the NGINX Ingress Controller to provide external load balancing onto an NGINX Ingress layer running inside your container platform. This is recommended in large-scale deployments because it keeps the bulk of the Application Delivery Controller (ADC) functionality within the container platform.

NGINX Ingress Controller provides several Custom Resources which enable many more advanced features beyond those included with standard Ingress objects or OCP Routes.

While it has become common for Load Balancers to extend Ingress using annotations, that method is error prone and fragments the Ingress spec. Using NGINX Custom Resources instead of Ingress has the following advantages:

  • NGINX Custom Resources are fully validated by the Kubernetes API

  • VirtualServer, VirtualServerRoute, TransportServer, etc. are all RBAC enabled

  • Routing can be based on anything within the request (header, cookie, method, etc)

  • Blue/Green traffic splitting and Canary testing of applications

  • Circuit Breaker patterns

  • Redirects and Error Pages

See the Documentation for more information.

When you deploy this operator with NGINX KIC, you will need to map the Component to the KIC using a Service or Route. See the Component CR Example below.

Usage without Ingress Controller

The Service can either point to an NGINX Plus Ingress Controller (to provide additional ADC features), or to any other service or route. See the Component CR Example below.

Setting Up and Building

You will need the Operator-SDK and a recent version of Docker installed on your build machine.

If you are playing around on a CodeReady Containers setup, follow these notes instead.

Build the Operator

Build and push the operator image to your repository:

export IMAGE=myrepo.example.com:5000/nginx/nginx-lb-operator:latest
operator-sdk build $IMAGE
docker push $IMAGE

Install the CRDs

kubectl create -f deploy/crds/lb.nginx.com_controllers_crd.yaml
kubectl create -f deploy/crds/lb.nginx.com_applications_crd.yaml
kubectl create -f deploy/crds/lb.nginx.com_certificates_crd.yaml
kubectl create -f deploy/crds/lb.nginx.com_components_crd.yaml
kubectl create -f deploy/crds/lb.nginx.com_gateways_crd.yaml
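
To confirm the CRDs registered, you can list them; this is just a quick sanity check, not part of the original instructions:

kubectl get crds | grep lb.nginx.com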

You’re ready to deploy the operator container, but you can also test it locally using the SDK. See Running the Operator Locally if you want to test/debug.
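
A local run looks roughly like the sketch below; the exact subcommand and flags depend on your Operator SDK version (older 0.x releases use operator-sdk up local, later 0.x releases use operator-sdk run --local), so check operator-sdk --help for your version:

# run the operator on your workstation against the current kubeconfig context
operator-sdk up local --namespace=my-app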

Operator Scopes

The Operator can be deployed with a namespace scope, in which case it only watches resources in a single namespace - the one in which it is deployed. All Custom Resources (CRs), Services, and Secrets must be deployed into that same namespace.

Alternatively you may deploy the Operator with a cluster scope, in which case it can be configured to watch ALL or multiple namespaces.

Deploy the Operator

Choose one of the options below: cluster or namespace scope.

Deploy with Namespace Scope

A namespace-scoped operator can only access resources in its own namespace, so each project needs to run its own operator.

Create the role, service_account, and role_bindings with the manifests:

kubectl create -f deploy/namespace/service_account.yaml
kubectl create -f deploy/namespace/role.yaml
kubectl create -f deploy/namespace/role_binding.yaml

Next, we deploy the operator.

Replace the REPLACE_IMAGE placeholder in the Operator manifest with the actual location and name of the image you built above, and deploy.

export IMAGE=myrepo.example.com:5000/nginx/nginx-lb-operator:latest
sed -e "s|REPLACE_IMAGE|${IMAGE}|g" deploy/namespace/operator.yaml > deploy/operator-for-reals.yaml
kubectl create -f deploy/operator-for-reals.yaml

That should be it. Your operator is running.
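
You can verify the rollout with a standard status check; the deployment name below is an assumption, so adjust it to whatever deploy/namespace/operator.yaml defines:

kubectl rollout status deployment/nginx-lb-operator
kubectl get pods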

Deploy with Cluster Scope

A cluster-scoped operator will watch all namespaces by default. If you only want to watch a subset, such as an ingress namespace and a few other projects, modify the WATCH_NAMESPACE parameter in the deployment manifest to limit them.

For example, let's assume we have two projects under our control. Each project has its own namespace, and they create Ingress resources consumed by a shared Ingress Controller running in a third namespace. We might set WATCH_NAMESPACE as follows:

  env:
    - name: WATCH_NAMESPACE
      value: "nginx-ingress,project-101,project-102"

The above would allow the operator to create LB configuration for the two namespaces project-101 and project-102, while the Services referenced by the project Components would be the NGINX KIC running in the nginx-ingress namespace. If the NGINX KIC deployment is scaled, the Components in those projects would be notified and reconfigured.

To deploy a cluster-scoped operator, first create the ClusterRole, ClusterRoleBinding, and service_account. You will need to modify the service_account and role_binding namespaces to match the namespace in which the operator will run. Make your modifications and then apply them:

kubectl create -f deploy/cluster/service_account.yaml
kubectl create -f deploy/cluster/role.yaml
kubectl create -f deploy/cluster/role_binding.yaml
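
The modification mentioned above typically means pointing the binding's ServiceAccount subject at the operator's namespace; a sketch with assumed names (check deploy/cluster/role_binding.yaml and service_account.yaml for the real ones):

subjects:
  - kind: ServiceAccount
    name: nginx-lb-operator        # assumed service account name
    namespace: nginx-lb-operator   # namespace where the operator will run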

Next, we deploy the operator.

Set any namespace restrictions you need in the WATCH_NAMESPACE environment value.

Then replace the REPLACE_IMAGE placeholder in the Operator manifest with the actual location and name of the image you built above, and deploy.

export IMAGE=myrepo.example.com:5000/nginx/nginx-lb-operator:latest
sed -e "s|REPLACE_IMAGE|${IMAGE}|g" deploy/cluster/operator.yaml > deploy/operator-for-reals.yaml
kubectl create -f deploy/operator-for-reals.yaml

That should be it. Your operator is running.

The Operator Custom Resources

Below is an example for each of the Custom Resources which configure the Application.

Controller CRD

The Controller CRD takes a user_email, FQDN, and environment. It also needs a password stored in a Kubernetes Secret, such as:

kind: Secret
apiVersion: v1
metadata:
  name: dev-controller
data:
  user_password: bm90cmVhbGx5bXlwYXNzd29yZAo=
type: Opaque

The Operator will use the user_password in the Secret, together with the user_email in the Controller resource, to log in and retrieve an auth token. The auth token is cached for 30 minutes, after which the next reconciliation performs a new login.
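
An equivalent Secret can also be created imperatively; a sketch assuming the same name and key (kubectl base64-encodes the value for you):

kubectl create secret generic dev-controller --from-literal=user_password='notreallymypassword'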

A Controller resource using the above secret would look like this:

apiVersion: lb.nginx.com/v1alpha1
kind: Controller
metadata:
  name: dev-controller
spec:
  user_email: "[email protected]"
  secret: "dev-controller"
  fqdn: "ctrl.nginx.lab"
  environment: "ocp-dev-1"
  validate_certs: true

The user account and the environment should already exist on the controller. All Applications, Gateways, Components, and Certificates will reference a Controller resource by name and be deployed into the environment specified.

Application CRD

The Application is a simple object, but it groups the components and helps with analytics visualisation.

apiVersion: lb.nginx.com/v1alpha1
kind: Application
metadata:
  name: my-application
spec:
  controller: "dev-controller"
  displayName: "My Kubernetes Application"
  description: "An application deployed in Kubernetes"

Gateway CRD

The Gateway object takes a desiredState which is sent to the controller as is, so you can enable any features exposed in the Controller API. Check your controller API documentation for more information.

apiVersion: lb.nginx.com/v1alpha1
kind: Gateway
metadata:
  name: my-gateway
spec:
  controller: "dev-controller"
  displayName: "My OCP Gateway"
  description: "A gateway deployed by Kubernetes"
  desiredState:
    ingress:
      placement:
        instancerefs:
          - ref: /infrastructure/locations/unspecified/instances/nginx1
      uris:
        'http://www.uk.nginx.lab': {}
        'http://www.foo.com': {}

Certificate CRD

The Certificate resource can be specified either by providing the details directly in the desiredState, or by referencing a Kubernetes Secret in secret.

apiVersion: lb.nginx.com/v1alpha1
kind: Certificate
metadata:
  name: my-certificate
spec:
  controller: "dev-controller"
  displayName: "My Kubernetes Certificate"
  description: "A certificated deployed in Kubernetes"
  #secret: secret-containing-the-cert
  desiredState:
    type: PEM
    caCerts: []
    privateKey: |-
      -----BEGIN PRIVATE KEY-----
      MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDQYBXFTj1ZdJGH
      7IfomkeJfedaIueD01L6X6jj8TvS2xwTRHL4LIkZP882qHs2VfEpgbVi6a96lvWP
      TRUNCATED  TRUNCATED  TRUNCATED  TRUNCATED  TRUNCATED  TRUNCATED
      6bug7eceyafsFTTEghcNloHWnYBARA3878X5RQkLVUNocrZLkBG2Dn2d3aiEpWww
      CZ+gbhraYKAflzD6wTJL29D5dLGF5k/88RTN60Gzoaxq7CkvlLwXCZjQSvjEGq5i
      whJYgXwWvqy5VXxLc5amLXk=
      -----END PRIVATE KEY-----
    publicCert: |-
      -----BEGIN CERTIFICATE-----
      MIIDpzCCAo+gAwIBAgIUb+NqxHIP0Z15aqy5FY8+bb1vq6IwDQYJKoZIhvcNAQEL
      1Xnimah+mQMOuWiJU9W9omet5Y9OemQLHmeSVFbfQXBkTNKGO+2iKtWJNO8+zzT7
      TRUNCATED  TRUNCATED  TRUNCATED  TRUNCATED  TRUNCATED  TRUNCATED
      5WZTPiggaDbDAwjK2QP2N933lHxR5JDmkHHH6GHKLWXgYgxY0zx8R2+eFyvxJvGB
      yaw7SnX8i5mjkgwwGhgTMBnSdf3F9eLcMHPgceMOuTyynpe9SSE9Bck3LykgvQDW
      InWB8mhlndb/p8ZYVLx9y2LDq1h3iymbnoHM
      -----END CERTIFICATE-----

When referencing the certificate as a Kubernetes Secret, the Secret should be of Opaque or tls type, and the certificate details should be stored in tls.key and tls.crt.

kind: Secret
apiVersion: v1
metadata:
  name: my-cert
data:
  tls.crt: >-
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURwekNDQW8rZ0F3SUJBZ0lVYitOcXhISVAw
  tls.key: >-
    LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZB
  type: UEVN
type: Opaque

and the Certificate resource would look like this:

apiVersion: lb.nginx.com/v1alpha1
kind: Certificate
metadata:
  name: my-certificate
spec:
  controller: "dev-controller"
  displayName: "My Kubernetes Certificate"
  description: "A certificated deployed in Kubernetes"
  secret: my-cert
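
For TLS-type Secrets, kubectl can also build the Secret directly from PEM files; a sketch with placeholder file names (note this does not set the optional type data key shown above):

kubectl create secret tls my-cert --cert=server.crt --key=server.key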

Component CRD

The Component object also takes a desiredState, but the operator expects to configure both the ingress→gatewayRefs (using the gateway provided) and the backend→workloadGroups→group (using the pods or NodePorts found via the ingress* settings). The workload URIs are built using workload.scheme and workload.path.

The ingressType can be Service, Route, or None. See the relevant sections below for deploying against a Service, a Route, or with a manually configured workload group (i.e. None).

Deploying Component with a Service

When deploying with a Service, you must set the ingressType to Service and set the ingressName to match the Service. If the Service is in a different namespace, you can set ingressNamespace to match; the Operator must be running with a ClusterRole in that case. For example:

apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
  namespace: my-app
spec:
  controller: dev-controller
  application: my-application
  ingressType: Service
  ingressName: my-nginx-ingress-controller
  ingressNamespace: nginx-ingress

If the Ingress Service is discovered to be using NodePort, then the workload groups will be set to the Kubernetes nodes with the dynamically assigned port. Otherwise the workloads will be set to the pod IPs and the workload.targetPort.

Deploying Component with an OpenShift Route

When deploying with an OpenShift Route, you must set the ingressType to Route and the ingressName to the name of the Route resource; again, you can set ingressNamespace if the Route is not in the same namespace.

The Operator will look up the Router for the provided Route and attempt to locate its pods, so the Operator will need read access to the namespace in which the Router is running. This can be set with ingressRouterNamespace and defaults to openshift-ingress. It is likely that the Operator will need a ClusterRole account. For example:

apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
  namespace: my-app
spec:
  controller: dev-controller
  application: my-application
  ingressType: Route
  ingressName: my-route
  ingressRouterNamespace: openshift-ingress

If you are using CodeReady Containers, workload.crcOverride can be set to the IP of your CRC VM.

Deploying Component with a manual workload

You can also set the ingressType to None, in which case the node list will not be generated and the workload groups in the desiredSpec will need to be set manually. For example:

apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
  namespace: my-app
spec:
  controller: dev-controller
  application: my-application
  ingressType: None
  ...
  desiredSpec:
    backend:
      workloadGroups:
        group:
          uris:
            'https://node1.cluster:443/': {}
            'https://node2.cluster:443/': {}
          loadBalancingMethod:
            type: ROUND_ROBIN

General Example

apiVersion: lb.nginx.com/v1alpha1
kind: Component
metadata:
  name: my-component
spec:
  controller: "dev-controller"
  application: "my-application"
  ingressType: Service
  ingressName: "my-nginx-ingress-controller"
  ingressNamespace: "my-nginx-ingress-namespace"
  gateway: "my-gateway"
  workload:
    scheme: "http"
    path: "/"
    targetPort: 443
    crcOverride: 192.168.130.11
  displayName: "My Component"
  description: "A component deployed by Kubernetes"
  desiredState:
    backend:
      monitoring:
        response:
          status:
            match: true
            range:
              endCode: 302
              startCode: 200
        uri: /
      workloadGroups:
        # group uris will be populated from "ingress" pods or nodeports
        group:
          loadBalancingMethod:
            type: ROUND_ROBIN
    # ingress gatewayRefs will be populated from "gateway"
    ingress:
      uris:
        /: {}

The above would result in a desiredState similar to:

  "desiredState": {
    "ingress": {
      "gatewayRefs": [
        {
          "ref": "/services/environments/ocp-dev-1/gateways/<project>.my-gateway"
        }
      ],
      "uris": {
        "/": {}
      }
    },
    "backend": {
      "workloadGroups": {
        "group": {
          "loadBalancingMethod": {
            "type": "ROUND_ROBIN"
          },
          "uris": {
            "http://<k8s-node-ip>:<nodeport>/": { }
          }
        }
      },
      "monitoring": {
        "uri": "/",
        "response": {
          "status": {
            "range": {
              "endCode": 302,
              "startCode": 200
            },
            "match": true
          }
        }
      }
    }
  }
