kubernetes / ingress-nginx

Ingress-NGINX Controller for Kubernetes

Home Page: https://kubernetes.github.io/ingress-nginx/

License: Apache License 2.0

Makefile 0.77% Go 85.76% Lua 7.74% Shell 3.97% HTML 0.01% Python 0.31% Dockerfile 0.76% Mustache 0.34% Smarty 0.14% CMake 0.08% JavaScript 0.12%
ingress-controller kubernetes nginx

ingress-nginx's Introduction

Ingress NGINX Controller


Overview

ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.

Learn more about Ingress on the Kubernetes documentation site.

Get started

See the Getting Started document.

Troubleshooting

If you encounter issues, review the troubleshooting docs, file an issue, or talk to us on the #ingress-nginx channel on the Kubernetes Slack server.

Changelog

See the list of releases for all changes. For detailed changes in each release, check the changelog-$version.md file for that release. For detailed changes to the ingress-nginx Helm chart, check the CHANGELOG-$current-version.md file in the chart's changelog folder.

Supported Versions table

A version listed as supported means the project has completed E2E tests and they pass for that combination of versions. Ingress-NGINX releases may work on older Kubernetes versions, but the project does not guarantee it.

| Supported | Ingress-NGINX version | k8s supported versions | Alpine version | Nginx version | Helm chart version |
|-----------|-----------------------|------------------------|----------------|---------------|--------------------|
| 🔄 | v1.10.1 | 1.29, 1.28, 1.27, 1.26 | 3.19.1 | 1.25.3 | 4.10.1* |
| 🔄 | v1.10.0 | 1.29, 1.28, 1.27, 1.26 | 3.19.1 | 1.25.3 | 4.10.0* |
| 🔄 | v1.9.6 | 1.29, 1.28, 1.27, 1.26, 1.25 | 3.19.0 | 1.21.6 | 4.9.1* |
| 🔄 | v1.9.5 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.9.0* |
| 🔄 | v1.9.4 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.8.3 |
| 🔄 | v1.9.3 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.8.* |
| 🔄 | v1.9.1 | 1.28, 1.27, 1.26, 1.25 | 3.18.4 | 1.21.6 | 4.8.* |
| 🔄 | v1.9.0 | 1.28, 1.27, 1.26, 1.25 | 3.18.2 | 1.21.6 | 4.8.* |
|    | v1.8.4 | 1.27, 1.26, 1.25, 1.24 | 3.18.2 | 1.21.6 | 4.7.* |
|    | v1.7.1 | 1.27, 1.26, 1.25, 1.24 | 3.17.2 | 1.21.6 | 4.6.* |
|    | v1.6.4 | 1.26, 1.25, 1.24, 1.23 | 3.17.0 | 1.21.6 | 4.5.* |
|    | v1.5.1 | 1.25, 1.24, 1.23 | 3.16.2 | 1.21.6 | 4.4.* |
|    | v1.4.0 | 1.25, 1.24, 1.23, 1.22 | 3.16.2 | 1.19.10† | 4.3.0 |
|    | v1.3.1 | 1.24, 1.23, 1.22, 1.21, 1.20 | 3.16.2 | 1.19.10† | 4.2.5 |

See this article if you want to upgrade to the stable Ingress API.

Get Involved

Thanks for taking the time to join our community and start contributing!

  • This project adheres to the Kubernetes Community Code of Conduct. By participating in this project, you agree to abide by its terms.

  • Contributing: Contributions of all kinds are welcome!

    • Read CONTRIBUTING.md for information about setting up your environment, the workflow that we expect, and instructions on the developer certificate of origin that we require.
    • Join our Kubernetes Slack channel for developer discussion: #ingress-nginx-dev.
    • Submit GitHub issues for any feature enhancements, bugs, or documentation problems.
      • Please make sure to read the Issue Reporting Checklist before opening an issue. Issues not conforming to the guidelines may be closed immediately.
    • Join our ingress-nginx-dev mailing list.
  • Support:

License

Apache License 2.0

ingress-nginx's People

Contributors

agile6v, akx, aledbf, antoineco, aramase, asifdxtreme, bprashanth, caiyixiang, chentao11596, danielqsj, dependabot[bot], elvinefendi, esigo, gacko, gianrubio, jcmoraisjr, k8s-ci-robot, kundan2707, longwuyuan, maxlaverse, mbssaiakhil, nicksardo, oilbeater, rikatz, saumyabhushan, strongjz, szekeresb, tao12345666333, tonglil, wayt


ingress-nginx's Issues

Figure out examples

We need to consolidate our examples. There are a few common scenarios everyone wants to achieve at L7, and we should surface how to do that either natively with a single controller, or by stacking controllers (eg: gce -> nginx).

I'm using this issue to build a list of common cases we need to provide full examples/documentation for

url routing
  background on url regex matching across controllers
  hostname
  regex
  redirect

tls
  background on how ingress exposes tls, updating certs etc
  letsencrypt/kube-lego
  termination
  passthrough
  reencrypt
  chained certs

health checks
  background on kubeproxy/readiness probes/cloud health checks
  default health check, updating health checks, timeouts

TODO sticky sessions 
TODO source ip
TODO websockets
TODO http2/grpc

Ideally each example would have a section for on-prem, gce, and aws. If a feature isn't possible on gce, for example, the section should say so, making it clear to the user that they need to use nginx.

Nginx Ingress Controller - Secure default backend and/or support for wildcards?

From @alanhartless on April 18, 2016 23:56

Currently it's not possible to support wildcard subdomains for the Nginx ingress controller due to Kubernetes Ingress resources only allowing a legitimate domain for host. In the scenario that a wildcard domain SSL cert needs to be used for a lot of subdomains (think SaaS with custom subdomain names), it's a bit overkill to list every subdomain.

I thought of two solutions.

One - support, via a flag, securing the default backend server with the default SSL certificate, where the backend server can be something more than just a simple 404. This of course limits you to a single cert and wouldn't be able to support multiple wildcard domains.

Two - use a flag with a "wildcard" placeholder that's replaced with * when generating the hosts. The placeholder can look a little funny since the host is limited to alphanumeric characters (like 0wildcard0, wildcard-, or just wildcard if it's definitely not otherwise in use).

I've implemented both for my own use but wanted to see if either (or both) would be something you're interested in before I submit a PR (they feel a little hackish :-) ). Thoughts?

It would be GREAT if paths would support Nginx location regex but I know that's an Ingress limitation.

Copied from original issue: kubernetes-retired/contrib#799
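
For context, wildcard hosts did eventually land in the stable networking.k8s.io/v1 Ingress API. A minimal sketch of the requested behavior in that API (the names and TLS secret below are illustrative, not from this issue):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: saas-wildcard
spec:
  tls:
  - hosts:
    - "*.example.com"                # wildcard cert covering first-level subdomains
    secretName: wildcard-example-com-tls
  rules:
  - host: "*.example.com"            # matches exactly one subdomain label
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tenant-router
            port:
              number: 80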

AWS ingress

Currently people just stick a Service of Type=LB in front of the nginx ingress on AWS. Apparently this causes some issues, and an ALB works better (sky-uk/feed#111). Maybe we should actually write an AWS ingress controller that spins up an ALB so people can tier it over the nginx ingress, keying off ingress.class=aws/nginx?

ref kubernetes-retired/contrib#346
@jsravn @justinsb

Nginx: support hiding version number (server_tokens off;) ?

We have a customer whose hired auditors asked us not to publish our nginx version in the Server: header of our responses. I think this is a little silly, but we want to comply.

I don't however want to have to switch to using a custom nginx.conf template just for this.

It's a simple boolean and would easily match the existing implementation pattern for configMap / annotations. If I write a PR for this, will it be accepted, or is it beyond some pale of feature creep?

Are there plans to allow injection of simple custom config, or is the feeling that you either take the feature set you are given or you use a custom template?
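
For what it's worth, this did land as a boolean key in the controller ConfigMap. A minimal sketch, assuming a controller started with --configmap pointing at this object (name/namespace here are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
  namespace: kube-system
data:
  server-tokens: "false"    # renders "server_tokens off;" so the Server: header omits the version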

Port contrib/ingress

We need to work to get this repo on par with the others in kubernetes/:

  1. update README in contrib/ingress to point to this repo
  2. lift/shift of controllers and vendor code from kubernetes/ingress
  3. presubmit checks (#4)
  4. figure out docs and examples (#2)
  5. moving issues/prs (#3)
  6. e2e testing (both travis integration with hyperkube and on actual bots, #5)
  7. Bazel build?

Anyone with cycles, jump on this.

Nginx Ingress Controller [0.8.3] ConfigMap - Client certificate verification not supported

I've tried to add client certificate verification to the nginx server config. It is not supported:

https://github.com/kubernetes/contrib/blob/494064ba4e685a3d9432411bcd72dc5d49641316/ingress/controllers/nginx/nginx/config/config.go

nginx-loadbalancer-conf.yaml

apiVersion: v1
data:
  ssl_verify_client: "on"
  ssl_client_certificate: "/ssl/ca.crt"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf

rc.yaml

..
- --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
...

after call of:

 k exec nginx-ingress-controller-p840p -- cat /etc/nginx/nginx.conf

I can't see the changes in /etc/nginx/nginx.conf

kube: 1.4.5
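
For reference, later ingress-nginx releases expose client certificate verification through per-Ingress annotations rather than the global ConfigMap. A sketch under that assumption (secret, host, and service names are hypothetical; the referenced secret must hold the CA under ca.crt):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-example
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"   # CA used to verify client certs
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
spec:
  rules:
  - host: secure.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-svc
            port:
              number: 80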

Load Balancer (or external) SSL termination and HTTP -> HTTPS redirect support.

We are using AWS ELB's and have SSL termination occur on the ELB's.

The ELB is requisitioned with the following service resource:

apiVersion: v1
kind: Service
metadata:
  name: ingress-external
  namespace: services
  annotations:
  # https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws.go
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm.......
spec:
  type: LoadBalancer
  selector:
    app: ingress.controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80

So HTTP and HTTPS (terminated as HTTP) go to port 80.

An ingress resource needs to be TLS-enabled for the redirects to work with an ingress controller -- but a valid TLS definition requires setting a certificate, and we don't want the ingress controller doing SSL termination.

I can't find a way of performing an HTTP -> HTTPS redirect without providing a custom template. If I go for a custom template, it will be a global configuration, and ideally we'd like to have some resources respond to plain HTTP requests.

So I propose an annotation for the ingress resource that expresses the intent that a particular ingress resource is expecting traffic from an external SSL terminating source. Perhaps something along the lines of .../proxy-ssl-termination-redirect.

When this annotation is present I expect the following template to be used in the location blocks:

if ($http_x_forwarded_proto = 'http') {
    return 301 https://$host$request_uri;
}

Related Issues:

General purpose redirect proposal
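
For reference, later ingress-nginx versions handle this combination without a custom template: the ConfigMap can tell the controller to trust the ELB's X-Forwarded-* headers, and a per-Ingress annotation forces the redirect even when no tls: section is defined. A sketch under those assumptions (names are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
  namespace: kube-system
data:
  use-forwarded-headers: "true"     # trust X-Forwarded-Proto set by the ELB
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"   # redirect HTTP even without TLS termination here
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80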

wrong approach in rewrite+add-base-url ?

Moved this here from contrib.

Maybe I'm getting this wrong, but the base-url should be constructed from the matched path instead of the rewrite target.

Problem description:

Ingress rule:

path = "/p"
rewrite-target = "/"
add-base-url = true

How it works:

base-url=protocol://host/target => http(s)://some.host/
  • css/test.css becomes http(s)://some.host/css/test.css (ingress rule is not matched anymore)

How it should work:

base-url=protocol://host/target => http(s)://some.host/p
  • css/test.css becomes http(s)://some.host/p/css/test.css (matches the ingress rule and URL is rewritten to /css/test.css for the backend to be loaded correctly)

If a path "/p" is matched by the ingress rule, the backend path is rewritten to /, but resources should still be requested under /p/ from the browser's point of view; otherwise no path in the ingress rule matches them.
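
As a postscript, newer ingress-nginx releases reworked rewrite-target to use regex capture groups, which makes the relationship between the matched path and the rewritten path explicit. A sketch of the /p example in that style (host and service names are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # backend sees only the part after /p
spec:
  rules:
  - host: some.host
    http:
      paths:
      - path: /p(/|$)(.*)          # the browser keeps requesting under /p/...
        pathType: ImplementationSpecific
        backend:
          service:
            name: app
            port:
              number: 80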

GLBC ingress: only handle annotated ingress

Hi,
We are running an HAProxy-based ingress in our clusters, but for a few services we would like to run the GLBC ingress. I did not see any way to tell ingress controllers which Ingress resources they can handle. Could Ingress controllers handle only ingresses that have a specific annotation applied to them (similar to how schedulers do it):

"ingress.alpha.kubernetes.io/controller": glbc

Here is a way I could see this being implemented: the GLBC controller adds a new flag, --ingress-controller.

If the --ingress-controller flag value is empty, then GLBC should handle any Ingress that has no "ingress.alpha.kubernetes.io/controller" annotation, or has it set to the empty string.

If the --ingress-controller flag is not empty, then it should only handle Ingresses whose "ingress.alpha.kubernetes.io/controller" annotation is set to the flag's value.

Thanks.
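
For reference, the mechanism that eventually shipped is the kubernetes.io/ingress.class annotation (later formalized as the IngressClass resource and spec.ingressClassName). A sketch of the annotation form from that era (names are illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: glbc-only
  annotations:
    kubernetes.io/ingress.class: "gce"   # handled by GLBC; "nginx" would target the nginx controller
spec:
  backend:
    serviceName: some-service
    servicePort: 80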

Nginx vts as prometheus metrics

Currently the nginx metrics endpoint only supports vts metrics, which provide HTML and JSON output. I found a project that exports vts metrics to Prometheus, and now I have all the metrics, simply by using a sidecar container pointing at vts. Here are my awesome metrics.

[screenshot, 2016-12-21 15:21: nginx vts metrics exported to Prometheus]

I'm interested in porting this to the nginx ingress, but first I'm here to ask whether this is on the roadmap or whether anyone is already working on it.

unnecessary backend reload

My backends are constantly reloading. I realised that only the order of the IPs changes in the rendered templates.

Logs

I1215 12:33:49.273475       7 nginx.go:209] NGINX configuration diff
I1215 12:33:49.273665       7 nginx.go:210] --- /tmp/a580613529	2016-12-15 12:33:49.000000000 +0000
+++ /tmp/b005786148	2016-12-15 12:33:49.000000000 +0000
@@ -179,10 +179,10 @@
         ssl_certificate_key                     /ingress-controller/ssl/*.pem;
         more_set_headers                        "Strict-Transport-Security: max-age=15724800; preload";
         location / {
+           allow 213.15.3.1/32;
            allow 52.18.6.14/32;
            allow 94.28.18.253/32;
            allow 52.21.16.13/32;
-           allow 213.15.3.14/32;
            deny all;
             # enforce ssl on server side
             if ($scheme = http) {
I1215 12:33:49.295139       7 queue.go:54] queuing item &TypeMeta{Kind:,APIVersion:,}
I1215 12:33:49.295489       7 event.go:217] Event(api.ObjectReference{Kind:"Ingress", Namespace:"*", Name:"*", UID:"c69e3e6b-b888-11e6-8f0a-028561e17cf1", APIVersion:"extensions", ResourceVersion:"4223051", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress */*
I1215 12:33:49.304545       7 controller.go:375] ingress backend successfully reloaded...
I1215 12:33:49.664964       7 nginx.go:209] NGINX configuration diff
I1215 12:33:49.665150       7 nginx.go:210] --- /tmp/a028406237	2016-12-15 12:33:49.000000000 +0000
+++ /tmp/b445085592	2016-12-15 12:33:49.000000000 +0000
@@ -179,10 +179,10 @@
         ssl_certificate_key                     /ingress-controller/ssl/*.pem;
         more_set_headers                        "Strict-Transport-Security: max-age=15724800; preload";
         location / {
-           allow 213.15.3.1/32;
-           allow 52.18.6.14/32;
            allow 94.28.18.253/32;
            allow 52.21.16.13/32;
+           allow 213.15.3.14/32;
+           allow 52.18.6.14/32;
            deny all;
             # enforce ssl on server side
             if ($scheme = http) {
I1215 12:33:49.696796       7 controller.go:375] ingress backend successfully reloaded...

e2e test leaves garbage around

e2e-down.sh explicitly removes some containers, and at least one more exits after some time, but plenty of others apparently stay around indefinitely.

I'm not sure whether this is a known deficiency of hyperkube or we're just using it improperly.

porridge@beczulka:~/Desktop/coding/go/src/k8s.io/ingress$ docker ps 
CONTAINER ID        IMAGE                                                    COMMAND                  CREATED             STATUS              PORTS               NAMES
ac39a0a5730e        gcr.io/google_containers/kube-dnsmasq-amd64:1.4          "/usr/sbin/dnsmasq --"   9 minutes ago       Up 9 minutes                            k8s_dnsmasq.bee611d9_kube-dns-v20-6caok_kube-system_8ff29f9e-c82a-11e6-9097-24770389c384_107a8b1a
4b26865f46da        gcr.io/google_containers/exechealthz-amd64:1.2           "/exechealthz '--cmd="   13 minutes ago      Up 13 minutes                           k8s_healthz.3613f95_kube-dns-v20-6caok_kube-system_8ff29f9e-c82a-11e6-9097-24770389c384_5ee00e74
224be6a6f7bc        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 13 minutes ago      Up 13 minutes                           k8s_POD.a6b39ba7_kube-dns-v20-6caok_kube-system_8ff29f9e-c82a-11e6-9097-24770389c384_3e36dd7a
e85eab303994        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 13 minutes ago      Up 13 minutes                           k8s_POD.2225036b_kubernetes-dashboard-v1.4.0-96im4_kube-system_8ff1e7e5-c82a-11e6-9097-24770389c384_ff54a5b2
dd4bc152a110        gcr.io/google_containers/hyperkube-amd64:v1.4.5          "/hyperkube apiserver"   6 days ago          Up 6 days                               k8s_apiserver.213c742_k8s-master-0.0.0.0_kube-system_501bec47043160feec61f2839ec6a4c5_29321e20
cfbb7e04222e        gcr.io/google_containers/hyperkube-amd64:v1.4.5          "/setup-files.sh IP:1"   6 days ago          Up 6 days                               k8s_setup.2cde3c3c_k8s-master-0.0.0.0_kube-system_501bec47043160feec61f2839ec6a4c5_baf0cb65
219ad2ef7f1c        gcr.io/google_containers/hyperkube-amd64:v1.4.5          "/copy-addons.sh mult"   6 days ago          Up 6 days                               k8s_kube-addon-manager-data.270e200b_kube-addon-manager-0.0.0.0_kube-system_c3c035106a9df5bd5f54b3e87143ddbf_636a1308
a2fac3257ece        gcr.io/google_containers/kube-addon-manager-amd64:v5.1   "/opt/kube-addons.sh"    6 days ago          Up 6 days                               k8s_kube-addon-manager.ed858faf_kube-addon-manager-0.0.0.0_kube-system_c3c035106a9df5bd5f54b3e87143ddbf_2ccd5348
189bb0cf3d59        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 6 days ago          Up 6 days                               k8s_POD.d8dbe16c_k8s-master-0.0.0.0_kube-system_501bec47043160feec61f2839ec6a4c5_45247fc2
d971a4ae6c4a        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 6 days ago          Up 6 days                               k8s_POD.d8dbe16c_kube-addon-manager-0.0.0.0_kube-system_c3c035106a9df5bd5f54b3e87143ddbf_286fe6e0
porridge@beczulka:~/Desktop/coding/go/src/k8s.io/ingress$ 

hostname+backend should result in catchall path rule

I'm a k8s beginner.

I've experienced some confusion today and yesterday that has been detailed at length in kubernetes/minikube#921 and kubernetes/website#1963

I focused my issues there, because I believe it is more a failure of parity and understanding than a proper bug per se. But to raise the question for the sake of possible reimplementation and spec clarification: what should this yaml do?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: example.mycompanywithcorrectdns.com
  backend:
    serviceName: exampleservice
    servicePort: 8080

Is it an error to include an incomplete rule, sans http:? (It validates and loads fine.)

It seems, in my experience with an nginx controller (created from https://github.com/kubernetes/kops/blob/fbe62f09820b471c43dd874ae6ab4b0b1cfc2d09/addons/ingress-nginx/v1.4.0.yaml), to map example.mycompanywithcorrectdns.com to the global default backend specified by the --default-backend-service flag to the nginx-controller, and to do nothing whatsoever with exampleservice.

I can test it later, but from reading the specs and from my experience so far, I don't know what would happen if I removed --default-backend-service from the rc template spec. I suspect that without it, exampleservice would become the new default service for my cluster as a whole.

But what happens then if I create a second Ingress resource?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example2
spec:
  backend:
    serviceName: exampleservice2
    servicePort: 8080

I can and probably should experiment, but I cannot currently intuit the answer. I also don't know whether the answer changes depending on if I specify anything under tls.hosts (which is the closest thing to a top-level hostname association for the Ingress resource).

My working theory (I don't have a GCE account I can spend money on for this right now) is that this does something useful (like allocating a new persistent IP or L7 LB) with GLBC on GCE. But on AWS with the Nginx controller, I'm thoroughly confused.

If I am correct that a GCE Ingress always has a network-level analogue but an Nginx or other controller does not, I'm not sure a top-level backend should be a legal field of a "software" Ingress resource (or indeed that they should be the same resource at all).

Leverage user supplied secrets directly from cloudprovider

Currently, the only way to get TLS certs into a cloudprovider ingress is by creating a Kubernetes secret, and Kubernetes secrets are not encrypted.

We should allow users to directly create a secret in the cloudprovider and pass it through an annotation on the Ingress. We already do this for kubernetes.io/ingress.global-static-ip-name

See kubernetes-retired/contrib#2095 for more context.

We also need a way to do this with nginx, where one might want to store the cert in eg: GCS and pass down a URL from which the controller downloads it. I think all cloudprovider backends will also understand a URL as long as it points to an existing ssl cert resource in the cloud.
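
The existing precedent mentioned above looks roughly like this (a sketch; the reserved address name is illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static-ip-example
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-reserved-ip"   # static IP pre-created in the cloud project
spec:
  backend:
    serviceName: web
    servicePort: 80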

Bare-metal Ingress HA using keepalived

Bare-metal ingress is not HA. Our team implemented a keepalived-vip sidecar container for Ingress, which can be deployed alongside the ingress-controller container to provide HA. For example, first create an Ingress ReplicationController with the keepalived-vip sidecar:

apiVersion: v1
kind: ReplicationController
metadata:
  namespace: "kube-system"
  name: "ingress-test1"
  labels:
    run: "ingress"
spec:
  replicas: 2
  selector:
    run: "ingress"
  template:
   metadata:
    labels:
      run: "ingress"
   spec:
    hostNetwork: true
    containers:
      - image: "ingress-keepalived-vip:v0.0.1"
        imagePullPolicy: "Always"
        name: "keepalived-vip"
        resources:
          limits:
            cpu: 50m
            memory: 100Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          privileged: true
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: SERVICE_NAME
            value: "ingress"
      - image: "ingress-controller:v0.0.1"
        imagePullPolicy: "Always"
        name: "ingress-controller"
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 200m
            memory: 200Mi

Then create a Service named "ingress" pointing to the Ingress ReplicationController, and assign a VIP to the service using an annotation.

apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: kube-system
  annotations:
    "ingress.alpha.k8s.io/ingress-vip": "192.168.10.1"
spec:
  type: ClusterIP
  selector:
    run: "ingress"
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80

The keepalived-vip sidecar container will watch the "ingress" Service and update keepalived's config, thus making the Ingress Pods HA. Users should access in-cluster services via the Ingress VIP.

The keepalived approach was first described in kubernetes/kubernetes#34013

see implementation here: https://github.com/mqliang/contrib/tree/ingress-admin/ingress-admin/ingress-keepalived-vip

Same host in multiple Ingress resources

What is the correct behavior for Ingress controllers when the same host is defined in multiple Ingress resources, including resources from different namespaces?

Thx
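
To make the question concrete, a minimal sketch of the scenario (all names hypothetical): two Ingresses in different namespaces both claim the same host. ingress-nginx, for one, merges such rules into a single server block, but conflict resolution is controller-specific.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-a
  namespace: team-a
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /a
        backend:
          serviceName: svc-a
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-b
  namespace: team-b
spec:
  rules:
  - host: shop.example.com        # same host, different namespace
    http:
      paths:
      - path: /b
        backend:
          serviceName: svc-b
          servicePort: 80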

Consider formalizing generic controller interface over grpc

This is the direction we took with container runtimes (https://github.com/kubernetes/kubernetes/blob/master/docs/devel/container-runtime-interface.md). The steps to writing a backend would boil down to:

  • produce a container that understands grpc over eg: unix socket
  • run a single pod with the generic_controller and this container using the shared socket
  • pay attention to api breaking changes between versioned releases

If anyone is interested in a prototype, check out how we do this in the kubelet: https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/api/v1alpha1

Today, custom backends need to be compiled in, which means changes to the upstream generic_controller interface can silently break backends unless the maintainer of the backend forks the generic_controller, or checks the backend into the main repo.

Documentation: setting up ingress on metal with the right, secure CNI configuration

Based on a conversation with @Beeps on slack.

the BIGGEST problem with ingress is that the common assumption is "you will use Google Cloud or an AWS ELB". There is exactly zero information on how to use ingress on metal. What is a production configuration? Do the ingress controllers sit on another node, etc.? This is EXTREMELY confusing, because if you do L4 routing there is a dependence on configuring CNI/Flannel/Calico in the right way, especially wrt security. Weave gets even more confusing.

I think the very basic configuration I'm looking for is for ingress to act as a pipe and pass everything through to an nginx pod inside. There are challenges in setting up the ingress to do this correctly and in maintaining information like the actual source IP. We are really blocking on this stuff. Most of the docs around ingress delve into the more complex aspects of how to set up the annotations correctly, etc., but miss out on just creating a pipe.

Since most of us have existing infrastructure that we are porting to K8s, I believe this is the best first example.

Ingress only exposes internal node ip

I tried installing this version of the ingress controller today and noticed that the Address listed was

$ kubectl describe ing monitoring
Name:            monitoring
Namespace:       default
Address:         10.142.0.4
Default backend: default-http-backend:80 (10.0.0.21:8080)

Which is the internal node IP.

I made the following change, which resolves the issue for me, but I'm not sure it's the correct fix

diff --git a/core/pkg/ingress/status/status.go b/core/pkg/ingress/status/status.go
index 77b05d8..3a00aa7 100644
--- a/core/pkg/ingress/status/status.go
+++ b/core/pkg/ingress/status/status.go
@@ -206,12 +206,10 @@ func (s *statusSync) getRunningIPs() ([]string, error) {

        ips := []string{}
        for _, pod := range pods.Items {
-               ipAddr := k8s.GetNodeIP(s.Client, pod.Spec.NodeName)
-               if !strings.StringInSlice(ipAddr, ips) {
-                       ips = append(ips, ipAddr)
+               if !strings.StringInSlice(pod.Status.HostIP, ips) {
+                       ips = append(ips, pod.Status.HostIP)
                }
        }
-
        return ips, nil
 }

edit: this diff should be reversed, I believe. I accidentally switched the branch..master order.

Audit backend interface for cross platform compatibility

I haven't had the time to prototype enough loadbalancer backends (eg: envoy, h2o, cloudprovider) with the generic_controller's interface. Currently it's pretty minimalistic, and captured in this commit: f908fb9

Tl;dr, it's an interface:

type Controller interface {
	Test(file string) *exec.Cmd
	OnUpdate(*api.ConfigMap, Configuration) ([]byte, error)
	BackendDefaults() defaults.Backend
	Info() *BackendInfo
}

and the Configuration type

type Configuration struct {
	HealthzURL         string
	Backends           []*Backend
	Servers            []*Server
	TCPEndpoints       []*Location
	UDPEndpoints       []*Location
	PassthroughBackend []*SSLPassthroughBackend
}

We need to attack this from 2 angles:

  1. Are the methods in this interface enough to give the backends the flexibility they need?
  2. Is the information passed through OnUpdate adequate?

Right now, every OnUpdate call effectively gets:

Configuration
  Server 
    virtual server -> location -> Backends ([]endpoints + ssl) -> Endpoints (address:port, health check info)
                ...more locations 
    ...more virtual servers
  ...more Servers

TestFileWatcher fails with a data race

$ make test
[...]
=== RUN   TestFileWatcher
==================
WARNING: DATA RACE
Write at 0x00c420012648 by goroutine 8:
  k8s.io/ingress/core/pkg/watch.TestFileWatcher.func1()
      /home/porridge/Desktop/coding/go/src/k8s.io/ingress/core/pkg/watch/file_watcher_test.go:32 +0x77
  k8s.io/ingress/core/pkg/watch.(*FileWatcher).watch.func1()
      /home/porridge/Desktop/coding/go/src/k8s.io/ingress/core/pkg/watch/file_watcher.go:65 +0x1b8

Previous read at 0x00c420012648 by goroutine 6:
  k8s.io/ingress/core/pkg/watch.TestFileWatcher()
      /home/porridge/Desktop/coding/go/src/k8s.io/ingress/core/pkg/watch/file_watcher_test.go:41 +0x297
  testing.tRunner()
      /home/porridge/.gvm/gos/go1.7.3/src/testing/testing.go:610 +0xc9

Goroutine 8 (running) created at:
  k8s.io/ingress/core/pkg/watch.(*FileWatcher).watch()
      /home/porridge/Desktop/coding/go/src/k8s.io/ingress/core/pkg/watch/file_watcher.go:73 +0x13c
  k8s.io/ingress/core/pkg/watch.NewFileWatcher()
      /home/porridge/Desktop/coding/go/src/k8s.io/ingress/core/pkg/watch/file_watcher.go:40 +0x9b
  k8s.io/ingress/core/pkg/watch.TestFileWatcher()
      /home/porridge/Desktop/coding/go/src/k8s.io/ingress/core/pkg/watch/file_watcher_test.go:36 +0x20b
  testing.tRunner()
      /home/porridge/.gvm/gos/go1.7.3/src/testing/testing.go:610 +0xc9

Goroutine 6 (running) created at:
  testing.(*T).Run()
      /home/porridge/.gvm/gos/go1.7.3/src/testing/testing.go:646 +0x52f
  testing.RunTests.func1()
      /home/porridge/.gvm/gos/go1.7.3/src/testing/testing.go:793 +0xb9
  testing.tRunner()
      /home/porridge/.gvm/gos/go1.7.3/src/testing/testing.go:610 +0xc9
  testing.RunTests()
      /home/porridge/.gvm/gos/go1.7.3/src/testing/testing.go:799 +0x4ba
  testing.(*M).Run()
      /home/porridge/.gvm/gos/go1.7.3/src/testing/testing.go:743 +0x12f
  main.main()
      k8s.io/ingress/core/pkg/watch/_test/_testmain.go:54 +0x1b8
==================
--- PASS: TestFileWatcher (0.04s)
PASS
Found 1 data race(s)
FAIL	k8s.io/ingress/core/pkg/watch	1.137s

Surely I must be doing something wrong, but filing this to track, just in case.

Incorrect X-Forwarded-Port for TLS servers

It appears that the X-Forwarded-Port header is set incorrectly when ingress is configured for TLS resources: it looks like it ends up with a value of 442 as a result of the SSL server configuration.

This has the effect of breaking some applications that use the X-Forwarded-* headers to generate URLs.

Invalid port in upstream

I ran into an issue running the master branch: nginx-ingress mapped an invalid port, "0". I have 2 ports (22/80) exposed in this pod, and port 22 was mapped incorrectly.

nginx.conf

    upstream gitlab-gitlab-80 {
        least_conn;
        server 10.1.5.21:0 max_fails=0 fail_timeout=0;
        server 10.1.5.21:80 max_fails=0 fail_timeout=0;
    }

Cmd

 ./nginx-ingress-controller --default-backend-service=kube-system/new-default-http-backend --tcp-services-configmap=kube-system/tcp-configmap --configmap=kube-system/nginx-load-balancer-conf

Pod description

Name:		gitlab-***
Namespace:	***
Node:		***
Start Time:	Tue, 06 Dec 2016 11:36:11 +0100
Labels:		component=***
		k8s-app=***
		pod-template-hash=***
Status:		Running
IP:		10.1.5.21
Controllers:	ReplicaSet/gitlab-***
Containers:
  gitlab:
    Container ID:	docker://***
    Image:		quay.io/***
    Image ID:		docker://***
    Ports:		80/TCP, 22/TCP
....

Logs

I1212 13:59:35.924606    1132 template.go:95] NGINX configuration: {"BacklogSize":511,"Backends":[
....
{"name":"gitlab-gitlab-80","secure":false,"endpoints":[
{"address":"10.1.5.21","port":"0","maxFails":0,"failTimeout":0},
{"address":"10.1.5.21","port":"80","maxFails":0,"failTimeout":0}]}] 
...
Error: exit status 1
2016/12/12 13:59:35 [emerg] 1153#1153: invalid port in upstream "10.1.5.21:0" in /tmp/nginx-cfg141892828:265
nginx: [emerg] invalid port in upstream "10.1.5.21:0" in /tmp/nginx-cfg141892828:265
nginx: configuration file /tmp/nginx-cfg141892828 test failed

Clarify port clash failure mode

If an ingress controller can't start because of a port conflict (eg: #54 (comment)), it should throw an event and log it. This is not an issue if the controller doesn't need a host port (100% cloud-provider).

No resolver when using external auth

Using the following annotation:

  annotations:
    ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd

error log:

2016/12/11 05:06:13 [error] 250#250: *40 no resolver defined to resolve httpbin.org, client: 127.0.0.1, server: prometheus.acme.co, request: "GET / HTTP/2.0", subrequest: "/_external-auth-Lw", host: "prometheus.acme.co"
2016/12/11 05:06:13 [error] 250#250: *40 auth request unexpected status: 502 while sending to client, client: 127.0.0.1, server: prometheus.acme.co, request: "GET / HTTP/2.0", host: "prometheus.acme.co"
127.0.0.1 - [127.0.0.1] - - [11/Dec/2016:05:06:13 +0000] "GET / HTTP/2.0" 502 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36" 0 0.000 [default-echoheaders-y-80] - - - -
127.0.0.1 - [127.0.0.1] - - [11/Dec/2016:05:06:13 +0000] "GET / HTTP/2.0" 500 706 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36" 11 0.000 [default-echoheaders-y-80] - - - -
2016/12/11 05:06:13 [error] 250#250: *40 no resolver defined to resolve httpbin.org, client: 127.0.0.1, server: prometheus.acme.co, request: "GET /favicon.ico HTTP/2.0", subrequest: "/_external-auth-Lw", host: "prometheus.acme.co", referrer: "https://prometheus.acme.co/"
2016/12/11 05:06:13 [error] 250#250: *40 auth request unexpected status: 502 while sending to client, client: 127.0.0.1, server: prometheus.acme.co, request: "GET /favicon.ico HTTP/2.0", host: "prometheus.acme.co", referrer: "https://prometheus.acme.co/"
127.0.0.1 - [127.0.0.1] - - [11/Dec/2016:05:06:13 +0000] "GET /favicon.ico HTTP/2.0" 502 0 "https://prometheus.acme.co/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36" 0 0.000 [default-echoheaders-y-80] - - - -
127.0.0.1 - [127.0.0.1] - - [11/Dec/2016:05:06:13 +0000] "GET /favicon.ico HTTP/2.0" 500 706 "https://prometheus.acme.co/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36" 27 0.000 [default-echoheaders-y-80] - - - -

Adding flags from specific backend implementation

Hey,

we are implementing our own version of the ingress controller right now. We would like to add some additional flags, but we can't find any way to accomplish this without touching the generic code.

We added the SetFlags function to the generic launch.go

func SetFlags(flags *pflag.FlagSet) {
	defaultSvc = flags.String("default-backend-service", "",
		`Service used to serve a 404 page for the default backend. Takes the form
    namespace/name. The controller uses the first node port of this Service for
    the default backend.`)

	ingressClass = flags.String("ingress-class", "",
		`Name of the ingress class to route through this controller.`)

	configMap = flags.String("configmap", "",
		`Name of the ConfigMap that contains the custom configuration to use`)

	publishSvc = flags.String("publish-service", "",
		`Service fronting the ingress controllers. Takes the form
	 namespace/name. The controller will set the endpoint records on the
	 ingress objects to reflect those on the service.`)

	tcpConfigMapName = flags.String("tcp-services-configmap", "",
		`Name of the ConfigMap that contains the definition of the TCP services to expose.
	The key in the map indicates the external port to be used. The value is the name of the
	service with the format namespace/serviceName and the port of the service could be a
	number of the name of the port.
	The ports 80 and 443 are not allowed as external ports. This ports are reserved for the backend`)

	udpConfigMapName = flags.String("udp-services-configmap", "",
		`Name of the ConfigMap that contains the definition of the UDP services to expose.
	The key in the map indicates the external port to be used. The value is the name of the
	service with the format namespace/serviceName and the port of the service could be a
	number of the name of the port.`)

	resyncPeriod = flags.Duration("sync-period", 60*time.Second,
		`Relist and confirm cloud resources this often.`)

	watchNamespace = flags.String("watch-namespace", api.NamespaceAll,
		`Namespace to watch for Ingress. Default is to watch all namespaces`)

	healthzPort = flags.Int("healthz-port", 10254, "port for healthz endpoint.")

	profiling = flags.Bool("profiling", true, `Enable profiling via web interface host:port/debug/pprof/`)

	defSSLCertificate = flags.String("default-ssl-certificate", "", `Name of the secret
		that contains a SSL certificate to be used as default for a HTTPS catch-all server`)

	defHealthzURL = flags.String("health-check-path", "/healthz", `Defines
		the URL to be used as health check inside in the default server in NGINX.`)

	k8sClientConfig = kubectl_util.DefaultClientConfig(flags)
}

and call it in our main.go

var (
	flags *pflag.FlagSet
	reloadCommand *string
	webhookEndpoint *string
	templateFile *string
	outputFile *string
)



func init()  {
	flags = pflag.NewFlagSet("", pflag.ExitOnError)
	reloadCommand = flags.String("ReloadCommand", "nginx -s reload", "Command that will be triggered whenever the nginx configuration changes")
	webhookEndpoint = flags.String("WebhookEndpoint", "http://127.0.0.1:8888/set", "Webhook that will be notified whenever new endpoints are added")
	templateFile = flags.String("TemplateFile", "/etc/nginx/nginx.tmpl", "Template to be rendered")
	outputFile = flags.String("OutputFile", "/etc/nginx/nginx.conf", "Location of rendered template")

	controller.SetFlags(flags)

	flags.AddGoFlagSet(flag.CommandLine)
	flags.Parse(os.Args)
}

Do you know of any way to do this in a more elegant fashion? We tried a lot of things, but each attempt either wipes the generic flags or all of them.

Any ideas?

Best regards

Nginx Ingress Controller [0.8.3] stock custom template parse error

I've tried to use the original nginx.tmpl as a custom template. I got this error:

main.go:71] invalid NGINX template: template: nginx.tmpl:210: function "buildAuthLocation" not defined

ConfigMap created with:

kubectl create configmap nginx-template --from-file=nginx.tmpl=./nginx.tmpl

rc.yaml

...
      volumes:
      - name: nginx-template-volume
        configMap:
          name: nginx-template
          items:
          - key: nginx.tmpl
            path: nginx.tmpl          

...
        volumeMounts:
        - mountPath: /etc/nginx/template
          name: nginx-template-volume
          readOnly: true          

Ingress creates wrong firewall rule after `default-http-backend` service was clobbered.

From kubernetes/kubernetes#36546.

During development, I found that the ingress firewall test failed instantly on my cluster. It turned out that, for some reason, the default-http-backend service had been deleted and recreated, and it was then allocated a different nodePort. Because the ingress controller records this nodePort at startup but never refreshes it (ref1 and ref2), it always creates the firewall rule with the stale nodePort afterwards.

@bprashanth

Ingress controllers should handle edge cases uniformly x-platform

From @aledbf on April 13, 2016 18:26

Ideas:

  • root URL (mapping to a different url in the upstream)
    • global per controller --root-url
    • per Ingress rule using an annotation root-url. This means the ingress only contains one rule
  • upstreams to external resources (like /google proxy to http://www.google.cl)
kind: Ingress
metadata:
  name: external-ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: external:www.google.cl
          servicePort: 80
        path: /google
  • rules that overlap (first rule wins, with a warning event) 766
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
demo      -                       172.17.4.99
          foo.bar.com
          /             echoheaders-x:80
foo-tls   -                       172.17.4.99
          foo.bar.com
          /             echoheaders-x:80
          bar.baz.com
          /             echoheaders-y:80

NGINX controller e2e tests:

  • no default backend
  • invalid default backend
  • no ingress
  • single ingress:
    • rule
      • invalid service
      • invalid port
      • no endpoints
      • no host (it should use the _ nginx server)
      • without path
  • multiple ingress:
    • rule
      • invalid service
      • invalid port
      • no endpoints
      • with same path (error or just a warning?)
      • without path
  • TLS rules:
    • invalid certs
    • invalid hosts (fails verification)
    • invalid service
    • invalid port
    • no endpoints
    • no host (it should use the _ nginx server)
  • tcp configmap
    • invalid service
    • invalid port
    • no endpoints
  • custom nginx configmap
    • change timeouts
    • invalid fields

Copied from original issue: kubernetes-retired/contrib#775

Point gce ingress health checks at the node for onlylocal services

We now have a new beta annotation on Services, external-traffic (http://kubernetes.io/docs/user-guide/load-balancer/#loss-of-client-source-ip-for-external-traffic). With this annotation set to OnlyLocal, NodePort Services only proxy to local endpoints. If there are no local endpoints, iptables is configured to drop packets. Currently, sticking an OnlyLocal Service behind an Ingress works, but does so in a suboptimal way.

The issue is that, currently, the best way to configure lb health checks is to set a high failure threshold so we detect nodes with bad networking but don't flake on bad endpoints. With this approach, if all endpoints evacuate a node, it'll take eg: 10 health checks * 10 seconds per health check = 100 seconds (~1.7 minutes) to mark that node unhealthy, but the node will start DROPing packets for the NodePort immediately. If we pointed the lb health check at the healthcheck-nodeport (a nodePort that's managed by kube-proxy), it would fail in < 10s even with the high thresholds described above.
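
For later readers: the external-traffic annotation graduated to the first-class externalTrafficPolicy field, and the health-check nodePort described above is exposed as spec.healthCheckNodePort. A sketch of the modern equivalent (names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # formerly the external-traffic: OnlyLocal annotation
  # Kubernetes allocates spec.healthCheckNodePort automatically for Local policy;
  # the cloud LB health check should target it so nodes without local endpoints
  # fail fast instead of silently DROPing packets.
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80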

@thockin

make test-e2e fails on push

It does not seem right that only privileged people should be able to run the e2e test.

make -C controllers/nginx container
make[2]: Entering directory `/home/porridge/Pulpit/coding/go/src/k8s.io/ingress/controllers/nginx'
docker build -t gcr.io/google_containers/nginx-ingress-controller:0.8.4 rootfs
Sending build context to Docker daemon 27.92 MB
Step 1 : FROM gcr.io/google_containers/nginx-slim:0.11
 ---> 1f831b9bc034
Step 2 : RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y   diffutils   ssl-cert   --no-install-recommends   && rm -rf /var/lib/apt/lists/*   && make-ssl-cert generate-default-snakeoil --force-overwrite
 ---> Using cache
 ---> d503fdd37ab3
Step 3 : COPY . /
 ---> e8f230d9d3a0
Removing intermediate container 19fa9207856d
Step 4 : RUN curl -sSL -o /sbin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64 &&   chmod +x /sbin/dumb-init
 ---> Running in ea440bcbb2a1
 ---> e5adf07224f9
Removing intermediate container ea440bcbb2a1
Step 5 : ENTRYPOINT /sbin/dumb-init --
 ---> Running in bdc2f45c6af7
 ---> 6c2c67728394
Removing intermediate container bdc2f45c6af7
Step 6 : CMD /nginx-ingress-controller
 ---> Running in 352d60740191
 ---> 3968d1630603
Removing intermediate container 352d60740191
Successfully built 3968d1630603
make[2]: Leaving directory `/home/porridge/Pulpit/coding/go/src/k8s.io/ingress/controllers/nginx'
make -C controllers/nginx push
make[2]: Entering directory `/home/porridge/Pulpit/coding/go/src/k8s.io/ingress/controllers/nginx'
gcloud docker push gcr.io/google_containers/nginx-ingress-controller:0.8.4
WARNING: The '--' argument must be specified between gcloud specific args on the left and DOCKER_ARGS on the right. IMPORTANT: previously, commands allowed the omission of the --, and unparsed arguments were treated as implementation args. This usage is being deprecated and will be removed in March 2017.
This will be strictly enforced in March 2017. Use 'gcloud beta docker' to see new behavior.
Using 'push gcr.io/google_containers/nginx-ingress-controller:0.8.4' for DOCKER_ARGS.
The push refers to a repository [gcr.io/google_containers/nginx-ingress-controller]
c3a8219e9c1c: Pushing [==================================================>] 48.64 kB
cc537ee68cae: Pushing [==================================================>] 27.89 MB/27.89 MB
5911c4a66511: Pushing [==================================================>] 1.036 MB
6d33e15f21fc: Mounted from google_containers/nginx-slim 
1d2899dd3441: Mounted from google_containers/nginx-slim 
161ae3947447: Waiting 
36e0302572e7: Waiting 
5f70bf18a086: Waiting 
85f69a0fae58: Waiting 
denied: Access denied.
make[2]: *** [push] Error 1
make[2]: Leaving directory `/home/porridge/Pulpit/coding/go/src/k8s.io/ingress/controllers/nginx'
make[1]: *** [docker-push] Error 2
make[1]: Leaving directory `/home/porridge/Pulpit/coding/go/src/k8s.io/ingress'
2016/12/22 09:53:12 e2e.go:278: Step 'build-release' finished in 1m7.73947222s
2016/12/22 09:53:12 e2e.go:137: Something went wrong: error building: error building: error running build-release: exit status 2
exit status 1
make: *** [test-e2e] Error 1
porridge@beczulka:~/Desktop/coding/go/src/k8s.io/ingress$ 
