
kong-dist-kubernetes's Introduction

DEPRECATED

This repository has been deprecated. Please use the Kong for Kubernetes documentation for installing and configuring Kong on Kubernetes.

Website Documentation Discussion

Kong or Kong Enterprise can easily be provisioned on a Kubernetes cluster - see Kong on Kubernetes for all the details.

Important Note

When deploying into a Kubernetes cluster with Deployment Manager, be aware that deleting ReplicationController Kubernetes objects does not delete their underlying pods; it is your responsibility to manage the destruction of these resources when deleting or updating a ReplicationController in your configuration.
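
For example, a minimal cleanup sketch (the kong-rc name and the app=kong label are assumptions based on the manifests in this repository; adjust them to your configuration):

# delete the ReplicationController itself (Deployment Manager may leave its pods behind)
kubectl delete rc kong-rc
# then remove any orphaned pods by label
kubectl delete pods -l app=kong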

Kong Enterprise

Kong Enterprise is our powerful offering for larger organizations in need of security, monitoring, compliance, developer onboarding, higher performance, granular access and a dashboard to manage Kong easily. Learn more at https://konghq.com/kong-enterprise/.

Usage

Assuming you have access to a Kubernetes cluster via kubectl:

make run_<postgres|cassandra|dbless>

Expose the Admin API

kubectl port-forward -n kong svc/kong-control-plane 8001:8001 &
curl localhost:8001

Access the proxy

export HOST=$(kubectl get nodes --namespace default -o jsonpath='{.items[0].status.addresses[0].address}')
export PROXY_PORT=$(kubectl get svc --namespace kong kong-ingress-data-plane -o jsonpath='{.spec.ports[0].nodePort}')
curl $HOST:$PROXY_PORT

If using dbless/declarative mode, the declarative.yaml file is mounted as a ConfigMap onto the Kong containers. We use the md5sum of the declarative.yaml file to update the deployment, but per the Kubernetes issue "Facilitate ConfigMap rollouts / management", for production setups it may be best to use Helm, Kustomize, or Reloader.
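
As an illustration of the md5sum approach (the ConfigMap and Deployment names below are assumptions, not necessarily the exact names used by the Makefile): re-create the ConfigMap from the file, then stamp the pod template with the file's checksum so a changed file triggers a rollout.

kubectl -n kong create configmap kong-declarative --from-file=declarative.yaml \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl -n kong patch deployment kong-ingress-data-plane --patch \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/declarative\":\"$(md5sum declarative.yaml | cut -d' ' -f1)\"}}}}}"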

Cleanup

make cleanup

kong-dist-kubernetes's People

Contributors

aaronhmiller, coopr, dhudsmith, fgribreau, hbagdi, hutchic, ipedrazas, jammm, lawrencegripper, mptap, pabloguerrero, rainest, shashiranjan84, shrey-rajvanshi, stackedsax, subnetmarco, thibaultcha, yousafsyed


kong-dist-kubernetes's Issues

postgres namespace is default, should be kong

Installation failing with ingress-data-plane pod throwing this error:

init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:355: [PostgreSQL error] failed to retrieve server_version_num: host or service not provided, or not known
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/kong/init.lua:355: in function 'init'
init_by_lua:3: in main chunk
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:355: [PostgreSQL error] failed to retrieve server_version_num: host or service not provided, or not known
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/kong/init.lua:355: in function 'init'
init_by_lua:3: in main chunk

PG_HOST assumes Postgres is set up in the kong namespace.
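
If Postgres really was created in the default namespace, one hedged fix is to point Kong at the fully qualified service name rather than the namespace-local short name (placement of the env var below is illustrative):

env:
  - name: KONG_PG_HOST
    value: postgres.default.svc.cluster.local  # or redeploy Postgres into the kong namespace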

Crash on kong >v0.9.1

It works with image kong:0.9.0, but the kong:0.9.1 image cannot resolve kong-database at runtime. I've checked with nslookup and telnet, and kong-database resolves properly.

2016/11/23 08:39:20 [error] 107#0: [lua] cluster.lua:41: [postgres error] kong-database could not be resolved (2: Server failure), context: ngx.timer
2016/11/23 08:39:23 [error] 107#0: [lua] cluster.lua:41: [postgres error] kong-database could not be resolved (2: Server failure), context: ngx.timer
2016/11/23 08:39:26 [error] 107#0: [lua] postgres_db.lua:41: failed to cleanup TTLs: kong-database could not be resolved (2: Server failure), context: ngx.timer
2016/11/23 08:39:26 [error] 107#0: [lua] cluster.lua:82: [postgres error] kong-database could not be resolved (2: Server failure), context: ngx.timer
2016/11/23 08:39:26 [error] 108#0: [lua] postgres_db.lua:41: failed to cleanup TTLs: kong-database could not be resolved (2: Server failure), context: ngx.timer
2016/11/23 08:39:26 [error] 107#0: [lua] cluster.lua:41: [postgres error] kong-database could not be resolved (2: Server failure), context: ngx.timer

External Postgres using cloud proxy on GCE fails (could not retrieve current migrations)

I set up a Cloud SQL instance of Postgres 9.6 and added the gcr.io/cloudsql-docker/gce-proxy:1.11 proxy and the necessary configs. The kong-migration Job fails with the following errors, and no matter what I try, I get the same result.

Error

[postgres error] could not retrieve current migrations: [postgres error] host or service not provided, or not known

Expected result

migrations to work (they work locally with docker images and run using --link)

Details


Container | Timestamp | Message
-- | -- | --
kong-migration | 2017-11-29T19:52:11.784059105Z | init_worker_by_lua:46: in function <init_worker_by_lua:44>
kong-migration | 2017-11-29T19:52:11.784056965Z | [C]: in function 'xpcall'
kong-migration | 2017-11-29T19:52:11.784054644Z | init_worker_by_lua:39: in function <init_worker_by_lua:37>
kong-migration | 2017-11-29T19:52:11.784052611Z | /usr/local/bin/kong:7: in function 'file_gen'
kong-migration | 2017-11-29T19:52:11.784050178Z | /usr/local/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:45>
kong-migration | 2017-11-29T19:52:11.784048123Z | [C]: in function 'xpcall'
kong-migration | 2017-11-29T19:52:11.784045495Z | /usr/local/share/lua/5.1/kong/cmd/init.lua:88: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:88>
kong-migration | 2017-11-29T19:52:11.784043192Z | /usr/local/share/lua/5.1/kong/cmd/migrations.lua:34: in function 'cmd_exec'
kong-migration | 2017-11-29T19:52:11.784040521Z | [C]: in function 'assert'
kong-migration | 2017-11-29T19:52:11.784038605Z | stack traceback:
kong-migration | 2017-11-29T19:52:11.784036043Z | /usr/local/share/lua/5.1/kong/cmd/migrations.lua:34: [postgres error] could not retrieve current migrations: [postgres error] host or service not provided, or not known
kong-migration | 2017-11-29T19:52:11.784028195Z | Error:
kong-migration | 2017-11-29T19:52:11.784020572Z | 2017/11/29 19:52:11 [verbose] running datastore migrations
kong-migration | 2017-11-29T19:52:11.784018375Z | 2017/11/29 19:52:11 [verbose] prefix in use: /usr/local/kong
kong-migration | 2017-11-29T19:52:11.784016257Z | 2017/11/29 19:52:11 [verbose] no config file, skipping loading
kong-migration | 2017-11-29T19:52:11.784013505Z | 2017/11/29 19:52:11 [verbose] no config file found at /etc/kong.conf
kong-migration | 2017-11-29T19:52:11.784000226Z | 2017/11/29 19:52:11 [verbose] no config file found at /etc/kong/kong.conf
kong-migration | 2017-11-29T19:52:11.783955120Z | 2017/11/29 19:52:11 [verbose] Kong: 0.11.1
cloudsql-proxy | 2017-11-29T19:49:13.630048863Z | 2017/11/29 19:49:13 Ready for new connections
cloudsql-proxy | 2017-11-29T19:49:13.630045428Z | 2017/11/29 19:49:13 Listening on 127.0.0.1:5432 for my-project:us-central1:my-instance-id
cloudsql-proxy | 2017-11-29T19:49:13.630015451Z | 2017/11/29 19:49:13 using credential file for authentication; [email protected]

The configuration file with the additional sidecar image is as follows. The Cloud SQL proxy runs fine, but the Kong image crashes every time. I thought at first it perhaps needed time for the proxy to connect, so I added restartPolicy: OnFailure, but that doesn't solve it. For some reason the image doesn't recognize the env configs and cannot find the database, or are the errors telling me something else?

apiVersion: batch/v1
kind: Job
metadata:
  name: kong-migration
spec:
  template:
    metadata:
      name: kong-migration
    spec:
      containers:
      - name: kong-migration
        image: kong
        env:
          - name: KONG_NGINX_DAEMON
            value: 'off'
          - name: KONG_DATABASE
            value: "postgres"
          - name: KONG_PG_HOST
            value: "127.0.0.1:5432"
          - name: KONG_PG_PASSWORD
            valueFrom:
              secretKeyRef:
                name: my-secret-credentials
                key: password
        command: [ "/bin/sh", "-c", "kong migrations up -v" ]
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=my-project:us-central1:my-instance-id=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
          - name: my-instance-credentials
            mountPath: /secrets/cloudsql
            readOnly: true
          - name: ssl-certs
            mountPath: /etc/ssl/certs
          - name: cloudsql
            mountPath: /cloudsql
      volumes:
        - name: my-instance-credentials
          secret:
            secretName: my-instance-credentials
        - name: cloudsql
          emptyDir:
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
      restartPolicy: OnFailure
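
One possible cause, offered as an assumption rather than a confirmed fix: Kong reads the Postgres host and port from separate settings (KONG_PG_HOST and KONG_PG_PORT), so a combined "127.0.0.1:5432" value may not be treated as a valid hostname. A sketch of the env block with the port split out:

        env:
          - name: KONG_PG_HOST
            value: "127.0.0.1"
          - name: KONG_PG_PORT   # Kong expects the port in its own setting
            value: "5432"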


[Ingress] Service configuration with Ingress 404

When I deployed Kong and only exposed NodePort services for 8000 and 8001, routing traffic via an nginx Ingress, access to the gateway failed.

Version: kong/0.13.1

Services

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
  labels:
    app: kong
spec:
  selector:
    app: kong
  ports:
  - name: kong-proxy
    port: 8000
    targetPort: 8000

Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-proxy
  namespace: kong
  labels:
    app: kong
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Host: mobile-dev.dzxwapp.com";
spec:
  rules:
  - host: gateway.dzxwapp.com
    http:
      paths:
      - backend:
          serviceName: kong-proxy
          servicePort: 8000
  tls:
  - hosts:
      - gateway.dzxwapp.com
    secretName: gateway.dzxwapp.com-tls

Failed request

curl -X "GET" "https://gateway.dzxwapp.com/v1/status" \
     -H 'Host: mobile-dev.dzxwapp.com' 

HTTP/1.1 404 Not Found
Server: nginx/1.13.9
Date: Wed, 02 May 2018 08:34:19 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 21
Connection: close
Strict-Transport-Security: max-age=15724800; includeSubDomains;

default backend - 404

Successful request

curl -X "GET" "http://kong-proxy.kong.svc.cluster.local:8000/v1/status" \
     -H 'Host: mobile-dev.dzxwapp.com' 

HTTP/1.1 403 Forbidden
Date: Wed, 02 May 2018 08:43:28 GMT
Content-Type: application/json; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: kong/0.13.1

{"message":"Invalid authentication credentials"}

Very long readinessProbe

Hi Shashi, thank you for creating the kong chart.
I was wondering, why is the readinessProbe initial delay set to 300 seconds (5 minutes)?
Is there a reason that it needs to be so long?
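
For comparison, a probe with a much shorter initial delay might look like the sketch below; the /status path on the Admin API port and the timing values are assumptions to be tuned against how long Kong and its datastore actually need to come up:

readinessProbe:
  httpGet:
    path: /status
    port: 8001
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5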

Deploy Kong with HTTP Load Balancer on Google Cloud

How do I deploy Kong with an HTTP Load Balancer on GKE?
I am trying to set up Kong on GKE with HTTP Load Balancing. GKE allows creating an HTTP Load Balancer with an Ingress resource. It's documented here:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer

The problem is that the HTTP Load Balancer sets up a default health check rule which expects an HTTP 200 status for GET requests on the / path. Kong doesn't provide that out of the box. Does Kong provide any other health check endpoint that I can use with the GKE Ingress?
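
One workaround, offered as a sketch rather than an official answer: expose a dedicated health route on the proxy that always returns 200 via the request-termination plugin, then point the GKE health check at that path instead of /. The Admin API calls below assume Kong 1.x; the healthz service and route names are illustrative.

curl -s -X POST http://localhost:8001/services \
  --data name=healthz --data url=http://localhost:8001
curl -s -X POST http://localhost:8001/services/healthz/routes \
  --data 'paths[]=/healthz'
curl -s -X POST http://localhost:8001/services/healthz/plugins \
  --data name=request-termination \
  --data config.status_code=200 --data config.message=OK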

Backup/restore cluster

Hey!

Thank you for the repo, it helped me to get started fairly quickly.
I don't have that much experience with Kubernetes, so maybe my question has at least one obvious answer that I can't see for now.

What strategy would you use, when running Kubernetes on Google Container Engine, for backing up Cassandra data?

Cheers.
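
Not an authoritative answer, but one common approach is to snapshot the keyspace inside the pod and copy the files off the cluster; a minimal sketch, assuming the StatefulSet pod is cassandra-0, the keyspace is kong, and the data directory is /cassandra_data (check your cassandra.yaml for the real path):

kubectl exec cassandra-0 -- nodetool snapshot -t kong-backup kong
kubectl cp cassandra-0:/cassandra_data/data/kong ./kong-backup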

Postgres vs Cassandra

I noticed Cassandra is deployed as a StatefulSet while Postgres is deployed as a ReplicationController. Can someone explain the difference? Also, can anyone suggest how to choose between the two?

Error when setting up kong on minikube {"message":"no route and no API found with those values"}

I am currently trying to set up Kong on a minikube Kubernetes cluster, and I have followed the steps from https://github.com/Kong/kong-dist-kubernetes/blob/master/minikube/README.md

However, when I run the command curl $(minikube service --url kong-proxy|head -n1), I get the response {"message":"no route and no API found with those values"}

I am unsure why this is happening, as all the pods are running and when I curl kong-admin I get the correct response.
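
For what it's worth, that message only means the gateway is running but has no APIs or routes configured yet, so Kong cannot match the request to anything. Registering one through the Admin API should make the proxy respond; a sketch, assuming a Kong version with the services/routes Admin API (0.13 or later):

ADMIN=$(minikube service --url kong-admin | head -n1)
curl -i -X POST $ADMIN/services --data name=example-service --data url=http://mockbin.org
curl -i -X POST $ADMIN/services/example-service/routes --data 'paths[]=/example'
curl -i $(minikube service --url kong-proxy | head -n1)/example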

unsuccessful deployment at my kubernetes cluster if kong dns_resolver is consul

Image: kong:0.14.0-centos
My deployment file:

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  ports:
  - name: kong-proxy
    port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: kong

---
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy-ssl
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  ports:
  - name: kong-proxy-ssl
    port: 8443
    targetPort: 8443
    protocol: TCP
  selector:
    app: kong

---
apiVersion: v1
kind: Service
metadata:
  name: kong-admin
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  ports:
  - name: kong-admin
    port: 8001
    targetPort: 8001
    protocol: TCP
  selector:
    app: kong

---
apiVersion: v1
kind: Service
metadata:
  name: kong-admin-ssl
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 0.0.0.0/0
  ports:
  - name: kong-admin-ssl
    port: 8444
    targetPort: 8444
    protocol: TCP
  selector:
    app: kong

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kong-rc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kong-rc
        app: kong
    spec:
      containers:
        - name: consul-client
          image: "consul:1.2.2"
          args:
            - "agent"
            - "-advertise=$(PODIP)"
            - "-bind=0.0.0.0"
            - "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
            - "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
            - "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
            - "-client=0.0.0.0"
            - "-datacenter=dc1"
            - "-data-dir=/consul/data"
            - "-domain=cluster.local"
          env:
            - name: PODIP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - consul leave
          resources:
            limits:
              cpu: "200m"
              memory: 512Mi
            requests:
              cpu: "100m"
              memory: 256Mi
          ports:
            - containerPort: 8500
              name: ui-port
            - containerPort: 8400
              name: alt-port
            - containerPort: 53
              name: udp-port
            - containerPort: 8443
              name: https-port
            - containerPort: 8080
              name: http-port
            - containerPort: 8301
              name: serflan
            - containerPort: 8302
              name: serfwan
            - containerPort: 8600
              name: consuldns
            - containerPort: 8300
              name: server
        - name: kong
          image: "kong:0.14.0-centos"
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add:
              - SYS_MODULE
              - NET_ADMIN
              - SYS_ADMIN
          env:
            - name: KONG_ADMIN_LISTEN
              value: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
            - name: KONG_DATABASE
              value: postgres
            - name: KONG_PG_USER
              value: kong
            - name: KONG_PG_DATABASE
              value: kong
            - name: KONG_PG_PASSWORD
              value: kong
            - name: KONG_PG_HOST
              value: postgres
            - name: KONG_PROXY_ACCESS_LOG
              value: "/var/log/proxy_access.log"
            - name: KONG_ADMIN_ACCESS_LOG
              value: "/var/log/admin_access.log"
            - name: KONG_PROXY_ERROR_LOG
              value: "/var/log/proxy_error.log"
            - name: KONG_ADMIN_ERROR_LOG
              value: "/var/log/admin_error.log"
            - name: KONG_DNS_RESOLVER
              value: "127.0.0.1:8600"
            - name: KONG_DNSMASQ
              value: "off"
          resources:
            limits:
              cpu: "2"
              memory: 8G
            requests:
              cpu: "2"
              memory: 6G
          ports:
            - name: admin
              containerPort: 8001
              protocol: TCP
            - name: proxy
              containerPort: 8000
              protocol: TCP
            - name: proxy-ssl
              containerPort: 8443
              protocol: TCP
            - name: admin-ssl
              containerPort: 8444
              protocol: TCP

When I run:

kubectl create -f kong.yaml

It shows that the pod starts successfully, but when I log into the pod, I find errors in KONG_PROXY_ERROR_LOG:

2018/08/20 06:46:25 [notice] 1#0: start worker process 86
2018/08/20 06:46:26 [crit] 85#0: *26 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 72#0: *72 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 62#0: *104 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 82#0: *109 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 71#0: *64 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 63#0: *46 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 74#0: *116 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 68#0: *55 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 55#0: *89 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 60#0: *49 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:26 [crit] 67#0: *54 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 70#0: *66 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 64#0: *53 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 77#0: *120 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 75#0: *52 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 69#0: *71 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 81#0: *117 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:27 [crit] 57#0: *58 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:28 [crit] 65#0: *65 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:28 [crit] 84#0: *61 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:28 [crit] 66#0: *69 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:29 [crit] 86#0: *70 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:29 [crit] 58#0: *77 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:30 [crit] 61#0: *80 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:30 [crit] 56#0: *86 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:31 [crit] 79#0: *85 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:31 [crit] 78#0: *95 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer
2018/08/20 06:46:31 [crit] 83#0: *98 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
2018/08/20 06:46:31 [crit] 59#0: *90 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
2018/08/20 06:46:31 [crit] 80#0: *93 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
2018/08/20 06:46:31 [crit] 76#0: *94 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: could not acquire callback lock: timeout, context: ngx.timer
2018/08/20 06:46:31 [crit] 73#0: *99 [lua] balancer.lua:710: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 2 server failure, context: ngx.timer

Disconnection after 60s with GRPC connection

I am connecting to my gRPC microservice through Kong from an Android client. The gRPC connection works fine for 60s, after which it disconnects. I have changed read_timeout, write_timeout, and connect_timeout to 300s on the services; it has no effect. I have also changed the keepalive_timeout 60s; directive in the Kong template, which does not have any effect either. It seems there is some issue with the gRPC connection.

Kong does not detect new ingress resource

Kubernetes details

kubectl --version
Kubernetes v1.9.5

Kong Installation

helm install --name kongssl stable/kong

Ingress Resource

----
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: ns
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - foo.bar
    secretName: foo-cert 
  rules:
  - host: foo.bar 
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000 

Issue

The Ingress resource gets created successfully; however, no new service is added to Kong.

curl -k https://ADMIN_IP:ADMIN_PORT/apis
{"total":0,"data":[]}

curl -k https://ADMIN_IP:ADMIN_PORT/services
{"next":null,"data":[]}

Kong Admin port does not work

The Kong admin port and admin SSL port are listening on 127.0.0.1 instead of 0.0.0.0, so accessing the admin port through a Kubernetes LoadBalancer service is not possible; the connection is refused at the Kubernetes level.

================================================================
[root@sandbox-controller-sec kong-dist-kubernetes]# kubectl get svc --all-namespaces=true
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kong-admin LoadBalancer 10.101.32.176 8001:32601/TCP 18s
default kong-admin-ssl LoadBalancer 10.101.95.182 8444:32224/TCP 18s
default kong-proxy LoadBalancer 10.104.106.80 8000:30682/TCP 18s
default kong-proxy-ssl LoadBalancer 10.102.116.170 8443:30701/TCP 18s
default kubernetes ClusterIP 10.96.0.1 443/TCP 1d
default postgres ClusterIP 10.96.82.10 5432/TCP 4m
kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 1d
[root@sandbox-controller-sec kong-dist-kubernetes]# telnet 10.101.32.176 8001
Trying 10.101.32.176...
telnet: connect to address 10.101.32.176: Connection refused
[root@sandbox-controller-sec kong-dist-kubernetes]# telnet 10.101.32.176 8001
Trying 10.101.32.176...
telnet: connect to address 10.101.32.176: Connection refused
[root@sandbox-controller-sec kong-dist-kubernetes]# telnet 10.104.106.80 8000
Trying 10.104.106.80...
Connected to 10.104.106.80.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
[root@sandbox-controller-sec kong-dist-kubernetes]# telnet 10.101.32.176 8001
Trying 10.101.32.176...
telnet: connect to address 10.101.32.176: Connection refused
[root@sandbox-controller-sec kong-dist-kubernetes]# telnet 10.101.32.176 8001
Trying 10.101.32.176...
telnet: connect to address 10.101.32.176: Connection refused

================================================================

When Kong is started it binds the admin port to 127.0.0.1 by default:

[root@kong-rc-76b9cf6fc5-dvsv8 kong]# kong start -vvv
2018/01/18 15:00:40 [verbose] Kong: 0.12.0
2018/01/18 15:00:40 [verbose] reading config file at /opt/confg/nginx/nginx.conf
2018/01/18 15:00:40 [verbose] prefix in use: /opt/confg/kong
2018/01/18 15:00:40 [verbose] preparing nginx prefix directory at /opt/confg/kong
2018/01/18 15:00:40 [verbose] SSL enabled, no custom certificate set: using default certificate
2018/01/18 15:00:40 [verbose] default SSL certificate found at /opt/confg/kong/ssl/kong-default.crt
2018/01/18 15:00:40 [verbose] Admin SSL enabled, no custom certificate set: using default certificate
2018/01/18 15:00:40 [verbose] admin SSL certificate found at /opt/confg/kong/ssl/admin-kong-default.crt
2018/01/18 15:00:43 [verbose] could not start Kong, stopping services
2018/01/18 15:00:43 [verbose] stopped services
Error:
sr/local/share/lua/5.1/kong/cmd/start.lua:62: /usr/local/share/lua/5.1/kong/cmd/start.lua:51: nginx: [emerg] bind() to 0.0.0.0:8000 failed (98: Address already in use▒
nginx: [emerg] bind() to 0.0.0.0:8443 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8001 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8444 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:8000 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:8443 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8001 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8444 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:8000 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:8443 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8001 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8444 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:8000 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:8443 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8001 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8444 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:8000 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:8443 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8001 failed (98: Address already in use)
nginx: [emerg] bind() to 127.0.0.1:8444 failed (98: Address already in use)

[root@kong-rc-76b9cf6fc5-dvsv8 /]# netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 1/nginx: master pro
tcp 0 0 127.0.0.1:8444 0.0.0.0:* LISTEN 1/nginx: master pro
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 1/nginx: master pro
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 1/nginx: master pro**

A workaround is to add admin_listen=0.0.0.0:8001 to the Kong configuration before starting.

Ideally, it should listen on 0.0.0.0 by default.
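
As a sketch, the workaround expressed as an environment variable on the Kong container (the SSL listener is included on the assumption that both admin ports should be exposed):

env:
  - name: KONG_ADMIN_LISTEN
    value: "0.0.0.0:8001, 0.0.0.0:8444 ssl"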

Unable to mount volumes for pod when using Cassandra

The log is:
Normal Scheduled 2m default-scheduler Successfully assigned cassandra-0 to 192.168.110.131
Normal SuccessfulMountVolume 2m kubelet, 192.168.110.131 MountVolume.SetUp succeeded for volume "default-token-qmqjk"
Warning FailedMount 14s kubelet, 192.168.110.131 Unable to mount volumes for pod "cassandra-0_default(9a39b8ab-a9d2-11e9-bc97-000c293279b8)": timeout expired waiting for volumes to attach/mount for pod "default"/"cassandra-0". list of unattached/unmounted volumes=[cassandra-data]
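
A hedged troubleshooting sketch: the event points at the cassandra-data volume, so check whether its PersistentVolumeClaim was bound and whether the cluster has a default StorageClass (the claim name below follows the usual StatefulSet naming convention and is an assumption):

kubectl get pvc
kubectl describe pvc cassandra-data-cassandra-0
kubectl get storageclass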

error validating "cassandra.yaml"

When executing

kubectl create -f cassandra.yaml

on a GKE cluster I get the following error:

error validating "cassandra.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false

For Postgres: the migration doesn't work with multiple Postgres replicas

I have noticed that the migration can't run successfully in an environment with multiple Postgres replicas. I then deployed an HA Postgres environment, and the migration still couldn't run successfully. The error in both cases is the same: Postgres can't be found, and the migration eventually times out.

Is that setup not supported?

kong-dbless.yaml will not apply

kong-dbless.yaml is failing to apply with:

# kubectl create -f dev/test.yaml
Error from server (BadRequest): error when creating "dev/test.yaml": Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found f, error found in #10 byte of ...|,"value":false},{"na|..., bigger context ...|dev/stderr"},{"name":"KONG_ADMIN_LISTEN","value":false},{"name":"KONG_PROXY_LISTEN","value":"0.0.0.0|...

Versions:

# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.8", GitCommit:"a89f8c11a5f4f132503edbc4918c98518fd504e3", GitTreeState:"clean", BuildDate:"2019-04-23T04:52:31Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.7", GitCommit:"6f482974b76db3f1e0f5d24605a9d1d38fad9a2b", GitTreeState:"clean", BuildDate:"2019-03-25T02:41:57Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

This can be fixed by adding single quotes around 'off' on line 38: https://github.com/Kong/kong-dist-kubernetes/blob/master/kong-dbless.yaml#L38
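
For illustration, the difference the quoting makes (the env entry is reconstructed from the error message above; YAML parses a bare off as the boolean false, which the Deployment API then rejects for a string-typed env value):

# fails: parsed as the boolean false
- name: KONG_ADMIN_LISTEN
  value: off

# works: quoting keeps it the literal string "off"
- name: KONG_ADMIN_LISTEN
  value: 'off'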

Manually adding values to Kong but after a while values are getting deleted

I am running a Kong Kubernetes cluster which includes a Kong deployment and two services, kong-admin and kong-proxy. The cluster is running as expected, but when I make a POST request to the kong-admin API to add a service, the service gets added and after some time the same service gets deleted automatically.

Why are the manually added values getting deleted ?

I’m using a postgres database which is bootstrapped. So, I don’t think that database is the issue.

Issue with kong_postgres.yml on GCE

Hello !

The default configuration did not work. I started:

kubectl create -n kong -f postgres.yml
kubectl create -n kong -f kong_postgres.yml

and looking at /usr/local/kong/logs/ I saw Kong was not able to resolve KONG_PG_HOST to an IP address. To make it work (for testing) I had to edit kong_postgres.yml and put in the real IP of the Postgres host.

Note: once connected through bash to the Kong RC pod, I was able to resolve the postgres.kong host, so the real issue is not clear to me.
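
One possible mitigation, offered as an assumption: Kong's internal resolver does not always apply the pod's DNS search domains, so using the fully qualified service name may help even when the short name resolves from a shell:

- name: KONG_PG_HOST
  value: postgres.kong.svc.cluster.local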

Support for bare metal Kubernetes cluster

Please correct me if I am wrong, but Kong currently will not work on a bare metal Kubernetes cluster due to the lack of LoadBalancer services; it seems LoadBalancer services are a dependency. If not, are there Kubernetes configs available for a bare metal cluster?
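
The LoadBalancer type is only about how the services are exposed, not a hard dependency of Kong itself; on bare metal one option (a sketch, not an official manifest) is to switch the Service type to NodePort, for example for the proxy:

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
spec:
  type: NodePort          # instead of LoadBalancer
  ports:
  - name: kong-proxy
    port: 8000
    targetPort: 8000
    protocol: TCP
  selector:
    app: kong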

[Question] Service configuration with Ingress instead of LoadBalancer

In the example kong_postgres.yaml file you have 4 LoadBalancers, one for each port for both proxy and admin. If instead I wanted to use an Ingress for terminating SSL and routing to the app, is that okay, or is there some specific reason I would want 4 LoadBalancers?

Preferred

  • Ingress (api.mydomain.com w/ static IP)
    • NodePort (kong)
      • Deployment (kong)

I will trial and error, but I'm just wondering whether by design there is some reason to have 4 separate services for Kong and whether they must be LoadBalancers, or whether anyone has used an Ingress instead.

unsuccessful deployment at my kubernetes cluster

kubectl version:

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Using kong-dist-kubernetes at branch master.

Image: kong:0.13.0

Run:

kubectl create -f postgres.yaml
kubectl create -f kong_migration_postgres.yaml
kubectl create -f kong_postgres.yaml

Log errors:

2018/04/18 08:55:08 [crit] 47#0: *3 [lua] balancer.lua:685: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 3 name error, context: ngx.timer
2018/04/18 08:55:08 [crit] 46#0: *24 [lua] balancer.lua:685: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 3 name error, context: ngx.timer

2018/04/18 08:55:08 [crit] 46#0: *24 [lua] balancer.lua:685: init(): failed loading initial list of upstreams: failed to get from node cache: [postgres error] [toip() name lookup failed]: dns server error: 3 name error, context: ngx.timer
2018/04/18 08:56:08 [error] 47#0: *457 [lua] postgres.lua:220: [postgres] could not cleanup TTLs: [toip() name lookup failed]: dns server error: 3 name error, context: ngx.timer
2018/04/18 08:56:08 [error] 46#0: *458 [lua] postgres.lua:220: [postgres] could not cleanup TTLs: [toip() name lookup failed]: dns server error: 3 name error, context: ngx.timer
2018/04/18 08:57:08 [error] 47#0: *915 [lua] postgres.lua:220: [postgres] could not cleanup TTLs: [toip() name lookup failed]: dns server error: 3 name error, context: ngx.timer
2018/04/18 08:57:08 [error] 46#0: *940 [lua] postgres.lua:220: [postgres] could not cleanup TTLs: [toip() name lookup failed]: dns server error: 3 name error, context: ngx.timer
2018/04/18 08:58:08 [error] 47#0: *1373 [lua] postgres.lua:220: [postgres] could not cleanup TTLs: [toip() name lookup failed]: dns server error: 3 name error, context: ngx.timer

Using curl against the admin API:

$ curl -i -X POST \                              
  --url http://101.132.118.128:30214/services/ \
  --data 'name=example-service' \
  --data 'url=http://mockbin.org'
HTTP/1.1 500 Internal Server Error
Date: Wed, 18 Apr 2018 08:56:29 GMT
Content-Type: application/json; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Access-Control-Allow-Origin: *
Server: kong/0.13.0

{"message":"An unexpected error occurred"}

By the way, the doc at https://getkong.org/install/kubernetes/?_ga=2.111659847.2146740119.1524015965-1509646068.1524015965 has this note:

Note: Included manifest files in repo only support Kong v0.11.x, for 0.10.x please use the tag 1.0.0.

It needs updating.

Multiple replicas don't pick up admin updates.

Maybe not an issue but more of a question. If I have 2 Kong replicas and I make an update through the admin service to add a route to Kong, it seems like only the replica that received the request via the admin service is updated; I need to restart the other pod for it to pick up the changes in Postgres. Is this expected? Not a huge deal as routes don't change often, but I was wondering if there is a way to notify the other replicas once one has been updated. They are all using an external Postgres store. It seems like Kong is storing or doing something in memory on the pod that received the admin API request.
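
If all replicas really do share the same Postgres, they should converge within Kong's polling interval for cache invalidation events; one hedged thing to check is the db_update_frequency setting (available since Kong 0.11), for example:

env:
  - name: KONG_DB_UPDATE_FREQUENCY  # seconds between polls for invalidation events (default 5)
    value: "5"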

Inconsistent Documentation - Missing installation step.

While following the documentation for k8s installation, I encountered the following errors.

Error due to non-existent secret. Warning FailedMount MountVolume.SetUp failed for volume "api-server-cert" : secrets "kong-control-plane.kong.svc" not found

I found out from the Makefile that the setup_certificate.sh step is missing from the docs.

[Ingress] Health check requires custom host parameter to avoid 404

When I deployed Kong and only exposed NodePort services for 8000 and 8001, routing traffic via Ingress, the gateway worked fine but the proxy failed health checks.

  • Solution: add a custom host in the health check matching a valid API configured in the gateway.

Using the gcloud admin console, I clicked on the "More" link and it revealed the ability to add a custom host in addition to the health check path. Once I added the domain for a pre-configured API, the health checks worked. I hope this helps someone else who faces a similar issue.


Using kube-dns doesn't resolve correctly.

Setup

With the env variables

        env:
          - name: KONG_DNS_RESOLVER
            value: 10.3.240.10
          - name: KONG_DNSMASQ
            value: "off"
         ...

Note 10.3.240.10 is the result of

kubectl get svc kube-dns --namespace=kube-system | grep kube-dns | awk '{print $2}'

Expected

    {
         "preserve_host" : false,
         "upstream_url" : "http://console-graphql-service/graphiql",
         "created_at" : 1481226046000,
         "strip_request_path" : true,
         "name" : "graphiql",
         "id" : "b3369902-e18b-4eb6-99d7-d3faef64a1c9",
         "request_path" : "/graphiql"
      }

Should resolve to the console-graphql-service.

Actual

Kong Error

An invalid response was received from the upstream server.

Note: substituting console-graphql-service in the API config with the service IP does return the expected result.
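
A hedged suggestion: with KONG_DNS_RESOLVER pointed straight at kube-dns, the pod's DNS search domains are not applied, so the short service name in upstream_url may not resolve; using the fully qualified name is worth trying (the default namespace and the Admin API address are assumptions):

curl -i -X PATCH http://localhost:8001/apis/graphiql \
  --data 'upstream_url=http://console-graphql-service.default.svc.cluster.local/graphiql'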

For Kong 1.0.0, the migration Job doesn't work when using Postgres

Original Job:

apiVersion: batch/v1
kind: Job
metadata:
  name: kong-migration
spec:
  template:
    metadata:
      name: kong-migration
    spec:
      containers:
      - name: kong-migration
        image: kong:1.0.0rc3
        env:
          - name: KONG_NGINX_DAEMON
            value: 'off'
          - name: KONG_PG_PASSWORD
            value: kong
          - name: KONG_PG_HOST
            value: postgres
        command: [ "/bin/sh", "-c", "kong migrations up" ]
      restartPolicy: Never
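
This is most likely because the migration commands changed in Kong 1.0: a fresh database is prepared with kong migrations bootstrap, while kong migrations up only applies pending migrations to an already-bootstrapped database. A sketch of the adjusted command, keeping the rest of the Job as above:

        command: [ "/bin/sh", "-c", "kong migrations bootstrap" ]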
