octops / gameserver-ingress-controller

Automatic Ingress configuration for Game Servers managed by Agones

Home Page: https://octops.io

License: Apache License 2.0

Dockerfile 0.50% Makefile 3.31% Go 96.19%
agones gamedev golang kubernetes

gameserver-ingress-controller's Introduction

Octops Game Server Ingress Controller

Automatic Ingress configuration for Game Servers managed by Agones.

The Octops Controller leverages the power of the Kubernetes Ingress Controller to bring inbound traffic to dedicated game servers.

Players will be able to connect to a dedicated game server using a custom domain and a secure connection.

Supported Agones Resources

  • Fleets
  • Stand-Alone GameServers

Use Cases

  • Real-time games using websocket

Known Limitations

For the Octops Controller to work, an Ingress Controller must be present in the cluster. The one most widely adopted by the Kubernetes community is the NGINX Ingress Controller. However, the Agones community has reported that for websocket-based games the NGINX controller might not be a good fit, due to the loss of connections between restarts. Check https://kubernetes.github.io/ingress-nginx/how-it-works/#when-a-reload-is-required for details.

You can find more information on the original reported issue #21.

The connection drop behaviour is also present on alternatives like the HAProxy Ingress Controller.

For that reason, the suggested Ingress Controller is the Contour Ingress Controller. The controller is built on top of the https://www.envoyproxy.io/ service proxy. Envoy handles updates flawlessly while game servers and ingress resources are reconciled by the Octops Controller.

Requirements

The following components must be present on the Kubernetes cluster where the dedicated game servers and the controller will be hosted/deployed.

  • Agones
  • Contour Ingress Controller
    • Choose the appropriate setup depending on your environment, network topology and cloud provider. It will affect how the Ingress Service will be exposed to the internet.
    • Update the DNS information to reflect the name/address of the load balancer pointing to the exposed service. You can find this information by running kubectl -n projectcontour get svc and checking the EXTERNAL-IP column.
    • The DNS record must be a wildcard (*) record. That allows any game server to be placed under the desired domain automatically.
    • Contour Install Instructions
  • Cert-Manager - [optional if you are managing your own certificates]
    • Check https://cert-manager.io/docs/tutorials/acme/http-validation/ to understand which type of ClusterIssuer you should use.
    • Make sure you have a ClusterIssuer that uses Let's Encrypt. You can find some examples in deploy/cert-manager.
    • The name of the ClusterIssuer must match the one used in the Fleet annotation octops.io/issuer-tls-name.
    • Install (Check for newer versions): $ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml

Configuration and Manifests

Ingress Routing Mode

The Octops controller supports two ingress routing modes: Domain and Path.

This configuration is used by the controller when creating the ingress resource within the Kubernetes cluster.

Routing mode is a Fleet- or GameServer-scoped configuration. A Fleet manifest defines the routing mode for all of its GameServers. For stand-alone GameServers, the routing mode is defined in their own manifests.

Domain

Every game server gets its own FQDN, e.g. https://octops-2dnqv-jmqgp.example.com or https://octops-g6qkw-gnp2h.example.com

# simplified Fleet manifest for Domain mode
# each GameServer is accessible using the combination: [gameserver_name].example.com
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
  name: fleet-us-east1-1
spec:
  replicas: 3
  template:
    metadata:
      annotations:
        octops.io/ingress-class-name: "contour" #required for Contour to handle ingress
        octops-projectcontour.io/websocket-routes: "/" #required for Contour to enable websocket
        octops.io/gameserver-ingress-mode: "domain"
        octops.io/gameserver-ingress-domain: "example.com"

Check the examples folder for a full Fleet manifest that uses the Domain routing mode.

Path

There is one global domain, and game servers are reachable via the URL path, e.g. https://servers.example.com/octops-2dnqv-jmqgp or https://servers.example.com/octops-g6qkw-gnp2h

# simplified Fleet manifest for Path mode
# each GameServer is accessible using the combination: servers.example.com/[gameserver_name]
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
  name: fleet-us-east1-1
spec:
  replicas: 3
  template:
    metadata:
      annotations:
        octops.io/ingress-class-name: "contour" #required for Contour to handle ingress
        octops-projectcontour.io/websocket-routes: "/{{ .Name }}" #required for Contour to enable websocket for exact path. This is a template that the controller will replace by the name of the game server
        octops.io/gameserver-ingress-mode: "path"
        octops.io/gameserver-ingress-fqdn: servers.example.com

Check the examples folder for a full Fleet manifest that uses the Path routing mode.

How it works

When a game server is created by Agones, either as part of a Fleet or a stand-alone deployment, the Octops controller will handle the provisioning of a couple of resources.

It will use the information present in the game server annotations and metadata to create the required Ingress and dependencies.

Below is an example of a manifest that deploys a Fleet using the Domain routing mode:

# Reference: https://agones.dev/site/docs/reference/fleet/
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
  name: octops # the name of your fleet
  labels: # optional labels
    cluster: gke-1.24
    region: us-east-1
spec:
  replicas: 3
  template:
    metadata:
      labels: # optional labels
        cluster: gke-1.24
        region: us-east-1
      annotations:
        octops.io/ingress-class-name: "contour" # required for Contour to handle ingress
        octops-projectcontour.io/websocket-routes: "/" # required for Contour to enable websocket
        # Required annotation used by the controller
        octops.io/gameserver-ingress-mode: "domain"
        octops.io/gameserver-ingress-domain: "example.com"
        octops.io/terminate-tls: "true"
        octops.io/issuer-tls-name: "letsencrypt-prod"
# The rest of your fleet spec stays the same        
 ...

Deployed GameServers:

# kubectl [-n yournamespace] get gs
NAME                 STATE   ADDRESS         PORT   NODE     AGE
octops-2dnqv-jmqgp   Ready   36.23.134.23    7437   node-1   10m
octops-2dnqv-d9nxd   Ready   36.23.134.23    7323   node-1   10m
octops-2dnqv-fr8tx   Ready   32.76.142.33    7779   node-2   10m

Ingresses created by the controller:

# kubectl [-n yournamespace] get ingress
NAME                 HOSTS                           ADDRESS         PORTS     AGE
octops-2dnqv-jmqgp   octops-2dnqv-jmqgp.example.com                   80, 443   4m48s
octops-2dnqv-d9nxd   octops-2dnqv-d9nxd.example.com                   80, 443   4m46s
octops-2dnqv-fr8tx   octops-2dnqv-fr8tx.example.com                   80, 443   4m45s

Proxy Mapping - Ingress x GameServer

# The game server public domain uses the omitted 443/HTTPS port instead of the Agones port range 7000-8000
https://octops-2dnqv-jmqgp.example.com/ ⇢ octops-2dnqv-jmqgp:7437
https://octops-2dnqv-d9nxd.example.com/ ⇢ octops-2dnqv-d9nxd:7323
https://octops-2dnqv-fr8tx.example.com/ ⇢ octops-2dnqv-fr8tx:7779

Conventions

The table below shows how the information from the game server is used to compose the ingress settings.

Game Server                                        Ingress
-----------                                        -------
name                                               hostname (domain mode) / path (path mode)
annotation: octops.io/gameserver-ingress-mode      routing mode: domain or path
annotation: octops.io/gameserver-ingress-domain    base domain
annotation: octops.io/gameserver-ingress-fqdn      global domain
annotation: octops.io/terminate-tls                terminate TLS (true, false)
annotation: octops.io/issuer-tls-name              name of the ClusterIssuer
annotation: octops-[custom-annotation]             custom-annotation
annotation: octops.io/tls-secret-name              custom ingress secret
annotation: octops.io/ingress-class-name           ingressClassName field
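The conventions above can be summarized in a small sketch. This is illustrative code only, not the controller's implementation; the `endpoint` helper is hypothetical, but the annotation values and the resulting addresses match the examples in this document.

```go
package main

import "fmt"

// endpoint composes the public address of a game server from its name and
// the routing-mode annotations (illustrative helper, not the real code).
func endpoint(mode, name, domain, fqdn string) string {
	switch mode {
	case "domain":
		// octops.io/gameserver-ingress-domain: "example.com"
		return fmt.Sprintf("%s.%s", name, domain)
	case "path":
		// octops.io/gameserver-ingress-fqdn: "servers.example.com"
		return fmt.Sprintf("%s/%s", fqdn, name)
	}
	return ""
}

func main() {
	fmt.Println(endpoint("domain", "octops-2dnqv-jmqgp", "example.com", ""))
	fmt.Println(endpoint("path", "octops-2dnqv-jmqgp", "", "servers.example.com"))
}
```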

Support for Multiple Domains

For both routing modes you can specify multiple domains. The same game server will then be accessible through all of them.

The value must be a comma separated list of domains.

annotations:
  # Domain Mode
  octops.io/gameserver-ingress-domain: "example.com,example.gg"
  # Path Mode
  octops.io/gameserver-ingress-fqdn: "www.example.com,www.example.gg"
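In domain mode, a comma-separated annotation like the one above expands into one host per domain for the same game server. A minimal sketch of that expansion (the `hosts` helper is hypothetical; only the annotation format is from the document):

```go
package main

import (
	"fmt"
	"strings"
)

// hosts expands a comma-separated domain list into one FQDN per domain
// for a single game server (illustrative, not the controller's code).
func hosts(name, domains string) []string {
	var out []string
	for _, d := range strings.Split(domains, ",") {
		out = append(out, fmt.Sprintf("%s.%s", name, strings.TrimSpace(d)))
	}
	return out
}

func main() {
	fmt.Println(hosts("octops-2dnqv-jmqgp", "example.com,example.gg"))
}
```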

Custom Annotations

Any Fleet or GameServer resource annotation with the prefix octops- will be passed down to the Ingress resource created by the Octops controller.

octops-projectcontour.io/websocket-routes: /

Will be added to the ingress in the following format:

projectcontour.io/websocket-routes: /

In the same way, annotations prefixed with octops.service- will be passed down to the Service resource that bridges the game server and the ingress.

octops.service-myannotation: myvalue

Will be added to the service in the following format:

myannotation: myvalue
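The prefix convention above amounts to filtering annotations by prefix and stripping it before copying them to the target resource. A sketch of that idea (the `forward` helper is hypothetical; the prefixes and annotation keys are the documented ones):

```go
package main

import (
	"fmt"
	"strings"
)

// forward copies annotations that carry the given prefix to a new map,
// dropping the prefix (illustrative, not the controller's implementation).
func forward(annotations map[string]string, prefix string) map[string]string {
	out := map[string]string{}
	for k, v := range annotations {
		if strings.HasPrefix(k, prefix) {
			out[strings.TrimPrefix(k, prefix)] = v
		}
	}
	return out
}

func main() {
	src := map[string]string{
		"octops-projectcontour.io/websocket-routes": "/",
		"octops.service-myannotation":               "myvalue",
	}
	fmt.Println(forward(src, "octops-"))         // goes to the Ingress
	fmt.Println(forward(src, "octops.service-")) // goes to the Service
}
```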

Templates

It is also possible to use a template to fill in values at Ingress and Service creation time.

This feature is especially useful when the routing mode is path. Envoy only enables websocket support for routes that exactly match the path set in the Ingress rules.

The example below demonstrates how custom annotations using a template would be generated for a game server named octops-tl6hf-fnmgd.

# manifest.yaml
octops-projectcontour.io/websocket-routes: "/{{ .Name }}"

# parsed
octops-projectcontour.io/websocket-routes: "/octops-tl6hf-fnmgd"

The field .Port is the port exposed by the game server that was assigned by Agones.

# manifest.yaml
octops.service-projectcontour.io/upstream-protocol.tls: "{{ .Port }}"

# parsed
octops.service-projectcontour.io/upstream-protocol.tls: "7708"

Important

If you are deploying manifests using Helm you should escape special characters.

# manifest.yaml
octops.service-projectcontour.io/upstream-protocol.tls: '{{"{{"}} .Port {{"}}"}}'

# parsed
octops.service-projectcontour.io/upstream-protocol.tls: "7708"

The same applies to any other custom annotation. The currently supported GameServer fields are .Name and .Port. More may be added in the future.

Any annotation can be used. It is not restricted to the Contour controller annotations.

octops-my-custom-annotations: my-custom-value will be passed to the Ingress resource as:

my-custom-annotations: my-custom-value

Multiline is also supported, I.e.:

annotations:
    octops-example.com/backend-config-snippet: |
      http-send-name-header x-dst-server
      stick-table type string len 32 size 100k expire 30m
      stick on req.cook(sessionid)

Remember that the name segment of an annotation key is limited to 63 characters. That limit is imposed by Kubernetes.

https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
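The template behavior described in this section can be sketched with Go's standard text/template package. The `gameServer` struct and `render` helper below are assumptions for illustration; only the `.Name` and `.Port` field names and the annotation values come from the document:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// gameServer is a minimal stand-in exposing the two documented fields.
type gameServer struct {
	Name string
	Port int32
}

// render parses an annotation value as a Go template and fills in the
// game server fields (sketch only, not the controller's code).
func render(value string, gs gameServer) (string, error) {
	tmpl, err := template.New("annotation").Parse(value)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, gs); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	gs := gameServer{Name: "octops-tl6hf-fnmgd", Port: 7708}
	out, _ := render("/{{ .Name }}", gs)
	fmt.Println(out) // /octops-tl6hf-fnmgd
	out, _ = render("{{ .Port }}", gs)
	fmt.Println(out) // 7708
}
```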

Fleet and GameServer Resource Manifests

  • octops.io/gameserver-ingress-mode: defines the ingress routing mode, possible values are: domain or path.
  • octops.io/gameserver-ingress-domain: name of the domain to be used when creating the ingress. This is the public domain that players will use to reach out to the dedicated game server.
  • octops.io/gameserver-ingress-fqdn: full domain name where gameservers will be accessed based on the URL path.
  • octops.io/terminate-tls: determines whether the ingress will terminate TLS. If set to "false", TLS will be terminated at the load balancer, and no certificate will be issued by the in-cluster cert-manager.
  • octops.io/issuer-tls-name: required if terminate-tls=true and certificates are provisioned by CertManager. This is the name of the ClusterIssuer that cert-manager will use when creating the certificate for the ingress.
  • octops.io/ingress-class-name: defines the ingress class name to be used, e.g. "contour", "nginx", "traefik".

The same configuration works for Fleets and GameServers. Add the following annotations to your manifest:

# Fleet annotations using ingress routing mode: domain
annotations:
  octops.io/ingress-class-name: "contour" # required for Contour to handle ingress
  octops-projectcontour.io/websocket-routes: "/" # required for Contour to enable websocket
  octops.io/gameserver-ingress-mode: "domain"
  octops.io/gameserver-ingress-domain: "example.com"
  octops.io/terminate-tls: "true"
  octops.io/issuer-tls-name: "selfsigned-issuer"
# Fleet annotations using ingress routing mode: path
annotations:
  octops.io/ingress-class-name: "contour" # required for Contour to handle ingress
  octops-projectcontour.io/websocket-routes: "/" # required for Contour to enable websocket
  octops.io/gameserver-ingress-mode: "path"
  octops.io/gameserver-ingress-fqdn: "servers.example.com"
  octops.io/terminate-tls: "true"
  octops.io/issuer-tls-name: "selfsigned-issuer"
# Optional and can be ignored if TLS is not terminated by the ingress controller
octops.io/terminate-tls: "true"
octops.io/issuer-tls-name: "selfsigned-issuer"

Wildcard Certificates

It is worth noting that games using the domain routing mode with cert-manager handling certificates might hit a limit imposed by Let's Encrypt on the number of certificates that can be issued per week. You can find information about rate limiting at https://letsencrypt.org/docs/rate-limits/.

For each new game server, cert-manager triggers a new certificate request. That means https://octops-2dnqv-jmqgp.example.com and https://octops-2dnqv-d9nxd.example.com require 2 different certificates. This approach will not scale well for games with high churn; Let's Encrypt limits issuance to 50 certificates per registered domain per week.

To avoid issues with certificates and rate limits you should use a wildcard certificate. There are different ways to achieve this; it depends on how your cloud provider handles TLS termination at the load balancer, and on how the DNS and certificates for the game domain are managed.

There are 2 options:

  1. Terminate TLS at the load balancer exposed by the Contour/Envoy service. That way you can ignore all the TLS and issuer annotations, and it removes the dependency on cert-manager. Be aware that cloud providers differ in how certificates are generated and managed, and in how they are assigned to public endpoints or load balancers.
  2. Provide a self-managed wildcard certificate.
    1. Add a TLS secret to the default namespace that holds the wildcard certificate content. That certificate must have been generated, acquired or bought from a different source.
    2. Set the annotation octops.io/terminate-tls: "true". That will instruct the controller to add the TLS section to the Ingress.
    3. Add the annotation octops.io/tls-secret-name: "my-wildcard-cert". That secret will be added to the Ingress under the TLS section. It will tell Envoy to use that secret content to terminate TLS for the public game server endpoint.

Important

  • Certificate renewal should be handled by the game server owner. The fact that the secret exists does not mean that Kubernetes or any other process will handle expiration.
  • CertManager can be used to generate wildcard certificates using DNS validation.

Clean up and GameServer Lifecycle

Every resource created by the Octops controller is attached to the game server itself. That means that when a game server is deleted from the cluster, all of its dependencies are cleaned up by the Kubernetes garbage collector.

The cluster operator does not need to delete services and ingresses manually.

How to install the Octops Controller

Deploy the controller running:

$ kubectl apply -f deploy/install.yaml
or
$ kubectl apply -f https://raw.githubusercontent.com/Octops/gameserver-ingress-controller/main/deploy/install.yaml

Check the deployment:

$ kubectl -n octops-system get pods

# Expected output
NAME                                         READY   STATUS    RESTARTS   AGE
octops-ingress-controller-6b8dc49fb9-vr5lz   1/1     Running   0          3h6m

Check logs:

$ kubectl -n octops-system logs -f $(kubectl -n octops-system get pod -l app=octops-ingress-controller -o=jsonpath='{.items[*].metadata.name}')

Events

You can track events recorded for each GameServer running kubectl get events [-w] and the output will look similar to:

...
1s Normal  Creating  gameserver/octops-domain-tqmvm-rcl5p  Creating Service for gameserver default/octops-domain-tqmvm-rcl5p
0s Normal  Created   gameserver/octops-domain-tqmvm-rcl5p  Service created for gameserver default/octops-domain-tqmvm-rcl5p
0s Normal  Creating  gameserver/octops-domain-tqmvm-rcl5p  Creating Ingress for gameserver default/octops-domain-tqmvm-rcl5p
0s Normal  Created   gameserver/octops-domain-tqmvm-rcl5p  Ingress created for gameserver default/octops-domain-tqmvm-rcl5p
...

The controller will record errors if a resource can't be created.

0s Warning Failed  gameserver/octops-domain-zxt2q-6xl6r  Failed to create Ingress for gameserver default/octops-domain-zxt2q-6xl6r: ingress routing mode domain requires the annotation octops.io/gameserver-ingress-domain to be present on octops-domain-zxt2q-6xl6r, check your Fleet or GameServer manifest.

Alternatively, you can check events for a particular game server running

$ kubectl describe gameserver [gameserver-name]
...
Events:
  Type    Reason          Age    From                           Message
  ----    ------          ----   ----                           -------
  Normal  PortAllocation  2m59s  gameserver-controller          Port allocated
  Normal  Creating        2m59s  gameserver-controller          Pod octops-domain-4sk5v-7gtw4 created
  Normal  Scheduled       2m59s  gameserver-controller          Address and port populated
  Normal  RequestReady    2m53s  gameserver-sidecar             SDK state change
  Normal  Ready           2m53s  gameserver-controller          SDK.Ready() complete
  Normal  Creating        2m53s  gameserver-ingress-controller  Creating Service for gameserver default/octops-domain-4sk5v-7gtw4
  Normal  Created         2m53s  gameserver-ingress-controller  Service created for gameserver default/octops-domain-4sk5v-7gtw4
  Normal  Creating        2m53s  gameserver-ingress-controller  Creating Ingress for gameserver default/octops-domain-4sk5v-7gtw4
  Normal  Created         2m53s  gameserver-ingress-controller  Ingress created for gameserver default/octops-domain-4sk5v-7gtw4

Extras

You can find examples of different ClusterIssuers in the deploy/cert-manager folder. Make sure you update the information to reflect your environment before applying those manifests.

For a quick test you can use the examples/fleet.yaml. This manifest deploys a simple http game server that keeps the health check active and moves the state to "Ready".

$ kubectl apply -f examples/fleet-domain.yaml

# Find the ingress for one of the replicas
$ kubectl get ingress
NAME                 HOSTS                           ADDRESS         PORTS     AGE
octops-tl6hf-fnmgd   octops-tl6hf-fnmgd.example.com                   80, 443   67m
octops-tl6hf-jjqvt   octops-tl6hf-jjqvt.example.com                   80, 443   67m
octops-tl6hf-qzhzb   octops-tl6hf-qzhzb.example.com                   80, 443   67m

# Test the public endpoint. You will need a valid public domain, or some network sorcery depending on the environment where you pushed the manifest.
$ curl https://octops-tl6hf-fnmgd.example.com

# Output
{"Name":"octops-tl6hf-fnmgd","Address":"36.23.134.23:7318","Status":{"state":"Ready","address":"192.168.0.117","ports":[{"name":"default","port":7318}]}}

Ingress manifest

Below is an example of a manifest created by the controller for a GameServer from a Fleet set to routing mode domain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    agones.dev/gameserver: "octops-tl6hf-fnmgd"
    kubernetes.io/ingress.class: "contour"
    projectcontour.io/websocket-routes: "/"
  name: octops-tl6hf-fnmgd
  namespace: default
spec:
  rules:
    - host: octops-tl6hf-fnmgd.example.com
      http:
        paths:
          - backend:
              service:
                name: octops-tl6hf-fnmgd # a Service is also created by the controller
                port:
                  number: 7837
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - octops-tl6hf-fnmgd.example.com
      secretName: octops-tl6hf-fnmgd-tls

Demo

To demonstrate how the Octops controller works, you can deploy a fleet of Quake 3 servers (QuakeKube) managed by Agones.

QuakeKube is a Kubernetes-fied version of QuakeJS that runs a dedicated Quake 3 server in a Kubernetes Deployment, and allows clients to connect via QuakeJS in the browser.

The source code of the project that integrates the game with Agones can be found on https://github.com/Octops/quake-kube.

It is a fork from the original project https://github.com/criticalstack/quake-kube.

Deploy the Quake Fleet

Update the fleet annotation to use a domain that you can point your load balancer or public IP at.

# examples/quake/quake-fleet.yaml
annotations:
  octops.io/gameserver-ingress-domain: "yourdomain.com" # Do not include the host part. The host name is generated by the controller and is unique to each game server.

Deploy the manifest

$ kubectl apply -f examples/quake/quake-fleet.yaml

When the game server becomes Ready, the gameserver-ingress-controller will create the Ingress that holds the public URL. Use the following command to list all ingresses.

$ kubectl get ingress

# Output
NAME                 HOSTS                                ADDRESS         PORTS     AGE
octops-w2lpj-wtqwm   octops-w2lpj-wtqwm.yourdomain.com                    80, 443   18m

Point your browser to the address from the HOST column. Depending on your setup there may be a warning about certificates.

Destroy

You can destroy the quake fleet by running:

$ kubectl delete -f examples/quake/quake-fleet.yaml

As expected, Agones will destroy the Fleet, consequently deleting all the Ingresses associated with the destroyed game servers.

Screenshots

The screenshots below use a fake domain, arena.com, for local demonstration purposes only. The domain should reflect the name of the domain you own and want your game servers hosted under. In a real cloud environment, the certificate issued by cert-manager will be valid.


gameserver-ingress-controller's People

Contributors

danieloliveira079, dependabot[bot], trennepohl


gameserver-ingress-controller's Issues

Support multiple hosts

I need 2 hosts, one for internal ingress and the other for external ingress (end users) usage, but it seems that only 1 host is supported in the octops ingress controller.

Can you add this feature? Splitting the fqdn annotation by the , character could be a simple approach.

Pass custom labels and/or annotations upon creation to gameserver-ingress-controller managed ingresses

It would be nice to be able to potentially pass (static) labels or annotations through to the ingress objects that are created and managed under gameserver-ingress-controller.

For example, perhaps annotating our fleet with:
octops-custom-annotation-our-annotation: something-cool ends up getting stripped and applying our-annotation: something-cool to Ingress objects. We can then use this to annotate these Ingress objects with say external-dns which can handle our root dns record/entry for all gameservers.

Getting some error logs about event permissions

events "tankkings-production-fleet-szpbt-lds6d.171434fc5f66ed97" is forbidden: User "system:serviceaccount:octops-system:octops" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
 'events "tankkings-production-fleet-szpbt-lds6d.171434fc5f66ed97" is forbidden: User "system:serviceaccount:octops-system:octops" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)

The octops controller is installed in the standard octops-system namespace, via install.yaml. The fleet is in the default namespace.

Path based ingress->gameserver rules

It would be nice to be able to set up each gameserver's associated Ingress object with a static host and a gameserver-named path for routing. This 'mode' or additional modes could be supported with another octops annotation which directs the controller to operate in this fashion.

Basically adding a little more customization to the ingress reconciler's Rules: []networkingv1.IngressRule. Right now it's statically configured to route based on a single host and rule:

Rules: []networkingv1.IngressRule{
	{
		Host: fmt.Sprintf("%s.%s", gs.Name, gs.Annotations[gameserver.OctopsAnnotationIngressDomain]),
		IngressRuleValue: networkingv1.IngressRuleValue{
			HTTP: &networkingv1.HTTPIngressRuleValue{
				Paths: []networkingv1.HTTPIngressPath{
					{
						Path:     "/",
						PathType: &defaultPathType,
						Backend: networkingv1.IngressBackend{
							Service: &networkingv1.IngressServiceBackend{
								Name: gs.Name,
								Port: networkingv1.ServiceBackendPort{
									Number: gameserver.GetGameServerPort(gs).Port,
								},
							},
						},
					},
				},
			},
		},
	},
},

What would be nice is to route based on a gameserver path name as well. In this mode octops.io/gameserver-ingress-domain would be the target domain (no subdomain would be created or used), and the Path would be used as the Ingress rule for routing to the gameserver, with the gameserver name as the path.

Would be great (for our use-case), if the controller could be configured to manage Ingress objects that end up looking like this:

spec:
  rules:
  - host: allgameservers.foo.org
    http:
      paths:
      - path: /gs.Name
        pathType: Prefix
        backend:
          service:
            name: gs.Name
            port:
              number: gameserver.GetGameServerPort(gs).Port

Template access to service objects

Need to annotate service objects with dynamic information. As discussed on slack. Example:

octops.service-projectcontour.io/upstream-protocol.tls: "{{ .Port }}"

Port cannot be static in our fleet object definition, as it changes per gameserver allocation.

Contour Ingress does not rewrite path in Path mode

Hi there!

We are seeing an issue where using the contour router does not rewrite request paths without the assigned game server prefix (i.e /game-server-prefix/path-to-get does not get rewritten to just /path-to-get when hitting the game server pod). AFAIK nginx ingress does support it.

Is there any way we can get the same behaviour with contour ingress?

Support for specific secretName

By default, ingress appears to create cert-manager certificate secrets in the form of gameserverName-tls.

We would need to support the ability to ignore cert-manager and specify the secretName specifically to support our wildcard certificate under the "path" based routing mode.

I implemented and tested this feature here: https://github.com/winterpixelgames/gameserver-ingress-controller/tree/main-winterpixel, and I can confirm it is working for us.

We support direct TLS to the gameserver port (7000-8000). We terminate TLS on our godot gameservers to eliminate as many hops as possible. However, some clients are behind firewalls which block outbound https traffic to non-standard ports. So to support this case we use this controller in the following fashion:


  1. Route based on path: clients try to connect to servers.winterpixel.io/gsName.
  2. Terminate TLS on the ingress (required for layer-7 routes).
  3. Forward TLS on to the gameserver backend via an octops- prefixed annotation.
  4. Manage the DNS entry through external-dns via an octops- prefixed annotation.

websocket-routes annotation template not evaluated when generating Ingress

While experimenting with the websocket-routes annotation, I noticed that anything that doesn't match the example in the documentation (so anything different from /{{ .Name }}) does not work. It falls back to /<game server name> in the Ingress object.

I dug a bit into the code and noticed that this template doesn't seem to be used at all.

rule := newIngressRule(f, "/"+gs.Name, gs.Name, gameserver.GetGameServerPort(gs).Port)

Is it the expected behavior? Did I miss something?

Ingress created by the controller is unreachable

The controller has created an ingress that looks like:

NAME                        CLASS    HOSTS                           ADDRESS          PORTS     AGE
some-name-9d572   <none>   someaddress.com   104.196.xxx.xx   80, 443   34m

I've ensured that the game server is running correctly but there is nothing running at the address specified by the ingress. Any suggestions on how I might be able to debug this issue?

Thanks in advance!

Unit Tests?

I see your Makefile has systems for testing but there are no tests in the repo. Are they available?

"Path" routing mode changes the relative path of requests

One-line use case: We want to use gameserver-ingress-controller's "path" mode without changing our game server code.

Use Case

We're interested in gameserver-ingress-controller because it offers a way to expose agones gameservers through a TLS-protected hostname (instead of http://NODE_IP:PORT).
gameserver-ingress-controller offers two routing modes, "domain" and "path".
"domain" mode requires a wildcard subdomain ssl cert (or lots and lots of 1-subdomain certs), which we aren't able to create right now*.
So we tried "path" mode, which routes requests to the right gameserver based on the first segment of the url path.
This solved our TLS issue but presented a new one: the relative path of every request to the server now started with an extra segment (the game server name).
We could update our server code to expose endpoints at paths like /*/healthz and just ignore the first segment, but this didn't seem like a good fix. In particular, if we were ever to deploy the same server code to a different environment that used the "domain" routing mode, we would have to change it back.

Our Approach

Our thought is to use url-rewriting at the ingress layer to strip the extra segment from the request, leaving the same relative paths that the server would see if it were in "domain" mode.
For example,
"domain" mode: gameserver-mc9cw-8p8wt.example.com/healthz -> /healthz
"path" mode: example.com/gameserver-mc9cw-8p8wt/healthz -> [url-rewrite] -> /healthz

ingress-nginx supports url rewriting using a combination of annotations and path, as shown in the docs here: https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target
It is already possible to set the necessary annotation in the gameserver or fleet yaml:
annotations: octops-nginx.ingress.kubernetes.io/rewrite-target: /$2

However, "path" mode currently sets the ingress path yaml to:
- path: /gameserver-name
For the url-rewrite to work, we would need to set the path yaml to:
- path: /gameserver-name(/|$)(.*)

To solve this for ourselves, we talked about deploying a modified version of this project, but...

Feature Request

If rewriting sounds like a good default behavior, "path" mode could include the capture group regex (/|$)(.*) in the path field of all generated Ingresses and add the rewrite-target nginx annotation automatically.
This way the request format in "path" mode will match the request format of "domain" mode (e.g. "/healthz").
I tested this out by hardcoding some values in Options.go and it worked.

Alternatively, the controller might provide a way to pass through custom path strings to the generated Ingress, the same way "octops-[custom-annotation]" passes an annotation through.
This would leave the default behavior of "path" mode as it currently is but enable users to achieve the url-rewriting described above.
This sounds like a bigger feature request, though.

*Our team eventually resolved the blockers to getting a wildcard SSL cert, so we are now able to use "domain" mode. I wanted to post this anyway in case the issue is still relevant to anyone using "path" mode.

Multiple Replicas for controller

Let's say I wanted to deploy multiple replicas of the controller for scalability and redundancy. What design would you use for this? The watcher would notify the GameServer handler in every replica. Do we create a simple mutex to prevent duplicated work? This seems like a standard problem. Is there a standard solution?
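The standard Kubernetes answer across replicas is leader election: every replica runs, but only the one holding a coordination.k8s.io Lease reconciles (client-go ships this in k8s.io/client-go/tools/leaderelection). For deduplicating work between handlers inside a single process, a mutex-per-key table is usually enough; a stdlib-only sketch (the GameServer name is a placeholder, and this does not coordinate across replicas):

```go
package main

import (
	"fmt"
	"sync"
)

// keyedLocks serializes reconciles per GameServer so two workers
// never process the same object at once. This is single-process
// dedup only; cross-replica coordination still needs leader election.
type keyedLocks struct{ m sync.Map }

func (k *keyedLocks) withLock(key string, fn func()) {
	mu, _ := k.m.LoadOrStore(key, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()
	fn()
}

func main() {
	var locks keyedLocks
	var wg sync.WaitGroup
	count := 0
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All four workers target the same GameServer key,
			// so the increments are fully serialized.
			locks.withLock("gameserver-mc9cw-8p8wt", func() { count++ })
		}()
	}
	wg.Wait()
	fmt.Println(count) // prints 4: every event ran, but one at a time
}
```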

Add Prometheus instrumentation to Reconcile

We should add Prometheus instrumentation to the controller in order to track the following metrics:

  • octops_ingress_reconcile_success: Counter for each reconcile event that succeeded
  • octops_ingress_reconcile_failed: Counter for each reconcile event that failed
  • octops_ingress_reconcile_duration_ms: Time taken to complete a reconcile event in milliseconds
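In Go this would typically be done with prometheus/client_golang (Counter/Histogram values served by promhttp.Handler). As a dependency-free sketch of what the scrape output would look like, here is the Prometheus text exposition format for the three proposed metrics (the values are made up):

```go
package main

import (
	"fmt"
	"net/http"
)

// metrics renders the proposed counters in Prometheus text exposition
// format. A real implementation would register prometheus.Counter and
// prometheus.Histogram values instead of formatting lines by hand.
func metrics(success, failed int, durationMs float64) string {
	return fmt.Sprintf(
		"octops_ingress_reconcile_success %d\n"+
			"octops_ingress_reconcile_failed %d\n"+
			"octops_ingress_reconcile_duration_ms %g\n",
		success, failed, durationMs)
}

func main() {
	// Expose the endpoint Prometheus would scrape.
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, metrics(42, 3, 12.5))
	})
	fmt.Print(metrics(42, 3, 12.5))
	// http.ListenAndServe(":8080", nil) // uncomment to actually serve
}
```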

The gameserver-ingress-controller should expose a [PORT]/metrics endpoint to be scraped by Prometheus. The implementation should be extensible and take into consideration that new metrics might be added in the future.

References to Prometheus instrumentation for Go applications can be found at https://prometheus.io/docs/guides/go-application/

tls-secret-name requires terminate-tls=true, which makes the cert-issuer override that certificate in kubernetes

Hello,
I can't set tls-secret-name without setting terminate-tls to "true".
The problem is that when I set terminate-tls to "true", cert-manager will try to create a cert with the same name as the one in tls-secret-name, and will take over the cert that was supposed to be created manually or imported from a different source.

After inspecting the Go code, the problem seems to be that setting "terminate-tls" to true activates both "tls-secret-name" and "issuer-tls-name".

Since WithTLSCertIssuer runs after WithTLS, could you please add a check in WithTLSCertIssuer for whether the TLS fields are already set, to avoid invoking the CertIssuer and overriding the certificate?
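A guard along those lines, sketched with simplified stand-in types (the option names WithTLS/WithTLSCertIssuer come from the issue; the controller's actual structs and annotation keys differ):

```go
package main

import "fmt"

// Ingress is a simplified stand-in for the controller's ingress options.
type Ingress struct {
	Annotations map[string]string
	TLSSecret   string
}

// WithTLS records the user-supplied secret name.
func WithTLS(secret string) func(*Ingress) {
	return func(ing *Ingress) { ing.TLSSecret = secret }
}

// WithTLSCertIssuer only attaches the cert-manager issuer annotation
// when no TLS secret was set earlier, so a manually created or
// imported certificate is never overridden.
func WithTLSCertIssuer(issuer string) func(*Ingress) {
	return func(ing *Ingress) {
		if ing.TLSSecret != "" {
			return // secret already provided; skip cert-manager
		}
		ing.Annotations["cert-manager.io/cluster-issuer"] = issuer
	}
}

func main() {
	ing := &Ingress{Annotations: map[string]string{}}
	for _, opt := range []func(*Ingress){WithTLS("my-imported-cert"), WithTLSCertIssuer("letsencrypt")} {
		opt(ing)
	}
	fmt.Println(len(ing.Annotations)) // prints 0: issuer annotation skipped
}
```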

Thank you for the cool repo btw :D

Octops causes ingress controller to constantly reload which causes dropped websocket connections

Creation of Ingress objects causes a reload of the nginx controller, which ultimately shuts down all nginx worker processes (https://kubernetes.github.io/ingress-nginx/how-it-works/#when-a-reload-is-required). All existing websocket connections serviced by that controller will eventually be disconnected once worker_shutdown_timeout expires: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#worker-shutdown-timeout

We discovered this after consistently seeing our existing websocket connections, proxied through the same ingress controller as octops, disconnect at approximately the same time, at roughly 4-minute intervals (240s is the default worker-shutdown-timeout). This is also an issue for HTTP connections that keep a socket open via keep-alive.
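If moving off ingress-nginx isn't immediately possible, the timeout itself can be raised through the controller's ConfigMap, which only delays the disconnect rather than preventing it (ConfigMap name and namespace depend on your install):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  worker-shutdown-timeout: "3600s"  # default is 240s; old workers linger longer before dropping connections
```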

This is documented in a few places:
kubernetes/ingress-nginx#6731
kubernetes/ingress-nginx#7115

And a good summary write up below:
https://danielfm.me/post/painless-nginx-ingress/#ingress-classes-to-the-rescue

But ultimately a reload of the configuration will eventually cause socket connections to drop. In the first link above, the nginx developers' position is that the client library should handle the reconnect, which is obviously a problem when dealing with real-time games all running over websocket through the same ingress controller.

It should be noted that nginx+ (enterprise paid product) does not have this limitation:
(https://www.nginx.com/faq/how-does-zero-downtime-configuration-testingreload-in-nginx-plus-work/)
https://www.nginx.com/blog/using-nginx-plus-to-reduce-the-frequency-of-configuration-reloads/
