ngrok / kubernetes-ingress-controller

The official ngrok Ingress Controller for Kubernetes

Home Page: https://ngrok.com

License: MIT License

Dockerfile 0.21% Makefile 2.00% Go 95.13% Shell 1.87% Mustache 0.67% Nix 0.13%
ngrok ingress-controller kubernetes reverse-proxy

kubernetes-ingress-controller's Introduction


CI Status License Status Gateway API Preview Slack Twitter

ngrok Kubernetes Operator

Leverage ngrok for ingress in your Kubernetes cluster. Instantly add load balancing, authentication, and observability to your services via ngrok Cloud Edge modules using Custom Resource Definitions (CRDs) and Kubernetes-native tooling. This repo contains both our Kubernetes Ingress Controller and our implementation of the Kubernetes Gateway API.

Installation | Getting Started | Documentation | Developer Guide | Known Issues

Installation

Helm

Note: We recommend using the Helm chart to install the controller for a better upgrade experience.

Add the ngrok Ingress Controller Helm chart:

helm repo add ngrok https://ngrok.github.io/kubernetes-ingress-controller

Then, install the latest version (setting the appropriate values for your environment):

export NAMESPACE=[YOUR_K8S_NAMESPACE]
export NGROK_AUTHTOKEN=[AUTHTOKEN]
export NGROK_API_KEY=[API_KEY]

helm install ngrok-ingress-controller ngrok/kubernetes-ingress-controller \
  --namespace $NAMESPACE \
  --create-namespace \
  --set credentials.apiKey=$NGROK_API_KEY \
  --set credentials.authtoken=$NGROK_AUTHTOKEN

Note: The values for NGROK_API_KEY and NGROK_AUTHTOKEN can be found in your ngrok dashboard. They are used by the ingress controller to authenticate with ngrok when configuring and running your ingress traffic at the edge.

For a more in-depth installation guide, follow our step-by-step Getting Started guide.

Gateway API Preview

To install the developer preview of the Gateway API support, make the following changes to the instructions above.

First, install the v1 Gateway API CRDs before the Helm installation:

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml

Then, during the Helm install, set the experimental gateway flag:

helm install ngrok-ingress-controller ngrok/kubernetes-ingress-controller \
  --namespace $NAMESPACE \
  --create-namespace \
  --set credentials.apiKey=$NGROK_API_KEY \
  --set credentials.authtoken=$NGROK_AUTHTOKEN \
  --set useExperimentalGatewayApi=true  # gateway preview

YAML Manifests

Apply the sample combined manifest from our repo:

kubectl apply -n ngrok-ingress-controller \
  -f https://raw.githubusercontent.com/ngrok/kubernetes-ingress-controller/main/manifest-bundle.yaml

For a more in-depth installation guide, follow our step-by-step Getting Started guide.

Documentation

The full documentation for the ngrok Ingress Controller can be found in our Kubernetes docs.

Known Issues

Note

This project is currently in beta as we continue testing and receiving feedback. The functionality and CRD contracts may change. It is currently used internally at ngrok for providing ingress to some of our production workloads.

  1. Current issues of concern for production workloads are being tracked here and here.

Support

The best place to get support using the ngrok Kubernetes Operator is through the ngrok Slack Community. If you find bugs or would like to contribute code, please follow the instructions in the contributing guide.

License

The ngrok ingress controller is licensed under the terms of the MIT license.

See LICENSE for details.

kubernetes-ingress-controller's People

Contributors

abdiramen, alex-bezek, bobzilladev, carlamko, caseysoftware, ck-ward, ctindel, devmandy, dthomasngrokker, euank, jonstacks, josephpage, jrobsonchase, krwenholz, mschenck, nijikokun, nikolay-ngrok, ofthedelmer, russorat, salilsub, stmcallister, sudobinbash, vincetse


kubernetes-ingress-controller's Issues

Add support for helm to create the secret

Currently, the Helm chart can take a different secret name, but it relies on the user to create the secret beforehand. While this will likely be the most common way to install in production, for a quick start it would be nice to be able to pass the API key and authtoken as Helm values and have the chart create the secret for you.

Verify all path matching works as expected

Description

Kubernetes has detailed requirements about how paths match in different situations: https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
We should run through these manually or with automated tests to ensure our code and ngrok follow the Kubernetes requirements.

Multiple matches
In some cases, multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest matching path. If two paths are still equally matched, precedence will be given to paths with an exact path type over prefix path type.
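The precedence rules above can be sketched in Go. This is a hypothetical helper for illustration, not the controller's actual implementation; the `route` type and function names are assumptions:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// route pairs an ingress path with its pathType ("Exact" or "Prefix").
type route struct {
	Path     string
	PathType string
}

// matches reports whether a request path matches this route.
func (r route) matches(reqPath string) bool {
	if r.PathType == "Exact" {
		return reqPath == r.Path
	}
	// Prefix matching is done on a path-element basis.
	return reqPath == r.Path || strings.HasPrefix(reqPath, strings.TrimSuffix(r.Path, "/")+"/")
}

// bestMatch picks the winning route per the k8s rules: longest matching
// path first, then Exact beats Prefix at equal length.
func bestMatch(routes []route, reqPath string) (route, bool) {
	var candidates []route
	for _, r := range routes {
		if r.matches(reqPath) {
			candidates = append(candidates, r)
		}
	}
	if len(candidates) == 0 {
		return route{}, false
	}
	sort.SliceStable(candidates, func(i, j int) bool {
		if len(candidates[i].Path) != len(candidates[j].Path) {
			return len(candidates[i].Path) > len(candidates[j].Path)
		}
		return candidates[i].PathType == "Exact" && candidates[j].PathType != "Exact"
	})
	return candidates[0], true
}

func main() {
	routes := []route{
		{"/foo", "Prefix"},
		{"/foo/bar", "Prefix"},
		{"/foo/bar", "Exact"},
	}
	best, _ := bestMatch(routes, "/foo/bar")
	fmt.Println(best.Path, best.PathType) // longest match wins; Exact beats Prefix
}
```

Automated tests could exercise exactly these cases against the controller's routing.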

Use Case

Avoid unexpected path mismatching bugs

Related issues

No response

Pass Ingress Conformance Tests

We should aim to pass https://github.com/kubernetes-sigs/ingress-controller-conformance

• Default backend ❌
◦ Default backends are a separate field in the spec from the rules. It's normally used for 404 pages. It should probably be configured as a static-response backend type
• Host rules ❌
◦ Needs support for the TLS portion of the spec
◦ We'd also fail because it requires multiple rules in one Ingress object to pass
• Ingress class 👍
• Load Balancing 👍
• Path rules ⚠️ We should pass the rule checks but would fail because the test requires multiple rules

Add support for Request and Response Headers Route Module and Compression

Description

We currently have support for the compression HTTPSEdgeRoute module. Add support for adding/removing both request headers and response headers.

Use Case

This will allow a user to:

  • Remove Headers before they are sent to the backend
  • Add additional Headers before they are sent to the backend
  • Remove Headers from the response before they are sent back from the edge to the client
  • Add additional headers before they are sent back from the edge to the client

Related issues

No response

Add Support for SAML Edge Module

Description

Add support for SAML Edge module

Use Case

This module restricts endpoint access to only users authorized by a SAML IdP. Upstream servers behind a SAML-protected endpoint can safely assume that requests are from users authorized by the SAML IdP to access the protected resource.

Related issues

No response

Automatically Label PRs based on files changed

Description

Right now, labeling PRs based on the files they have changed is a manual process. PRs should automatically be labeled based on the files that they touch.

Use Case

Automatically labeling PRs will save us time and prevent mislabeling errors. It will also allow us to more easily search for PRs or issues relating to a given area.

Related issues

No

Implement an object store to provide world view of relevant k8s objects

Problem Statement
In order to support all ingress compliance checks and to empower more intelligent handling of ingress object updates, we need a world view of all ingress objects.


Approach
Implement the common store pattern found in numerous ingress controllers, such that:

  • controller boots and reads everything (nothing exists yet) and initializes the store

For example, on receiving a create event, the reconcile loop starts:

  • controller pushes data about the ingress into the store
  • controller reads aggregated data from the store
  • controller creates edges, reserves domains, and creates the tunnel CRD
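The boot-and-aggregate flow above can be sketched as a minimal thread-safe store. All names and the simplified string payload here are illustrative assumptions, not the repo's actual API:

```go
package main

import (
	"fmt"
	"sync"
)

// Store caches the controller's world view of ingress objects so each
// reconcile can read aggregated state instead of re-listing the API server.
type Store struct {
	mu        sync.RWMutex
	ingresses map[string]string // namespace/name -> desired host (simplified payload)
}

func NewStore() *Store {
	return &Store{ingresses: make(map[string]string)}
}

// Update is called from the reconcile loop when a create/update event arrives.
func (s *Store) Update(key, host string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.ingresses[key] = host
}

// Hosts returns the aggregated view used to create edges, reserve domains,
// and create tunnel CRDs.
func (s *Store) Hosts() []string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	hosts := make([]string, 0, len(s.ingresses))
	for _, h := range s.ingresses {
		hosts = append(hosts, h)
	}
	return hosts
}

func main() {
	store := NewStore()
	store.Update("default/app", "app.example.com")
	fmt.Println(store.Hosts())
}
```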

Wildcard domains

Description

The k8s ingress spec supports wildcard hostnames: https://kubernetes.io/docs/concepts/services-networking/ingress/#hostname-wildcards
Other ingress controllers that are simply proxy servers can set up rules to handle wildcard hosts and leave DNS up to the user. With ngrok, a matcher rule doesn't do anything if an edge isn't set up with an actual host with DNS routing to it, so we'll need to set up an edge.

Currently, if you try to use a wildcard, you get this error:

2023-02-14T16:57:35Z	ERROR	controllers.ingress	Failed to sync ingress to store	{"error": "Domain.ingress.k8s.ngrok.com \"*-bezek-hello-world-ingress-ngrok-io\" is invalid: metadata.name: Invalid value: \"*-bezek-hello-world-ingress-ngrok-io\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')"}

Use Case

Allow the utilization of wildcard domains

Related issues

No response

CRD Backend Support

Description

Currently, we create tunnel group backends for highly available tunnels to services within a k8s cluster. ngrok also supports these other backends:

  • HTTP response backend: a static HTTP response value
  • Failover backend: takes a list of other backends
  • Weighted backend: takes a list of other backends

If we treat our tunnel groups as a backend, and create CRDs for these other backends that can reference each other and tunnel groups, we could use them with the ingress spec.

Use Case

apiVersion: ingress.k8s.ngrok.com/v1alpha1
kind: HttpResponseBackend
metadata:
  name: http-response-backend-404
spec:
  body: "404 Not Found"
  status: 404
---
apiVersion: ingress.k8s.ngrok.com/v1alpha1
kind: FailoverBackend
metadata:
  name: failover-backend
spec:
  backends:
    - resource:
        apiGroup: ingress.k8s.ngrok.com/v1alpha1
        kind: HttpResponseBackend
        name: http-response-backend-404
    - service:
        name: test-service
        port:
          number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource-backend
spec:
  defaultBackend:
    resource:
      apiGroup: ingress.k8s.ngrok.com/v1alpha1
      kind: HttpResponseBackend
      name: http-response-backend-404
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /foo
            pathType: Prefix
            backend:
              service: # Makes a tunnel group backend for this service
                name: test-service
                port:
                  number: 80
              resource: # References other backend types
                apiGroup: ingress.k8s.ngrok.com/v1alpha1
                kind: FailoverBackend
                name: failover-backend

Related issues

No response

Ingress Controller Doesn't set CNAME Target on Ingress Status anymore

What happened

Prior to the refactoring to use the store/driver, the controller would update ingress objects with the CNAME target for custom domains so tools like external-dns could make white-label domains work automatically. The status is no longer being updated at all.

What you think should happen instead

When creating an ingress with a custom domain, the CNAME target should be written to the ingress status load balancer hostname field.

How to reproduce

Create an ingress object with a custom hostname and observe that the YAML version of the object doesn't have its status filled out properly:

k get ing minimal-ingress -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    k8s.ngrok.com/tls-min-version: "1.3"
  creationTimestamp: "2023-02-22T07:22:01Z"
  finalizers:
  - k8s.ngrok.com/finalizer
  generation: 1
  name: minimal-ingress
  namespace: ngrok-ingress-controller
  resourceVersion: "20862"
  uid: cd1970e6-91fa-452c-87bd-6da1c47b272d
spec:
  ingressClassName: ngrok
  rules:
  - host: test.alexbezek.io
    http:
      paths:
      - backend:
          service:
            name: http-echo-svc
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}

Update Docs

We have a couple of markdown docs and the main README, but now that the project is getting closer to being released, let's make some more user-friendly docs. We don't have to cover everything, but it would be good to get an overall structure together that can eventually be used for our main doc site.

Make better helm defaults

Because we use Helm for our local install and development, its values are tied heavily to our local environment, with values that don't make sense for production (like the latest image tag and a pullPolicy of Never). We should provide saner defaults and set up a way to supply an override values.yaml for our local development.

We should also, as part of this, document the Helm values and possibly auto-generate those docs.

Various review Feedback updates

From our product review meeting, we should make the following updates:

  • switch replica count to 1: free
  • remove log stdout option
  • credential secret name should just be 1 value
  • add server address as a config option
  • Should we explicitly require opting into the default credentials or explicitly specify a name? Could also tell them post-install if that secret doesn't exist
  • allow setting default ingress class via helm file

Allow ingress class controller name to be configurable

Description

While we offer Helm values to override the ingressClass name, we don't let you change the controllerName, which is matched against the ingress class's spec.controller field. So even though you can create multiple ingress classes, you can't deploy separate controllers to match each one specifically.

Use Case

Install multiple ngrok kubernetes ingress controllers watching different ingress classes

Related issues

No response

Default Backends

Description

K8s ingress has the concept of a default backend which isn't tied to any particular rule/route/host: https://kubernetes.io/docs/concepts/services-networking/ingress/#default-backend

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource-backend
spec:
  defaultBackend:
    service: # Service and Resource are mutually exclusive
      name: test
      port:
        number: 80
    resource: # Service and Resource are mutually exclusive
      apiGroup: k8s.example.com
      kind: StorageBucket
      name: static-assets

We can support services today.
For resources, once we create CRDs for things like a static-response backend, a commonly used case would be 404 responses.

Default backends are a bit unique, though, in that they don't have any host to match on. Something like nginx, which is routable via an IP, can still serve a default backend. For ngrok, if no hosts are set up, then no edge is actually created. A default backend therefore ends up being usable only when other ingress rules are configured that result in a routable edge.

Use Case

404s and other fallback pages are common use cases for the default backend

Related issues

No response

Allow Controller Log level to be configurable

Description

It's helpful for us when developing and debugging to have quite a few low-level debug logs; however, these can be far too noisy for regular users when things are working fine. We should let the user specify a log level via Helm values that is passed to the controller.

Use Case

add extra debug logs that would be helpful when debugging issues

Related issues

No response

Parse error: function "break" not defined

Kubernetes Version

1.25.3

Helm Chart Version

0.3.0

Helm Chart configuration

N/A

What happened

When running helm upgrade, I got the following message

Error: UPGRADE FAILED: parse error at (ngrok-ingress-controller/templates/NOTES.txt:47): function "break" not defined

What you think should happen instead

It should have shown me an example Ingress manifest for a service running in my cluster.

How to reproduce

Have an existing service in the cluster and run a helm install/upgrade

Define and Set supported k8s versions

Description

We need to test and document which versions of k8s and the ingress spec we officially support

Use Case

Some features may not work on older k8s versions, and we'll need to actively manage which past versions we support.

Related issues

No response

Synchronize ngrok resources in background process

Description

We created this store concept that houses all the gathered resource information; a Driver then has a sync function which calculates the total differences, syncs those to the ngrok API, and updates the k8s API with statuses.

We started by just plugging this into our current reconcile loop; however, we quickly found that the number of re-triggered reconcile events would cause sync to be called many times in a row and hit rate limits with the ngrok API. To fix this temporarily, we added a thread-safe reentrant lock: https://github.com/ngrok/kubernetes-ingress-controller/blob/main/internal/store/driver.go#L123-L134

Instead of doing this, we can trigger the sync operation on some interval in a background process. This will have the bonus effect of continually reconverging any changes made directly in the ngrok account back to match k8s.

Use Case

a simpler sync loop to think about and continual drift correction

Related issues

No response

Have CI run gen commands and fail on diffs

Kubebuilder supports auto-generating some files, like RBAC permissions, based on annotations in the code. We should have CI run these generation commands and fail if there are any diffs, so we don't forget to generate and check those files in.

A job or step in our workflows should run the gen commands and ensure no git diffs are produced. If diffs are produced, the CI job should fail.

Formalize release process for artifacts

Currently, we have GitHub Actions that release a new latest Docker image on each push to main and publish a new release of our Helm chart on each version bump in its Chart.yaml.

We need to formalize a release process for these artifacts, make sure it's fully automated using common community best practices, and document it in our README.

Migrate ingress annotations to CRD

Description

Currently, annotations on ingress objects are used to configure HTTPS Edge Modules. Example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    k8s.ngrok.com/tls-min-version: "1.3"

Some of the HTTPS Edge Modules are more complex and difficult to define in annotations. Additionally, there is currently no way to re-use HTTPS Edge Module configurations across ingresses.

With a CRD approach, users can define configuration for different route modules in a NgrokModuleSet CRD like so:

apiVersion: ingress.k8s.ngrok.com/v1alpha1
kind: NgrokModuleSet
metadata:
  name: my-module
modules:  
  compression:
    enabled: true
  tlsTermination:
    minVersion: "1.3"

and re-use this configuration across multiple ingresses, like so:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-1
  annotations:
    k8s.ngrok.com/modules: "my-module"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-2
  annotations:
    k8s.ngrok.com/modules: "my-module"

Use Case

This makes it easier to share common Edge Module configuration across ingresses.

Related issues

No response

Helm Lint & docs generation

We should lint our Helm chart, as is common good practice.

Additionally, if we include the right comments in our values.yaml file, there are GitHub Actions and other projects that can generate Markdown table docs of our values. This issue should also set that up and publish the result to a Markdown file in the repo.

ngrok api rate limits

What happened

With the addition of the store/driver, it's not uncommon to see ngrok rate-limit errors saying we reached our 120 calls per minute. While I can look in Datadog to see the API calls on our end, the controller doesn't offer the observability to determine why so many calls are being made.

What you think should happen instead

We should add logging and/or metrics about the number of API calls being made so we can make sure we aren't making unnecessary calls.

  • we could log each api call at a debug level log
  • create a metric for the number of api calls being made

How to reproduce

Apply a couple of ingresses; if there are enough resources, it hits rate limits.

Add Support for OIDC Edge Module

Description

Add support for OIDC Edge module.

Use Case

This module restricts endpoint access to only users authorized by an OpenID Identity Provider. Upstream servers behind an OIDC-protected endpoint can safely assume that requests are from users authorized by the OIDC IdP to access the protected resource.

Related issues

No response

Ngrok agent should default to the closest region geographically

What happened

When starting a tunnel, the default behaviour was typically to use the US region unless another region was specified. With recent changes, the default behaviour of the agent is now to use the closest region: https://ngrok.com/docs/guides/upgrade-v2-v3#changes-to-choosing-a-region

Currently, the helm value defaults to ""

## @param region ngrok region to create tunnels in. Defaults to empty to utilize the global network
region: ""

Which causes no argument to be passed to the controller:

args:
{{- if .Values.region }}
- --region={{ .Values.region}}
{{- end }}

Which causes it to default to the US:

c.Flags().StringVar(&opts.region, "region", "us", "The region to use for ngrok tunnels")

It's then used by the tunnel driver:

if opts.Region != "" {
connOpts = append(connOpts, ngrok.WithRegion(opts.Region))
}

What you think should happen instead

Instead, if no region override is provided to the Helm chart or controller, the tunnels should fall back to the default behaviour and not specify a region at all.
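In other words, the region flag should default to the empty string so the existing "only append the option when set" logic kicks in. A simplified sketch of that decision (the helper name and string options are illustrative, not ngrok-go's real API):

```go
package main

import "fmt"

// connOptsFor returns the connect options implied by the region flag: an
// empty region adds nothing, letting the agent pick the closest region.
func connOptsFor(region string) []string {
	var opts []string
	if region != "" {
		opts = append(opts, "region="+region) // stands in for ngrok.WithRegion(region)
	}
	return opts
}

func main() {
	fmt.Println(connOptsFor(""))   // no region: agent chooses the closest
	fmt.Println(connOptsFor("eu")) // explicit override still honored
}
```

The actual fix would be changing the flag default from "us" to "".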

How to reproduce

No response

Make a little magic happen

As part of the NOTES.txt that runs after the Helm installation, we should output some useful information about next steps the user could take to quickly see some useful ngrok functionality. This is open to other ideas, but one that was brought up was to query the cluster for the k8s dashboard and, if present, output an example of the ingress object needed to expose it behind OAuth.

Allow the controller to watch only a specific namespace

Currently, the ingress controller watches all namespaces. It's a common use case to need a controller that watches only a specific namespace, such as when people run a controller in a namespace for a particular team or environment.

Create a full bundle of manifests for easy deployment

Description

We should have an automated job that can run helm template and add the output as a single combined yaml file with all the documents required for a basic installation.

Use Case

Many repos have this so a user can simply kubectl apply -f https://raw.githubusercontent... for demo installations

Related issues

No response

Response/Request Header addition annotation change

Description

Currently, the annotation requires you to pass in a multi-line JSON string for the key/value pairs of the headers, such as this:

    k8s.ngrok.com/request-headers-add: |
      {
        "X-SEND-TO-BACKEND": "Value1"
      }

We could instead do this

k8s.ngrok.com/request-headers-add.X-SEND-TO-BACKEND: "Value1"

where the annotation key matches k8s.ngrok.com/request-headers-add.* and the suffix after the dot is the header name
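Parsing the proposed format could look like this sketch (a hypothetical helper, not current controller behavior):

```go
package main

import (
	"fmt"
	"strings"
)

// headersToAdd extracts request headers from annotations of the proposed
// form k8s.ngrok.com/request-headers-add.<HeaderName>: <value>.
func headersToAdd(annotations map[string]string) map[string]string {
	const prefix = "k8s.ngrok.com/request-headers-add."
	headers := make(map[string]string)
	for k, v := range annotations {
		if strings.HasPrefix(k, prefix) {
			if name := strings.TrimPrefix(k, prefix); name != "" {
				headers[name] = v
			}
		}
	}
	return headers
}

func main() {
	ann := map[string]string{
		"k8s.ngrok.com/request-headers-add.X-SEND-TO-BACKEND": "Value1",
		"kubernetes.io/ingress.class":                         "ngrok",
	}
	fmt.Println(headersToAdd(ann))
}
```

One trade-off to note: annotation keys have their own length and character restrictions, which would constrain the header names expressible this way.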

Use Case

Simplifying how request/response headers are added via annotations

Related issues

No response

Helm Common Values

We should audit and support the common helm values other controllers support.

Add Support for OAuth Edge Module

Description

Add support for OAuth Edge module.

Use Case

Allow users to easily add OAuth in front of services exposed via a ngrok Ingress.

Related issues

No response

Consolidate common CRUD logic for ngrok API

Description

Common logic has emerged across various controllers and ngrok API objects to make sure the API matches what is in the spec for the resources. The reconciliation looks something like:

DELETE_FLOW:
    If we don't have an ID,
        Do nothing and return
    If we do have an ID,
        Try to Delete it
            If there was no error, return
            If the error is a 404 not found, it's already been deleted, so don't return an error
            Return the error 

CREATE_OR_UPDATE_FLOW:
    CREATE_FLOW:
        If we can search and find a matching object, search for the object
            If we find an object
                Use it and return it and store the ID in a status field
        Create the object in the API and store the ID in a status field
 
    If we don't have an ID
        CREATE_FLOW
    If we do have an ID
        Check that it exists in the ngrok API
            If it does not exist
                 CREATE_FLOW
        Update the API to match the spec.

Consolidate this logic into a single place

Use Case

Currently we duplicate a lot of the logic for this flow between many controllers and across quite a few ngrok API resources. We should put this flow logic into a single place to make it easier to maintain.

Related issues

No response

Allow adding annotations to helm cluster roles

Description

We should create a helm value that can be used to pass in arbitrary annotations that get added to the cluster role.

Use Case

With a CRD, a user/serviceaccount needs permission to access a specific resource. Out of the box, we can access a CRD easily because auth defaults to the system:masters role. It is best practice not to use this role and instead create more locked-down roles.

Allowing metadata to be added to the cluster role would let users add entries like the aggregation labels below to easily give themselves access to the CRDs:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-cron-tabs-edit
  labels:
    # Add these permissions to the "admin" and "edit" default roles.
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"

Related issues

No response

Old Routes not being removed when updating ingress

What happened

When updating a path in the ingress, both the old path and the new path exist as HTTPSEdgeRoutes.

Ex:

---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: test
  namespace: default
  annotations:
    k8s.ngrok.com/https-compression: "true"
spec:
  ingressClassName: ngrok
  rules:
  - host: my-test.ngrok.io
    http:
      paths:
      - path: /path1   # update this to /path2
        pathType: Prefix
        backend:
          service:
            name: webhook-test
            port:
              number: 80

What you think should happen instead

I expected the old route to be removed from the HTTPS Edge in ngrok and only the new route /path2 to exist.

How to reproduce

Simply update the path selector on an ingress to reproduce.

Specific Useful Error when running behind firewall

Description

A very common problem when people try out ngrok is that they are behind a firewall that has blocked ngrok's outbound tunnel/session domain. Simulating this by turning off my wifi and restarting the pods, you just get a crash loop with these logs:

2023-03-13T22:22:40Z	INFO	controller-runtime.metrics	Metrics server is starting to listen	{"addr": ":8080"}
2023-03-13T22:22:40Z	INFO	setup	found matching ingress	{"ingress-name": "ingress-consul", "ingress-namespace": "default"}
Error: unable to create tunnel driver: failed to dial ngrok server with address "tunnel.ngrok.com:443": dial tcp: lookup tunnel.ngrok.com on 10.96.0.10:53: no such host
Usage:
  manager [flags]

Flags:
      --controller-name string             The name of the controller to use for matching ingresses classes (default "k8s.ngrok.com/ingress-controller")
      --election-id string                 The name of the configmap that is used for holding the leader lock (default "ngrok-ingress-controller-leader")
      --health-probe-bind-address string   The address the probe endpoint binds to. (default ":8081")
  -h, --help                               help for manager
      --metadata string                    A comma separated list of key value pairs such as 'key1=value1,key2=value2' to be added to ngrok api resources as labels
      --metrics-bind-address string        The address the metric endpoint binds to (default ":8080")
      --region string                      The region to use for ngrok tunnels
      --server-addr string                 The address of the ngrok server to use for tunnels
      --watch-namespace string             Namespace to watch for Kubernetes resources. Defaults to all namespaces.
      --zap-devel                          Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
      --zap-encoder encoder                Zap log encoding (one of 'json' or 'console')
      --zap-log-level level                Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
      --zap-stacktrace-level level         Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
      --zap-time-encoding time-encoding    Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano'). Defaults to 'epoch'.

2023-03-13T22:22:40Z	ERROR	setup	error running manager	{"error": "unable to create tunnel driver: failed to dial ngrok server with address \"tunnel.ngrok.com:443\": dial tcp: lookup tunnel.ngrok.com on 10.96.0.10:53: no such host"}

Use Case

Since this situation and error are so common, it would be ideal to give a very specific and detailed error message, with links to docs on how to address the problem.

Related issues

No response

Specify IP policy in Annotations by name

Description

Currently, you are only able to specify IP Policies by ID. You should be able to specify an IP Policy by name or ID in an annotation since we have a CRD for IP Policy.

Use Case

Rather than creating an IP policy via CRD and having to get the policy ID output, you should be able to specify the name of the IPPolicy resource in the ingress annotations and have the controller look up the ID.
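A lookup helper for this could be sketched as follows. The "ipp_" ID prefix, the name index, and all function names here are assumptions for illustration, not the controller's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// resolvePolicyID accepts either what looks like an ngrok IP policy ID
// (assumed here to be "ipp_"-prefixed) or the name of an IPPolicy CRD,
// resolving names through a provided index (e.g. built from the store).
func resolvePolicyID(value string, byName map[string]string) (string, error) {
	if strings.HasPrefix(value, "ipp_") {
		return value, nil // already an ID; pass through
	}
	if id, ok := byName[value]; ok {
		return id, nil
	}
	return "", fmt.Errorf("no IPPolicy named %q", value)
}

func main() {
	index := map[string]string{"office-only": "ipp_abc123"}
	id, err := resolvePolicyID("office-only", index)
	fmt.Println(id, err)
}
```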

Related issues

No response

Allow multiple versions of the controller to be installed without conflict

Description

There may be use cases for running multiple, separately configured and managed instances of this controller. To do so, we need to ensure all the resources we create via Helm are named specifically for their release name and version so there aren't conflicts.

Use Case

Other ingress controllers commonly have this use case for things like supporting internal vs. external load balancers. With the ngrok ingress controller, there may be use cases for running separate instances for different teams, or for configuring instances separately.

As we find real use cases, we should consider supporting them without requiring multiple controllers, but regardless it's a good practice to enable this.

Related issues

No response

Allow Custom MetaData

Allow users to pass custom metadata via Helm values that would be intermixed with the ingress controller's meta information.

This will change heavily when we address #35, so we should wait for that.

Update Tunnel CRD status with information about active tunnels

Description

Today, the tunnel controller watches but does not "manage" the tunnel CRD, meaning it doesn't do any updates to its status or finalizers. Each controller watches tunnels and creates them via ngrok-go. Ideally, those active tunnels are visible somewhere, such as on the tunnel status.

We can't easily do this today, though, as only one leader-elected controller can manage the status.

Use Case

This would give necessary insight into the status of your tunnels for debugging and troubleshooting purposes. That way, when you run something like kubectl get tunnels, you would get output like 3/3, showing the pods that have active tunnels.

Related issues

No response
