
Get public TCP LoadBalancers for local Kubernetes clusters

Home Page: https://docs.inlets.dev/reference/inlets-operator

License: MIT License


inlets-operator's Introduction

inlets-operator


Get public TCP LoadBalancers for local Kubernetes clusters

When using a managed Kubernetes engine, you can expose a Service as a "LoadBalancer" and your cloud provider will provision a TCP cloud load balancer for you, and start routing traffic to the selected service inside your cluster. In other words, you get ingress to an otherwise internal service.

The inlets-operator brings that same experience to your local Kubernetes cluster by provisioning a VM on the public cloud and running an inlets server process there.

Within the cluster, it runs the inlets client as a Deployment, and once the two are connected, it updates the original service with the IP, just like a managed Kubernetes engine.

Deleting the service or annotating it will cause the cloud VM to be deleted.


Change any LoadBalancer from <pending> to a real IP

Once the inlets-operator is installed, any Service of type LoadBalancer will get an IP address, unless you exclude it with an annotation.

kubectl run nginx-1 --image=nginx --port=80 --restart=Always
kubectl expose pod/nginx-1 --port=80 --type=LoadBalancer

$ kubectl get services -w
NAME               TYPE        CLUSTER-IP        EXTERNAL-IP       PORT(S)   AGE
service/nginx-1    ClusterIP   192.168.226.216   <pending>         80/TCP    78s
service/nginx-1    ClusterIP   192.168.226.216   104.248.163.242   80/TCP    78s

You'll also find a Tunnel Custom Resource created for you:

$ kubectl get tunnels

NAMESPACE   NAME             SERVICE   HOSTSTATUS     HOSTIP         HOSTID
default     nginx-1-tunnel   nginx-1   provisioning                  342453649
default     nginx-1-tunnel   nginx-1   active         178.62.64.13   342453649

We recommend exposing an Ingress Controller or Istio Ingress Gateway; see also: Expose an Ingress Controller

Plays well with other LoadBalancers

Want to create tunnels for all LoadBalancer services, but ignore one or two?

Want to disable the inlets-operator for a particular Service? Add the annotation operator.inlets.dev/manage with a value of 0.

kubectl annotate service nginx-1 operator.inlets.dev/manage=0

Want to ignore all services, then only create Tunnels for annotated ones?

Install the chart with annotatedOnly: true, then run:

kubectl annotate service nginx-1 operator.inlets.dev/manage=1
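
For reference, a minimal sketch of installing the chart in that mode; the repo URL and the provider/region values are taken from examples elsewhere on this page, so adjust them to your own setup:

helm repo add inlets https://inlets.github.io/inlets-operator/
helm repo update
helm upgrade --install inlets-operator inlets/inlets-operator \
  --set annotatedOnly=true \
  --set provider=digitalocean,region=lon1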

Using IPVS for your Kubernetes networking?

For IPVS, you need to declare a Tunnel Custom Resource instead of using the LoadBalancer field.

apiVersion: operator.inlets.dev/v1alpha1
kind: Tunnel
metadata:
  name: nginx-1-tunnel
  namespace: default
spec:
  serviceRef:
    name: nginx-1
    namespace: default
status: {}

You can pre-define the auth token for the tunnel if you need to:

spec:
  authTokenRef:
    name: nginx-1-tunnel-token
    namespace: default
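
As a rough sketch, the referenced secret could be created up front like this; the secret name matches the example above, while the key name and the generated token are assumptions:

kubectl create secret generic nginx-1-tunnel-token \
  --namespace default \
  --from-literal token="$(openssl rand -base64 32)"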

Who is this for?

Your cluster could be running anywhere: on your laptop, in an on-premises datacenter, within a VM, or on your Raspberry Pi. Ingress and LoadBalancers are a core building block of Kubernetes clusters, so Ingress is especially important if you:

  • run a private-cloud or a homelab
  • self-host applications and APIs
  • test and share work with colleagues or clients
  • want to build a realistic environment
  • integrate with webhooks and third-party APIs

There is no need to open a firewall port, set up port-forwarding rules, configure dynamic DNS or any of the usual hacks. You will get a public IP and it will "just work" for any TCP traffic you may have.

How does it compare to other solutions?

  • There are no rate limits on connections or bandwidth limits
  • You can use your own DNS
  • You can use any IngressController or an Istio Ingress Gateway
  • You can take your IP address with you - wherever you go

Any Service of type LoadBalancer can be exposed within a few seconds.

Since exit-servers are created in your preferred cloud (around a dozen are supported already), you'll only have to pay for the cost of the VM, and where possible, the cheapest plan has already been selected for you. For example with Hetzner (coming soon) that's about 3 EUR / mo, and with DigitalOcean it comes in at around 5 USD - both of these VPSes come with generous bandwidth allowances, global regions and fast network access.

Conceptual overview

In this animation by Ivan Velichko, you see the operator in action.

It detects a new Service of type LoadBalancer, provisions a VM in the cloud, and then updates the Service with the IP address of the VM.

Demo GIF

There's also a video walk-through of exposing an Ingress Controller

Installation

Check out the reference documentation for inlets-operator to get exit-nodes provisioned on different cloud providers here.

See also: Helm chart

Expose an Ingress Controller or Istio Ingress Gateway

Unlike other solutions, this:

  • Integrates directly into Kubernetes
  • Gives you a TCP LoadBalancer, and updates its IP in kubectl get svc
  • Allows you to use any custom DNS you want
  • Works with LetsEncrypt



Provider Pricing

The host provisioning code used by the inlets-operator is shared with inletsctl; both tools use the configuration in the table below.

Treat these costs as estimates; they will depend on your bandwidth usage and how many hosts you decide to create. You can check your cloud provider's dashboard, API, or CLI at any time to view your exit-nodes. The hosts listed have been chosen because they are the lowest-cost options the maintainers could find.

| Provider | Price per month | Price per hour | OS image | CPU | Memory | Boot time |
|---|---|---|---|---|---|---|
| Google Compute Engine * | ~$4.28 | ~$0.006 | Ubuntu 20.04 | 1 | 614MB | ~3-15s |
| Equinix Metal | ~$360 | $0.50 | Ubuntu 20.04 | 1 | 32GB | ~45-60s |
| Digital Ocean | $5 | ~$0.0068 | Ubuntu 18.04 | 1 | 1GB | ~20-30s |
| Scaleway | 5.84€ | 0.01€ | Ubuntu 20.04 | 2 | 2GB | 3-5m |
| Amazon EC2 | $3.796 | $0.0052 | Ubuntu 20.04 | 1 | 1GB | 3-5m |
| Linode | $5 | $0.0075 | Ubuntu 20.04 | 1 | 1GB | ~10-30s |
| Azure | $4.53 | $0.0062 | Ubuntu 20.04 | 1 | 0.5GB | 2-4min |
| Hetzner | 4.15€ | €0.007 | Ubuntu 20.04 | 1 | 2GB | ~5-10s |
  • * The first f1-micro instance in a GCP project (the default instance type for inlets-operator) is free for 720 hrs (30 days) a month

Video walk-through

In this video walk-through, Alex guides you through creating a Kubernetes cluster on your laptop with KinD, installing ingress-nginx (an IngressController) and then cert-manager, and finally, after the inlets-operator creates a LoadBalancer on the cloud, obtaining a TLS certificate from LetsEncrypt.

Video demo

Tutorial: Expose a local IngressController with the inlets-operator

Contributing

Contributions are welcome, see the CONTRIBUTING.md guide.

Also in this space

  • inlets - L7 HTTP / L4 TCP tunnel which can tunnel any TCP traffic. Secure by default with built-in TLS encryption. Kubernetes-ready with Operator, helm chart, container images and YAML manifests
  • MetalLB - a LoadBalancer for private Kubernetes clusters, cannot expose services publicly
  • kube-vip - a more modern Kubernetes LoadBalancer than MetalLB, cannot expose services publicly
  • Cloudflare Argo - product from Cloudflare for Cloudflare customers and domains - K8s integration available through Cloudflare DNS and ingress controller. Not for use with custom Ingress Controllers
  • ngrok - a SaaS tunnel service, restarts every 7 hours, limits connections per minute, SaaS-only, no K8s integration available, TCP tunnels can only use high/unconventional ports, can't be used with Ingress Controllers
  • Wireguard - a modern VPN, not for exposing services publicly
  • Tailscale - a mesh VPN that automates Wireguard, not for exposing services publicly

Author / vendor

inlets and the inlets-operator are brought to you by OpenFaaS Ltd.

inlets-operator's Issues

deployment.yaml template needs project-id command line argument for packet provisioning

The Helm deployment.yaml template is missing project-id for provisioning on packet.com

Expected Behaviour

template should pass project-id to the inlets-operator container

Current Behaviour

template is passing region twice instead of project-id

{{- if .Values.packetProjectId }}
- "-region={{.Values.packetProjectId}}"
{{- end }}

Possible Solution

replace region with project-id
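
A sketch of the corrected block, assuming the operator accepts a -project-id flag that mirrors the chart value:

{{- if .Values.packetProjectId }}
- "-project-id={{.Values.packetProjectId}}"
{{- end }}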

Context

Provisioning to packet.com with the latest version of the helm chart does not work, since project-id is not passed.

Your Environment

inlets/inlets-operator:0.5.4

[Feature request] Multiplex a single exit-node

Currently, inlets-operator creates one exit-node (DO Droplet, etc.) for each LoadBalancer created. This can add up quickly if a number of services are behind the load balancer.

Inlets offers the ability to set multiple upstreams. For example, from https://blog.alexellis.io/https-inlets-local-endpoints/:

inlets client \
 --remote wss://exit.domain.com \
 --upstream=gateway.domain.com=http://127.0.0.1:8080,prometheus.domain.com=http://127.0.0.1:9090

Will inlets-operator offer this capability as well?

Expected Behaviour

When creating a 2nd load balancer, if a valid exit-node already exists, use Caddy + inlets upstreams to differentiate services on the existing exit node

Current Behaviour

A second exit-node is created on the external provider.

Possible Solution

Use a forwarding setup similar to the one from the blog post, using Caddy to recognize a subdomain, etc.

Steps to Reproduce (for bugs)

kubectl create secret generic inlets-access-key --from-literal inlets-access-key="DO_ACCESS_KEY_HERE"
kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/artifacts/crd.yaml
kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/artifacts/operator-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/artifacts/operator.yaml
kubectl run nginx-1 --image=nginx --port=80 --restart=Always
kubectl run nginx-2 --image=nginx --port=80 --restart=Always
kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer
kubectl expose deployment nginx-2 --port=80 --type=LoadBalancer

Context

Your Environment

  • inlets version, find via kubectl get deploy inlets-operator -o wide

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop: Rancher

  • Kubernetes version kubectl version:

  • Operating System and version (e.g. Linux, Windows, MacOS): Linux CentOS 8

  • Cloud provisioner: DigitalOcean.com

Add Travis build for CI

Expected Behaviour

It'd be great to have a Travis build for CI

Current Behaviour

We don't have Travis yet

Possible Solution

Use alexellis/inlets as an example, after merging I'll add the token and Docker Hub credentials

[Bug] inlets client failed to connect when exposing port is 8080

Expected Behaviour

The inlets client in the k8s cluster should connect to the inlets server successfully.

Current Behaviour

The inlets client in the k8s cluster fails to connect to the inlets server.

Note that if we use the inlets client outside the k8s cluster, the connection succeeds, meaning the inlets server is working just fine.

Possible Solution

Steps to Reproduce (for bugs)

kind create cluster

# Install inlets-operator
kubectl apply -f ./artifacts/crds/
kubectl create secret generic inlets-access-key --from-literal inlets-access-key=<Linode Access Token>
helm install --set provider=linode,region=us-east inlets-operator ./chart/inlets-operator

# Install and expose echoserver
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
cat <<EOF | kubectl create -f -                      
apiVersion: v1  
kind: Service
metadata:
  labels:
    run: echoserver
  name: echoserver
  namespace: default
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: echoserver
  type: LoadBalancer
EOF

Logs

kubectl logs deployment.apps/echoserver-tunnel-client     
2020/07/18 10:25:36 Welcome to inlets.dev! Find out more at https://github.com/inlets/inlets
2020/07/18 10:25:36 Starting client - version 2.7.3
2020/07/18 10:25:36 Upstream:  => http://echoserver:8080
2020/07/18 10:25:36 Token: "p9DzT5gtqubn0EefdK9f9IxgEXjRGAXx57tiEnwR2henRhu7XovASoOvcHhklq0i"
time="2020/07/18 10:25:36" level=info msg="Connecting to proxy" url="ws://45.79.148.100:8080/tunnel"
time="2020/07/18 10:25:36" level=error msg="Failed to connect to proxy. Response status: 200 - 200 OK. Response body: CLIENT VALUES:\nclient_address=10.244.0.1\ncommand=GET\nreal path=/tunnel\nquery=nil\nrequest_version=1.1\nrequest_uri=http://45.79.148.100:8080/tunnel\n\nSERVER VALUES:\nserver_version=nginx: 1.10.0 - lua: 10001\n\nHEADERS RECEIVED:\nauthorization=Bearer p9DzT5gtqubn0EefdK9f9IxgEXjRGAXx57tiEnwR2henRhu7XovASoOvcHhklq0i\nconnection=Upgrade\nhost=45.79.148.100:8080\nsec-websocket-key=b5J+fZqRtaHdOHjv3hZ/7w==\nsec-websocket-version=13\nupgrade=websocket\nuser-agent=Go-http-client/1.1\nx-inlets-id=dcb2f0ffac964928a06bd64aaf5d60e8\nx-inlets-upstream==http://echoserver:8080\nBODY:\n-no body in request-" error="websocket: bad handshake"

Context

Your Environment

  • Can be reproduced in different k8s environments: Linode, DigitalOcean, kinD.
  • Can be reproduced using different apps, e.g. echoserver, openfaas
  • Can only be reproduced with free version of inlets, unable to reproduce with inlets PRO

Support request for network timeout with DigitalOcean API

I have followed the tutorial and it works correctly and reliably on my Mac, but on my k3s RPi cluster I am getting the error:

error syncing 'default/nginx-1-tunnel': Post https://api.digitalocean.com/v2/droplets: dial tcp: i/o timeout, requeuing

Expected Behaviour

The droplet should be created

Current Behaviour

No droplet created

Steps

  1. Git clone this repo
  2. Install arkade
    2.a Copy my DigitalOcean API key into the file /home/pi/faas-netes/do/key
  3. arkade install inlets-operator --provider digitalocean --region lon1 --token-file /home/pi/faas-netes/do.key
  4. kubectl run nginx-1 --image=arm32v7/nginx --port=80 --restart=Always
  5. kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer

Other

The inlets-access-key value didn't look right (even after base64 decoding) so I manually created a secret for inlets-access-key with the value of my API Key from Digital Ocean & deleted the operator pod. Sadly, same error.

Documentation for EC2 provider

Current Behaviour

Currently, there is no documentation on how to use the EC2 provider. While I think I figured out most of it, there are still a few things I'm not sure about. Documenting this for other people will help them get up to speed with Inlets faster.

Possible Solution

If I got everything right (which I assume, because I was able to complete the tutorial), you can start inlets-operator with the following parameters if you want to use the EC2 provider:

./inlets-operator \
 --kubeconfig /Users/lstigter/.kube/config \
 --provider ec2 \
 --region us-west-2 \
 --zone us-west-2a \
 --access-key $AWS_ACCESS_KEY_ID \
 --secret-key $AWS_SECRET_KEY \
 --license $LICENSE

For my trial, I used an account that had all privileges, but considering the implications of that, I'd like to grant only the least-privileged capabilities it needs to create the AWS resources. Would that be something like:

{
   "Version": "2012-10-17",
   "Statement": [{
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-9e1670f7",
        "arn:aws:ec2:region::image/ami-45cf5c3c",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:network-interface/*"
      ]
    }
   ]
}

tunnel-client "Failed to connect to proxy" errors

While going through the blog post I encountered errors with the tunnel client not being able to connect to the droplet or other tunnel server. The problem seems to be due to the dataplane, control, and service ports being the same.

Expected Behaviour

Tunnel client should be able to connect to the tunnel server to expose a private service via an inlet.

Current Behaviour

The tunnel client pod logs the following errors.

time="2019-10-06T16:12:26Z" level=info msg="Connecting to proxy" url="ws://<redacted>:80/tunnel"
time="2019-10-06T16:12:26Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-10-06T16:12:26Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"

Possible Solution

Allow specifying a control-port other than 80.

Steps to Reproduce (for bugs)

Follow steps from here using the digital ocean provisioner.

Context

I was experimenting with implementing an azure provisioner and encountered these problems. At first I thought it was a problem with my azure setup, but then I tried with the digital ocean provisioner and encountered the same errors.

Your Environment

Reproduced using the same steps from the blog post using kind on Ubuntu 18.04.

  • inlets version inlets --version 2.4.1

  • Docker/Kubernetes version docker version / kubectl version: v1.15.3

  • Operating System and version (e.g. Linux, Windows, MacOS): Ubuntu 18.04

  • Link to your project or a code example to reproduce issue: reproduced on master

[Feature] Add openstack provider

Adding an openstack provider will allow us to use great providers like OVH, offering instances starting at €0.008/hour.

I can work on it, submitting a PR on inletsctl.

Expected Behaviour

Adding the --provider openstack flag should provision a new instance on an openstack provider.

Current Behaviour

We can't provision an instance on providers using OpenStack.

Possible Solution

Using gophercloud: https://github.com/gophercloud/gophercloud
And adding the following flags:

  • openstack-endpoint
  • openstack-username
  • openstack-password

The GCE exit node doesn't delete with `kubectl delete svc`

The GCE exit node doesn't delete with kubectl delete svc

Expected Behaviour

Deleting the kubernetes service with a kubectl delete svc <svc_name> should delete the exit node on GCE with the service.

Current Behaviour

The service gets deleted as expected, but not the exit node.

Possible Solution

PR is underway

Steps to Reproduce (for bugs)

  1. Follow the steps in the README for exit node on GCE
  2. Run nginx with a service exposing the deployment
  3. Try deleting the service with kubectl delete svc <svc_name>

Your Environment

  • inlets-operator version, find via kubectl get deploy inlets-operator -o wide
    0.6.2

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop:
    minikube version: v1.2.0

  • Cloud provisioner: (DigitalOcean.com / Packet.com / etc)
    Google Cloud Platform

Support issue for Flux/k3d

Getting connection refused errors

time="2019-10-22T23:38:19Z" level=error msg="Failed to connect to proxy" error="dial tcp 178.128.168.110:8080: connect: connection refused"
time="2019-10-22T23:38:19Z" level=error msg="Failed to connect to proxy" error="dial tcp 178.128.168.110:8080: connect: connection refused"
time="2019-10-22T23:38:24Z" level=info msg="Connecting to proxy" url="ws://178.128.168.110:8080/tunnel"
time="2019-10-22T23:38:24Z" level=error msg="Failed to connect to proxy" error="dial tcp 178.128.168.110:8080: connect: connection refused"
time="2019-10-22T23:38:24Z" level=error msg="Failed to connect to proxy" error="dial tcp 178.128.168.110:8080: connect: connection refused"
time="2019-10-22T23:38:29Z" level=info msg="Connecting to proxy" url="ws://178.128.168.110:8080/tunnel"
time="2019-10-22T23:38:29Z" level=error msg="Failed to connect to proxy" error="dial tcp 178.128.168.110:8080: connect: connection refused"
time="2019-10-22T23:38:29Z" level=error msg="Failed to connect to proxy" error="dial tcp 178.128.168.110:8080: connect: connection refused"

Using flux, current manifest: https://github.com/colek42/k3d-bootstrap/tree/fe0f92d4025d40bc924e820362e0127698b94682/releases

Expected Behaviour

When the repo is bootstrapped I should see the default backend.

Current Behaviour

This error is happening when installing an ingress-nginx controller in a k3d environment. The tunnel should connect and I should see a default backend. Hitting the ingress controller on the NodePort works as expected.

Possible Solution

Steps to Reproduce (for bugs)

https://github.com/colek42/k3d-bootstrap/tree/fe0f92d4025d40bc924e820362e0127698b94682/releases (flux repo)

Context

I am trying to get ingress working on bootstrap.

Your Environment

  • inlets version, find via kubectl get deploy inlets-operator -o wide
    0.3.2

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop:
    k3d

  • Kubernetes version kubectl version:
    1.14.4 k3s

  • Operating System and version (e.g. Linux, Windows, MacOS):
    linux-ubuntu 19

  • Cloud provisioner: (DigitalOcean.com / Packet.com)
    DO

Digital Ocean Naive Install Spins Up multiple Droplets

I did a simple install of the inlets-operator and then installed an ingress on my cluster, which started multiple hosts on my Digital Ocean account (there was only a single service with type=LoadBalancer).

Expected Behaviour

A single DO droplet gets started and I then have an IP address for my Kind cluster, when I delete the Service it should delete the corresponding Droplet.

Current Behaviour

It seems to get stuck and spins up a new Droplet on each reconciliation.

Then when I delete the Helm Chart, the Droplets remain.

Steps to Reproduce (for bugs)

kind create cluster --name test

 kubectl create secret generic inlets-access-key \
            --from-literal inlets-access-key=$DO_TOKEN

kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/0.9.0/artifacts/crds/inlets.inlets.dev_tunnels.yaml
kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/0.9.0/artifacts/operator-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/0.9.0/artifacts/operator.yaml

helm install ingress ingress-nginx/ingress-nginx

Context

I really love this idea; I've been looking for something like this. In all honesty, I have not touched any of the default settings or configurations yet. I just worry that someone doing a naive install might spin up many Droplets without realising it, which could break DO's fair usage policy and get their account blocked (I have had this happen before).

Let me know if I can help out. I'd love to contribute

Your Environment

  • inlets-operator version: 0.7.4
  • kind version: v0.9.0 go1.15.2 darwin/amd64
  • kubernetes: v1.19.1
  • operating System: MacOs
  • Cloud Provisioner: Digital Ocean

Add Support for an Azure Container Instance Provisioner

Add support for using an Azure Container Instance as a provisioner.

Expected Behaviour

The behavior would be similar to the droplet provisioner except it would use a container instance resource on azure instead of a digital ocean droplet.

Current Behaviour

Currently, inlets servers are only supported on DigitalOcean and Packet.

Context

inlets provides a very nice way to expose services from a private cluster. It would be great if Azure customers could leverage this functionality.

Ports in tunnel are not updated when updating the LoadBalancer service

Expected Behaviour

When adding or removing ports of a LoadBalancer service, the ports of the tunnel should be updated as well.

Current Behaviour

Ports in the tunnel are not updated.

Steps to Reproduce (for bugs)

  1. Create a LoadBalancer service with a single port
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  type: LoadBalancer
  ports:
    - name: http
      port: 8080
      targetPort: 8080

inlets pod is created, logs:

2020/09/19 05:21:21 Welcome to inlets-pro! Client version 0.7.0
2020/09/19 05:21:21 Licensed to: Johan Siebens <*****>, expires: *** day(s)
2020/09/19 05:21:21 Upstream server: example-service, for ports: 8080
inlets-pro client. Copyright Alex Ellis, OpenFaaS Ltd 2020
time="2020/09/19 05:21:21" level=info msg="Connecting to proxy" url="wss://****:8123/connect"
  2. Update the service with an extra port:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  type: LoadBalancer
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: https
      port: 8443
      targetPort: 8443
  3. Tunnel is not updated with the extra port
    logs of the inlets-operator pod:
I0919 05:35:34.666837       1 controller.go:317] Successfully synced 'default/example-service-tunnel'
I0919 05:36:04.667099       1 controller.go:317] Successfully synced 'default/example-service-tunnel'
2020/09/19 05:36:24 Tunnel exists: example-service-tunnel
I0919 05:36:24.440714       1 controller.go:317] Successfully synced 'default/example-service'

Context

Your Environment

  • inlets-operator version: 0.9.1

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop: k3s

  • Kubernetes version kubectl version: 1.18

  • Operating System and version (e.g. Linux, Windows, MacOS): Linux

[Feature request] Trigger events for ip allocation

When running kubectl get events -w while adding a new loadbalancer, there is no event logged about the IP allocation for the service.

I can see an event when the service gets an IP through MetalLB, but nothing is registered for the inlets-operator.

# Example
kubectl get events -w
...
39s         Normal    IPAllocated         service/traefik                             Assigned IP "192.168.86.100"
39s         Normal    Synced              tunnel/nginx-tunnel                         Tunnel synced successfully
...

To reproduce

  • Create a pod running nginx
  • Expose nginx through a loadbalancer service
  • Watch the events with kubectl get events -w until the service and the tunnel are created.

You'll see an IPAllocated event for the loadbalancer and a Synced event for the tunnel.

It would be nice to have an IPAllocated event for the load balancer once the new exit node and the tunnel are ready.
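
A minimal Go sketch of how the operator could surface such an event using client-go's EventRecorder; the function and message wording are illustrative, not the actual operator code:

package controller

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// emitIPAllocated publishes an event against the Service so that it shows up
// in `kubectl get events -w`, mirroring what MetalLB does on IP assignment.
func emitIPAllocated(recorder record.EventRecorder, svc *corev1.Service, ip string) {
	recorder.Eventf(svc, corev1.EventTypeNormal, "IPAllocated",
		"Assigned IP %q via inlets exit-node", ip)
}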

  • inlets-operator version 0.2.6
  • Kubernetes distribution k3s 0.8
  • Kubernetes version kubectl version: v1.14.6-k3s.1

[BUG][GCE] Missing ports

Using GCE as the provider in inlets-operator, create a service of type LoadBalancer with the following specification:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: ingress-controller
  name: ingress-controller
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  - name: 443-443
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: ingress-controller
  type: LoadBalancer

The firewall rule does not open port 80 or 443; only port 8080 is opened.

Expected Behaviour

All ports defined in the LoadBalancer service have to be opened in the firewall rule

Current Behaviour

The operator only opens port 8080:

https://github.com/inlets/inlets-operator/blob/master/controller.go#L494

Possible Solution

The firewall-port should accept an array of ports.
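
A rough Go sketch of the shape such a fix could take, building the allowed-ports list from the Service spec instead of a single hardcoded port; the helper name and surrounding wiring are illustrative only:

package provision

import (
	"strconv"

	compute "google.golang.org/api/compute/v1"
	corev1 "k8s.io/api/core/v1"
)

// firewallAllowedFor collects every port declared on the LoadBalancer Service
// into a single "allowed" entry for the GCE firewall rule.
func firewallAllowedFor(svc *corev1.Service) []*compute.FirewallAllowed {
	ports := make([]string, 0, len(svc.Spec.Ports))
	for _, p := range svc.Spec.Ports {
		ports = append(ports, strconv.Itoa(int(p.Port)))
	}
	return []*compute.FirewallAllowed{
		{IPProtocol: "tcp", Ports: ports},
	}
}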

Steps to Reproduce (for bugs)

Deploy the inlets-operator with the gce provider.
Follow the getting started guide in this repo: https://github.com/inlets/inlets-operator#get-a-loadbalancer-provided-by-inlets

Context

I am trying to create an exit node to serve as a loadbalancer for my ingress controller (Nginx Ingress Controller).

Your Environment

  • inlets-operator version, find via kubectl get deploy inlets-operator -o wide
    Latest

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop:
    GKE 1.15.4

  • Kubernetes version kubectl version:
    1.17.1

  • Operating System and version (e.g. Linux, Windows, MacOS):
    COS

  • Cloud provisioner: (DigitalOcean.com / Packet.com / etc)

Google Cloud

[Feature request] Add cloudscale.ch provisioner

Expected Behaviour

The inlets operator supports provisioning an inlets endpoint at cloudscale.ch.

Current Behaviour

The IaaS provider cloudscale.ch is not supported.

Possible Solution

Add support for cloudscale.ch using their go sdk.

Context

Supporting the local IaaS provider is a cool thing to do and having alternatives to the big ones is always a good thing. And it gives you a Swiss IP address, which could be a good thing for some people.

[Bug] EC2 provisioner does not open client port

The version of inletsctl used by the operator does not allow access to any client ports in the security group that gets created. A PR that fixes this has now been merged.

Expected Behaviour

Connections to the Kubernetes service succeed.

Current Behaviour

Access to the client port is being blocked so connections fail.

Possible Solution

Steps to Reproduce (for bugs)

  1. Deploy an EC2 exit node with the operator
  2. Create a loadbalancer service
  3. Attempt to connect to the service using the exit node public IP

Your Environment

  • Operator 0.5.6
  • kind v0.6.0 go1.13.4 darwin/amd64
  • Kubernetes: v1.16.0
  • Operating System: MacOS 10.15.1 (19B88)
  • Cloud provisioner: AWS EC2

Read license from file in tunnel client deployment vs from argument

Read license from file in tunnel client deployment vs from argument

Expected Behaviour

I would like the license to be read from a file, i.e. a Kubernetes secret which is mounted, so that the secret can be updated without editing the deployments

Current Behaviour

If the license needs to be rotated, we have to edit each deployment in Kubernetes, or delete the deployment and have it recreated. The license also "leaks" via kubectl get deployment

Possible Solution

Create a secret for the inlets-pro license when deploying the operator, then bind this to each deployment. It may also be necessary to re-create the secret in each namespace where it is used by a tunnel deployment. I'm not sure that there's a way around this.
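
As a sketch, such a secret could be created along the same lines as the existing inlets-access-key secret; the secret name, key and file path below are assumptions:

kubectl create secret generic inlets-pro-license \
  --from-file license=$HOME/.inlets/LICENSE   # example path to the license file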

[Feature request] Support provisioning to multiple regions for exit-servers

Expected Behaviour

The operator should create the tunnel reliably without worrying what region is accepting new nodes.

Current Behaviour

When specifying the region with DigitalOcean I can only specify one region. When that region is currently not accepting any new nodes, provisioning just fails until DigitalOcean re-enables the zone.

Possible Solution

specify a comma separated list
-region=sfo1,sfo2
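
A self-contained Go sketch of that fallback idea, using stand-in types rather than the real provision package:

package provision

import (
	"fmt"
	"log"
	"strings"
)

// Host and Provisioner are stand-ins for the real provisioning types; the
// point is only to show walking a comma-separated region list.
type Host struct{ Region string }

type Provisioner interface {
	Provision(h Host) (string, error)
}

// provisionWithFallback tries each region in order until one accepts the exit-node.
func provisionWithFallback(p Provisioner, regionFlag string) (string, error) {
	for _, r := range strings.Split(regionFlag, ",") {
		id, err := p.Provision(Host{Region: r})
		if err == nil {
			return id, nil
		}
		log.Printf("region %s rejected the request: %v, trying next region", r, err)
	}
	return "", fmt.Errorf("no region in %q accepted the exit-node", regionFlag)
}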

Steps to Reproduce (for bugs)

Go to Digital Ocean, add a new node and check to see what region is not currently accepting new nodes and specify that region in the configuration.

Context

Your Environment

  • inlets-operator version, find via kubectl get deploy inlets-operator -o wide
    inlets/inlets-operator:0.6.6

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop:
    Rancher v2.3.2

  • Kubernetes version kubectl version:

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:29Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.9", GitCommit:"2e808b7cb054ee242b68e62455323aa783991f03", GitTreeState:"clean", BuildDate:"2020-01-18T23:24:23Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
  • Operating System and version (e.g. Linux, Windows, MacOS): Ubuntu 18.04.4 LTS

  • Cloud provisioner: DigitalOcean.com

Update load balancer status

The status.loadBalancer.ingress is not updated after being assigned an external IP address.

Expected Behaviour

After deploying a service of type LoadBalancer I expect that the status.loadBalancer.ingress will be updated with the IP address of the exit node.

Current Behaviour

The current status is empty but the service was successfully assigned an external IP address.

status:
  loadBalancer: {}

Possible Solution

This line is commented out which seems like it would fix the issue. Not sure if this is intentional.
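
For context, a rough client-go sketch of what writing the status back involves; the function name is illustrative and this is not the operator's actual code:

package controller

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setIngressIP writes the exit-node IP into status.loadBalancer.ingress so
// that tools such as Pulumi see the Service as ready.
func setIngressIP(ctx context.Context, client kubernetes.Interface, svc *corev1.Service, ip string) error {
	svc.Status.LoadBalancer.Ingress = []corev1.LoadBalancerIngress{{IP: ip}}
	_, err := client.CoreV1().Services(svc.Namespace).UpdateStatus(ctx, svc, metav1.UpdateOptions{})
	return err
}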

Steps to Reproduce (for bugs)

  1. Deploy the inlets-operator as described in this repo
  2. Deploy a service of type LoadBalancer
  3. Wait for the service to be assigned an external IP address
  4. Inspect the service kubectl get svc <my service> -o yaml
  5. Observe that the status.loadBalancer is empty

Context

First, awesome project. I was able to get inlets working on a private RPi cluster in no time. The only issue I am having now is that when using Pulumi, there is a check to make sure a service of type LoadBalancer is successfully assigned an IP address by inspecting status.loadBalancer.ingress[0].ip. Currently the inlets-operator does not update this status, so my deployment hangs (despite successfully creating an exit node and assigning an external IP address).

error: 2 errors occurred:
        * resource infrastructure/traefik was successfully created, but the Kubernetes API server reported that it failed to fully initialize or become live: 'traefik' timed out waiting to be Ready
        * Service was not allocated an IP address; does your cloud provider support this?

Your Environment

  • inlets-operator version, find via kubectl get deploy inlets-operator -o wide
    inlets/inlets-operator:0.7.0

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop:
    k3s on Raspberry Pi

  • Kubernetes version kubectl version:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2+k3s1", GitCommit:"cdab19b09a84389ffbf57bebd33871c60b1d6b28", GitTreeState:"clean", BuildDate:"2020-01-27T18:07:30Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/arm"} 
  • Operating System and version (e.g. Linux, Windows, MacOS):
    Windows

  • Cloud provisioner: (DigitalOcean.com / Packet.com / etc)
    Digital Ocean

[Feature] Add support for loadBalancerSourceRanges

Expected Behaviour

To configure a Load Balancer firewall, there is the option to use the Service's loadBalancerSourceRanges to define the ranges that should be allowed.

When possible, the provisioner could create firewall rules based on the ranges defined in loadBalancerSourceRanges

Example:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 8765
    targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 193.92.145.1/32
  - 193.92.145.2/32

With this example, a load balancer will be created that is only accessible to clients with IP addresses from 193.92.145.1 and 193.92.145.2.

Current Behaviour

The field loadBalancerSourceRanges is ignored.

Context

inlets-operator is great to expose some services to the public, but sometimes it could be useful to restrict access to the service based on CIDR ranges.

Proposal: Add Scaleway provider

Scaleway is a great and cheap provider. I think that this provider is well suited for this operator.

Expected Behaviour

Adding the --provider scaleway flag should provision a new scaleway instance.

Current Behaviour

We can't provision a scaleway instance.

Possible Solution

  1. Add scaleway.go file to the pkg/provision package
  2. Add scaleway in the controller.

Their new SDK seems to be pretty easy to use: https://github.com/scaleway/scaleway-sdk-go
I think that the controller part should be refactored to stay readable even if we add new providers.

I can do the pull request if you approve this proposal 😃

Context

I'm always using scaleway for all my tests: it's fast and cheap!

Feature request: customise GCE plan

Currently the exit-node plan is hardcoded here. I think it would be better to have some kind of customization for this plan.

Expected Behaviour

Plan should be customizable with the default value if not provided.

Current Behaviour

The operator uses a hardcoded value for the plan (machine type).

Possible Solution

Add additional run args or environment variable

Context

I'm using this operator to help extend my GCE credits for experiments. Before I start using the native GCE HTTP Load Balancer (which costs much more than this approach), I want to scale the tunnel first.

Backlog: create helm chart from static manifests

Expected Behaviour

We could use a helm chart to start being able to swap parameters for users such as the backend cloud, or inlets OSS / pro.

Current Behaviour

Not available

Possible Solution

Run this:

mkdir -p chart
cd chart
helm create inlets-operator

Then edit the YAML files generated by helm until they resemble those in ./artifacts/, test e2e and send a PR in.

Error swallowed when provisioning with GCP and no projectID

The error is swallowed when provisioning with GCP and no projectID.

Expected Behaviour

An error at the least in the logs, or a start-up error perhaps?

Current Behaviour

There's no error or log output, but also no status or IP; this was found by Duffie @mauilion.

Possible Solution

Reproduce the issue by misconfiguring without a project ID - (caused by alexellis/k3sup#186) - then see the error and do what we can to fix it.
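
A minimal sketch of the kind of start-up check that would surface the problem, with flag names assumed for illustration:

package main

import (
	"flag"
	"log"
)

func main() {
	provider := flag.String("provider", "", "cloud provider for exit-nodes")
	projectID := flag.String("project-id", "", "project ID, required for the gce provider")
	flag.Parse()

	// Fail fast with a clear message instead of silently producing no status or IP.
	if *provider == "gce" && *projectID == "" {
		log.Fatal("the gce provider requires -project-id to be set")
	}
}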

Context

Confusing for new users outside of the usual flow of using Packet/DO.

Technical support request

Expected Behaviour

I should NOT get the above error

Current Behaviour

Possible Solution

Steps to Reproduce (for bugs)

  1. I enter "inlets client --remote=ws://xxx.xxx.xxx.xxx --token XXXXX... --upstream="with new domain=http://192.168.1.141"
  2. My inlets is version 2.7.0.
  3. This was working in the past.

Context

I am trying to connect a new domain on my Raspberry Pi to my DO droplet. NO DOCKER, NO KUBERNETES.

Your Environment

Raspbian Buster on Raspberry Pi 2. 2 domains on 2 Raspberry Pi 3 B+ boards and 1 domain on a Raspberry Pi 4, all of which are running OK now, but when I try to add another domain and run "inlets client" I have started getting this error. I have also reported this to DO but so far no response from them.
  • inlets-operator version, find via kubectl get deploy inlets-operator -o wide
    NONE

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop:
    NONE

  • Kubernetes version kubectl version:
    NONE

  • Operating System and version (e.g. Linux, Windows, MacOS):
    Raspbian Buster Linux. One domain is powered by Joomla, 1 by WordPress and 1 by Gitea.

  • Cloud provisioner: (DigitalOcean.com / Packet.com / etc)
    DigitalOcean.com

Add Linode provisioner

Add Linode provisioner

Expected Behaviour

The Linode provisioner is now available in inletsctl - thanks to @zechenbit, we can add support to the operator now by changing a few lines of code.

Current Behaviour

Only available in inletsctl as of version 0.5.5 in the provision package

Possible Solution

Update the version of the provision package and update the controller.go code:

In the control-loop:

https://github.com/inlets/inlets-operator/blob/master/controller.go#L134

When setting the user data, OS, plan and image:

https://github.com/inlets/inlets-operator/blob/master/controller.go#L434

In main.go when parsing parameters:

https://github.com/inlets/inlets-operator/blob/master/main.go#L100

In arkade where we accept a provider flag:

https://github.com/alexellis/arkade/blob/master/cmd/apps/inletsoperator_app.go#L35

And in the inlets-operator chart perhaps too?

https://github.com/inlets/inlets-operator/tree/master/chart/inlets-operator

That should be everything.

Context

Adding a popular provisioner with around a dozen regions, all costing about 5 USD / mo.

License argument should be "inletsProLicense" not "license" in chart README

The argument passed to helm here:

### DigitalOcean with inlets-pro
```sh
helm upgrade inlets-operator --install inlets/inlets-operator \
--set license=JWT_GOES_HERE
```

Should be inletsProLicense not license.
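
A corrected invocation would presumably look like this, keeping the rest of the quoted command the same:

```sh
helm upgrade inlets-operator --install inlets/inlets-operator \
--set inletsProLicense=JWT_GOES_HERE
```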

Context

I ran into an issue applying the license after copying this line, but worked it out pretty quickly as the value is different in values.yaml.

[Feature] Add provisioner for DIY VPS

Expected Behaviour

[Feature] Add provisioner for VPS

Current Behaviour

Exit nodes can be provisioned to DO/Packet

Possible Solution

Instead of deploying a new instance, use an already-created VPS/custom VM.

Steps to Reproduce (for bugs)

  1. Deploy the inlets operator
  2. Tweak the config with a configmap
  3. Add the token with a secret
  4. Profit

Context

Inlets does support DIY VPS/VMs

Your Environment

[Feature] Add Civo.com provisioner support

Expected Behaviour

Provisioning to instances on Civo.com in the UK - for 5 USD / mo / instance.

Current Behaviour

Available in inletsctl at present, but not in the operator.

Possible Solution

Move the Civo code written for inletsctl into the "provision" package in this project

Related: #31

Create a secret for the license, rather than using (only) a flag (for the operator)

Create a secret for the inlets-pro license, rather than using (only) a flag

Expected Behaviour

The license should be read from a file, so as not to leak the value in kubectl get deploy inlets-operator

Current Behaviour

The license is shown in the deployment and via helm install when it's passed as a flag.

Possible Solution

Using a secret, like we do for the API access token would make sense.

A change in the arkade app for the inlets-operator would also be required.

This is where the license is being read as an arg:

https://github.com/inlets/inlets-operator/blob/master/main.go#L79

Here is an example of reading a file (name passed via flag):

https://github.com/inlets/inlets-operator/blob/master/main.go#L74

And here is the helm chart to update:

https://github.com/inlets/inlets-operator/blob/master/chart/inlets-operator/templates/deployment.yaml#L36

Add an if statement and attach a volume in the same way as we do for a secret when the file is given instead of a literal value.
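
A hedged sketch of that wiring as plain Deployment YAML rather than the chart template; the secret name, mount path and the --license-file flag are all assumptions for illustration:

spec:
  volumes:
    - name: inlets-license
      secret:
        secretName: inlets-license
  containers:
    - name: inlets-operator
      args:
        # hypothetical flag pointing at the mounted file
        - "--license-file=/var/secrets/inlets/license"
      volumeMounts:
        - name: inlets-license
          mountPath: /var/secrets/inlets
          readOnly: true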

Provisioning to Hetzner Cloud + some questions

Hi! This project looks really cool! A few questions if you don't mind:

  • is this for development environments only or would it work for production as well? I am worried about the lb being a single point of failure
  • would it be possible to add support for Hetzner Cloud? It's a very good provider with incredible prices. I use it and love it but they don't offer load balancers yet so I am considering using inlets. I could use a DigitalOcean droplet in Frankfurt for now since added latency would be small. Can the region in DO be set?
  • what about security? Does the lb provisioned have a firewall and things like fail2ban? Is password auth disabled?
  • if I use DigitalOcean for now, can the lb be changed later with possibly no downtime if/when inlets adds support for Hetzner Cloud?

Thanks a lot in advance!

Support issue with EC2 provisioner and AWS EC2 Classic

I'm attempting to install inlets-operator by way of arkade which results in:

$ kubectl logs -n inlets deploy/inlets-operator
2020/08/08 19:10:49 Inlets client: inlets/inlets:2.7.3
2020/08/08 19:10:49 Inlets pro: false
W0808 19:10:49.685014       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0808 19:10:49.685868       1 controller.go:121] Setting up event handlers
I0808 19:10:49.685900       1 controller.go:243] Starting Tunnel controller
I0808 19:10:49.685903       1 controller.go:246] Waiting for informer caches to sync
I0808 19:10:49.785997       1 controller.go:251] Starting workers
I0808 19:10:49.786007       1 controller.go:257] Started workers
2020/08/08 19:10:49 Creating tunnel for nginx-1-tunnel.default
I0808 19:10:49.789938       1 controller.go:315] Successfully synced 'default/nginx-1'
2020/08/08 19:10:49 Provisioning started with provider:ec2 host:nginx-1-tunnel
E0808 19:10:51.798084       1 controller.go:320] error syncing 'default/nginx-1-tunnel': InvalidParameter: The AssociatePublicIpAddress parameter is only supported for VPC launches.
        status code: 400, request id: 7bc05cdf-7eb5-4167-9f44-3616397a40c6, requeuing

Secondary to the above, I had a hard time finding documentation for installing with the AWS provider.

Expected Behaviour

Installing inlets-operator via arkade results in no error messages and creates an EC2 instance and whatever other steps should be done as part of what constitutes a successful install.

Documentation should be easier to find and should provide clear steps for a successful install and what to do if there's an issue. The documentation should also explicitly detail what will be done in the provider's account.

Current Behaviour

Possible Solution

Steps to Reproduce (for bugs)

export AWS_PROFILE=default
kubectl create ns inlets
arkade install inlets-operator -n inlets \
    -p ec2 \
    -r us-west-2 \
    -z us-west-2a \
    --token-file ~/Downloads/access-key \
    --secret-key-file ~/Downloads/secret-access-key

using arkade version 0.6.0

where access-key and secret-access-key files just contain the access key and secret access key respectively.

Context

Your Environment

  • inlets-operator version, find via kubectl get deploy inlets-operator -o wide

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop: Bare metal 4 node cluster.

  • Kubernetes version kubectl version: 1.18.6

  • Operating System and version (e.g. Linux, Windows, MacOS): Linux (Ubuntu 20.04)

  • Cloud provisioner: AWS (us-west-2)

[Bug] - Operator creating VMs in a loop

With or without the pro license, I get the following as per below:

alex@alexx:~$ kubectl logs deploy/inlets-operator -f
2020/07/11 08:11:05 Inlets client: inlets/inlets-pro:0.6.0
2020/07/11 08:11:05 Inlets pro: true
W0711 08:11:05.154290       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0711 08:11:05.156499       1 controller.go:120] Setting up event handlers
I0711 08:11:05.156579       1 controller.go:262] Starting Tunnel controller
I0711 08:11:05.156587       1 controller.go:265] Waiting for informer caches to sync
I0711 08:11:05.256813       1 controller.go:270] Starting workers
I0711 08:11:05.256860       1 controller.go:276] Started workers
2020/07/11 08:11:10 Creating tunnel for nginx-1-tunnel.default
I0711 08:11:10.673498       1 controller.go:334] Successfully synced 'default/nginx-1'
2020/07/11 08:11:10 Provisioning host with DigitalOcean
2020/07/11 08:11:16 Provisioning call took: 6.223206s
2020/07/11 08:11:16 Status (nginx-1): provisioning, ID: 199578679, IP: 
I0711 08:11:16.960909       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'
2020/07/11 08:11:16 Provisioning host with DigitalOcean
2020/07/11 08:11:17 Provisioning call took: 0.646848s
2020/07/11 08:11:17 Status (nginx-1): provisioning, ID: 199578680, IP: 
I0711 08:11:17.622808       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'
2020/07/11 08:11:17 Provisioning host with DigitalOcean
2020/07/11 08:11:18 Provisioning call took: 0.689612s
2020/07/11 08:11:18 Status (nginx-1): provisioning, ID: 199578681, IP: 
I0711 08:11:18.335153       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'
2020/07/11 08:11:18 Provisioning host with DigitalOcean
2020/07/11 08:11:19 Provisioning call took: 0.817013s
2020/07/11 08:11:19 Status (nginx-1): provisioning, ID: 199578682, IP: 
I0711 08:11:19.165284       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'
2020/07/11 08:11:19 Provisioning host with DigitalOcean
2020/07/11 08:11:19 Provisioning call took: 0.700888s
2020/07/11 08:11:19 Status (nginx-1): provisioning, ID: 199578683, IP: 
I0711 08:11:19.879867       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'
2020/07/11 08:11:35 Provisioning host with DigitalOcean
2020/07/11 08:11:36 Provisioning call took: 0.982947s
2020/07/11 08:11:36 Status (nginx-1): provisioning, ID: 199578696, IP: 
I0711 08:11:36.165374       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'
2020/07/11 08:11:36 Provisioning host with DigitalOcean
2020/07/11 08:11:36 Provisioning call took: 0.801765s
2020/07/11 08:11:36 Status (nginx-1): provisioning, ID: 199578697, IP: 
I0711 08:11:36.985114       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'

OSS:

alex@alexx:~$ kubectl logs deploy/inlets-operator -f
2020/07/11 08:17:07 Inlets client: inlets/inlets:2.7.3
2020/07/11 08:17:07 Inlets pro: false
W0711 08:17:07.318000       1 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0711 08:17:07.319667       1 controller.go:120] Setting up event handlers
I0711 08:17:07.319728       1 controller.go:262] Starting Tunnel controller
I0711 08:17:07.319736       1 controller.go:265] Waiting for informer caches to sync
I0711 08:17:07.419979       1 controller.go:270] Starting workers
I0711 08:17:07.420014       1 controller.go:276] Started workers
2020/07/11 08:17:07 Creating tunnel for nginx-1-tunnel.default
I0711 08:17:07.436008       1 controller.go:334] Successfully synced 'default/nginx-1'
2020/07/11 08:17:07 Provisioning host with DigitalOcean
2020/07/11 08:17:08 Provisioning call took: 1.108584s
2020/07/11 08:17:08 Status (nginx-1): provisioning, ID: 199579039, IP: 
I0711 08:17:08.561063       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'
2020/07/11 08:17:08 Provisioning host with DigitalOcean
2020/07/11 08:17:09 Provisioning call took: 0.849848s
2020/07/11 08:17:09 Status (nginx-1): provisioning, ID: 199579041, IP: 
I0711 08:17:09.420478       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'
2020/07/11 08:17:09 Provisioning host with DigitalOcean
2020/07/11 08:17:10 Provisioning call took: 0.731186s
2020/07/11 08:17:10 Status (nginx-1): provisioning, ID: 199579042, IP: 
I0711 08:17:10.172231       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'
2020/07/11 08:17:10 Provisioning host with DigitalOcean
2020/07/11 08:17:10 Provisioning call took: 0.682390s
2020/07/11 08:17:10 Status (nginx-1): provisioning, ID: 199579043, IP: 
I0711 08:17:10.868242       1 controller.go:334] Successfully synced 'default/nginx-1-tunnel'


Versions:

alex@alexx:~$ arkade version
            _             _      
  __ _ _ __| | ____ _  __| | ___ 
 / _` | '__| |/ / _` |/ _` |/ _ \
| (_| | |  |   < (_| | (_| |  __/
 \__,_|_|  |_|\_\__,_|\__,_|\___|

Get Kubernetes apps the easy way

Version: 0.4.0
Git Commit: e0f2b4094a81cedd95d3594b2e1e035745c24802
alex@alexx:~$ kind version
kind v0.8.0 go1.14.2 linux/amd64
alex@alexx:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
alex@alexx:~$ 

Sample license for testing:

Name:	Alex
Email:	[email protected]
Expires in: 14 days, on: 25 July 2020
Products: [inlets-pro]

Note: use text under the line
---
eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYW1lIjoiQWxleCIsImVtYWlsX2FkZHJlc3MiOiJhbGV4QG9wZW5mYWFzLmNvbSIsInByb2R1Y3RzIjpbImlubGV0cy1wcm8iXSwiYXVkIjoiand0LWxpY2Vuc2UiLCJleHAiOjE1OTU2NjQ2MDAsImp0aSI6IjgwODEiLCJpYXQiOjE1OTQ0NTUwMDAsImlzcyI6Imp3dC1saWNlbnNlIiwic3ViIjoiQWxleCJ9.2fcJgq38XROlQfk0y4lQLj4vt6z-kTNEvfJC0t-L_bJMEbtyra4GOC461gycktEX7pA3sBRI3jfOTh8BLo5QNQ

Support request for multiple LoadBalancer controllers

Is it possible to extend the annotations to bypass inlets as the LoadBalancer, flipping this around to create an annotation that the inlets-operator looks for to explicitly deploy? Like for many, MetalLB is the default bare-metal loadbalancer in a cluster for certain situations. Deploying the inlets-operator would result in two LBs competing; MetalLB will always attempt to service type LoadBalancer.

The inlets-operator is the ideal solution for routing external traffic into a private cluster without having to go the route of a site-to-site VPN and routing traffic. However, I suspect inlets will interrogate all Services of type LoadBalancer and open up VMs for each.

Use case trying to be solved: IP addresses being assigned to internal services/sites, and an API gateway being exposed to external internet traffic. inlets is ideal for the latter, whereas MetalLB is ideal for assigning IPs for internal use (private net).

DB

[Restated] Support request for multiple LoadBalancer controllers

Alex

This question was posed prior, you referenced a document which most of us are aware of and then closed the issue. Here it is restated

Is it possible to extend the annotations to bypass inlets as the LoadBalancer, flipping this around to create an annotation that the inlets-operator looks for to explicitly deploy? Like for many, MetalLB is the default bare-metal loadbalancer in a cluster for certain situations. Deploying the inlets-operator would result in two LBs competing; MetalLB will always attempt to service type LoadBalancer.

The inlets-operator is the ideal solution for routing external traffic into a private cluster without having to go the route of a site-to-site VPN and routing traffic. However, I suspect inlets will interrogate all Services of type LoadBalancer and open up VMs for each.

Use case trying to be solved: IP addresses being assigned to internal services/sites, and an API gateway being exposed to external internet traffic. inlets is ideal for the latter, whereas MetalLB is ideal for assigning IPs for internal use (private net).

DB

My commentary on the docs you posted: this still creates a situation where another LB is competing with the inlets-operator. The default behavior for the inlets-operator should be to check for an annotation and then be activated, not the converse, which is active unless annotated otherwise. Inlets was not meant to be a LB; an abundance of mainstream LB soft-options exist and are in use. Your recommended approach essentially requires going back and annotating every instance of a LB service and redeploying a manifest. The use case for the inlets-operator is selective deployment and function, not overarching LB instancing. I cannot think of one use case where someone who grants LB IP assignment to, say, even 20 services (can be anything from a DB, queue, etc.) would want inlets instantiating a remote VM for each. The inlets-operator has a very specific use case.

The loadbalancer external ip is not cleared when the tunnel is removed

If I manually remove the tunnel object associated with a loadbalancer, the garbage collector does not clear the external IP of the service. This makes the loadbalancer service state inconsistent.

I already have a working fix so I could open a PR with it.

Expected Behaviour

I would expect the exit-node IP to be removed from the external IPs on the LoadBalancer service.
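
For illustration only (this is not the fix from my PR), clearing the published IP with client-go v0.18+ might look roughly like this; the namespace and service name are placeholders:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Local kubeconfig for the sketch; the operator itself would use in-cluster config.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	svc, err := client.CoreV1().Services("default").Get(ctx, "nginx-1", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Drop the exit-node IP that was published for the tunnel.
	svc.Status.LoadBalancer.Ingress = nil
	if _, err := client.CoreV1().Services("default").UpdateStatus(ctx, svc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}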

Steps to Reproduce

  1. Create a new LoadBalancer service
kubectl run nginx-1 --image=nginx --port=80 --restart=Always
kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer

This will make the inlets-operator create a new tunnel and update the service with the exit-node IP.

  2. Annotate the LoadBalancer with dev.inlets.manage=false

This is where I think it becomes an edge case: I am going to remove the tunnel manually and I don't want the operator to recreate it.

  3. Remove the tunnel associated with the LoadBalancer
kubectl delete tunnels nginx-1-tunnel
  4. Get the service and check that the external IP is still there.
kubectl get services
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                  AGE
nginx-1          LoadBalancer   10.43.62.113   167.99.94.58   80:31447/TCP             25h
...

The service still has the exit-node IP in its external IP section.

Context

This is somewhat of an edge case, but I think it is worth fixing given the inconsistent state the LoadBalancer is left in after the tunnel is removed.

Support request

Expected Behaviour

I am unable to access HTTPS content through the proxy. Does this support HTTPS proxying? I didn't think this was an inlets-pro feature, but if it is, that would answer my question.

This could be a misunderstanding on my part of how inlets/inlets-operator works, so clarification would be great.

Current Behaviour

I have a k3s cluster configured with Traefik, cert-manager, and MetalLB. I used inlets to allow access into the cluster so that cert-manager could use HTTP-01 to issue the SSL certificate. If my local DNS points to the local IP (issued by MetalLB) of the Traefik LoadBalancer, I am able to see the site with a valid Let's Encrypt cert.

For external access, I set my DNS provider to point to the IP of the inlets tunnel on DigitalOcean. I am able to access the site using HTTP, but not HTTPS.

Possible Solution

Document the process for configuring LoadBalancers for HTTPS access.

Steps to Reproduce (for bugs)

1. Install k3s (with the --no-deploy servicelb option so I can use MetalLB)

2. Use Helm 3 to upgrade the k3s Traefik chart and enable SSL.

helm upgrade --reuse-values -f values.yaml -n kube-system traefik stable/traefik
# Traefik values.yaml
externalTrafficPolicy: Local
dashboard:
  enabled: true
  domain: traefik.local.lan
  ingress:
    annotations:
      traefik.ingress.kubernetes.io/whitelist-source-range: "192.168.1.0/24,127.0.0.0/8,::1/128"
ssl:
  enabled: true
  generateTLS: true

3. Install MetalLB using Helm 3

kubectl create namespace metallb-system
helm install metallb stable/metallb --namespace metallb-system -f values.yaml
# MetalLB values.yaml
configInline:
  address-pools:
    - name: default-ip-space
      protocol: layer2
      addresses:
        - 192.168.1.60-192.168.1.99

4. Install cert-manager

Config excluded for brevity, but the cert is issued successfully.

5. Install the inlets-operator using Helm 3

helm repo add inlets https://inlets.github.io/inlets-operator/
helm repo update
kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/artifacts/crd.yaml
kubectl create ns inlets
kubectl create secret -n inlets generic inlets-access-key \
    --from-literal inlets-access-key="CHANGEME"
helm install inlets-operator inlets/inlets-operator --namespace inlets -f values.yaml
# inlets values.yaml
provider: "digitalocean"
region: "nyc1"

6. Deploy an ARM build of httpbin for testing.

# httpbin ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpbin
  namespace: httpbin
  annotations:
    kubernetes.io/ingress.class: traefik
    # ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - secretName: httpbin-tls-cert
      hosts:
        - httpbin.mydomain.com
  rules:
    - host: httpbin.local.lan
      http:
        paths:
          - backend:
              serviceName: httpbin
              servicePort: 80
    - host: httpbin.mydomain.com
      http:
        paths:
          - backend:
              serviceName: httpbin
              servicePort: 80

7. Configure DNS

  • Configure local DNS to point to the MetalLB IP: httpbin.local.lan -> 192.168.1.60, httpbin.mydomain.com -> 192.168.1.60
  • Configure the DNS provider (Google Domains) to point to the DigitalOcean exit node: httpbin.mydomain.com -> xxx.xxx.xxx.xxx

Context

I am trying to enable secure access to services within my cluster. I am mainly looking to safely expose Home Assistant.

Your Environment

  • inlets-operator version, find via kubectl get deploy inlets-operator -o wide
    inlets/inlets-operator:0.6.3

  • Kubernetes distribution i.e. minikube v0.29.0., KinD v0.5.1, Docker Desktop:
    k3s v1.17.2+k3s1

  • Kubernetes version kubectl version:
    v1.17.2

  • Operating System and version (e.g. Linux, Windows, MacOS):
    Raspbian Buster Lite (Raspberry Pi 4)

  • Cloud provisioner: (DigitalOcean.com / Packet.com / etc)
    DigitalOcean.com

Set requests/limits for the tunnel clients

Description

Set requests/limits for the tunnel clients

Context

The tunnel client deployments created by the operator should have some basic requests/limits set, to avoid runaway pods and to upgrade from the default QoS class of BestEffort.
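
As a sketch only, the kind of ResourceRequirements the operator could attach to the client container might look like this in Go; the CPU/memory figures are illustrative assumptions, not agreed defaults:

package controller

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// clientResources returns example requests/limits for the inlets client
// container. Setting requests below limits moves the pod from BestEffort
// to Burstable; equal requests and limits would make it Guaranteed.
func clientResources() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("25m"),
			corev1.ResourceMemory: resource.MustParse("25Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("128Mi"),
		},
	}
}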

Show columns for tunnel crd with "-o wide"

Expected Behaviour

As per openfaas/ingress-operator#23, we can show additional data when users run kubectl get tunnels -o wide

Current Behaviour

Nothing is shown unless using -o yaml

Possible Solution

Add the most interesting fields here, especially the status and IP address.
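
For illustration, these could be declared as additionalPrinterColumns on the CRD; the sketch below uses the apiextensions Go types, the .status JSONPaths are assumed field names rather than confirmed ones, and Priority 1 marks a column that only appears with -o wide:

package controller

import (
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

// tunnelPrinterColumns sketches additionalPrinterColumns for the Tunnel CRD.
// The JSONPaths into .status are assumptions about the Tunnel status fields.
func tunnelPrinterColumns() []apiextensionsv1beta1.CustomResourceColumnDefinition {
	return []apiextensionsv1beta1.CustomResourceColumnDefinition{
		{Name: "Service", Type: "string", JSONPath: ".spec.serviceRef.name"},
		{Name: "HostStatus", Type: "string", JSONPath: ".status.hostStatus"},
		{Name: "HostIP", Type: "string", JSONPath: ".status.hostIP"},
		// Priority 1: shown only when users pass -o wide.
		{Name: "HostID", Type: "string", Priority: 1, JSONPath: ".status.hostId"},
	}
}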

Context

Thank you to Duffie @mauilion for this suggestion.
