NLB services with "externalTrafficPolicy: Local" route traffic to nodes that cannot handle it for a short time when a node joins the cluster (cloud-provider-aws issue, closed)

kubernetes avatar kubernetes commented on August 17, 2024 16
NLB services with "externalTrafficPolicy: Local" route traffic to nodes that cannot handle it for a short time when a node joins the cluster

Comments (45)

M00nF1sh avatar M00nF1sh commented on August 17, 2024 13

There is a known issue with NLB where it routes traffic to newly registered targets even before they pass the initial health check. The NLB team is rolling out a new health-check system which likely addresses this.

However, externalTrafficPolicy: Local inherently relies on the load balancer failing its health check once a pod has migrated to another node, so there will be failed requests before the health check fails.

I'd recommend using the NLB-IP mode introduced by the AWS Load Balancer Controller, which can achieve zero downtime with the proper configuration (pod preStop hook + readinessGate + multiple replicas). With NLB-IP mode we can also preserve the client IP without using Proxy Protocol, which I believe is the most important factor when users choose externalTrafficPolicy: Local instead of externalTrafficPolicy: Cluster.

The current limitation is that NLB target registration can take up to 5 minutes, which makes rolling out a deployment under a zero-downtime configuration slow. The NLB team's new health-check system will bring the registration time down to about 3 minutes, and they are working on further improvements to bring it below 1 minute.
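
(A minimal sketch of such a Service in NLB-IP mode, for illustration; the name, selector, and ports are placeholders, and the annotation shown is the one used by early v2.x releases of the AWS Load Balancer Controller, which later versions replace, see further down the thread.)

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service                  # placeholder name
      annotations:
        # Handled by the AWS Load Balancer Controller; targets are pod IPs.
        service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
    spec:
      type: LoadBalancer
      selector:
        app: my-service                 # placeholder selector
      ports:
      - name: http
        port: 80
        targetPort: http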

f-ld avatar f-ld commented on August 17, 2024 3

I have a similar issue.
In short:

  • a pool of "nodes" (default ones, where most services will run) where nginx is running
  • a pool of "myservice-nodes" where I have my service that is up/downscaling a lot (from 10 to 150 instances every day).

After being in touch with AWS support, I got confirmation that every time a new node is added, before the health check can test it and mark it unhealthy, it is considered healthy by the NLB.

So in my case, the NLB will send TCP SYN requests to that new node, where the nginx port will never be open, and the consequence in our case is not a 500 but a connection timeout.

If the target group could be created with instances from a single instance group (in my case the pool of "default" nodes), then I would not have issues with instances created frequently in the pool of "myservice-nodes" (which, as I said, happens very frequently during the day); it would only happen when a new "default" node is added (maybe once a week). So not fixed, but less visible.
Then I could also create a specific pool of nodes where I run my nginx instances, one that does not scale -> fixed, it would not happen any more.

avielb avatar avielb commented on August 17, 2024 2

Same for me. I was able to install and configure it, and I can confirm that the issue no longer happens when working with nlb-ip configured.

TBBle avatar TBBle commented on August 17, 2024 2

Not specifically? I expect this problem would affect any LoadBalancer Service with the service.beta.kubernetes.io/aws-load-balancer-type: nlb annotation, when run with the AWS Cloud Provider v1 or the in-tree Kubernetes AWS cloud provider.
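
(For illustration, roughly any Service shaped like the following sketch would be affected when provisioned that way; the name, selector, and ports below are placeholders.)

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx               # placeholder name
      annotations:
        # Legacy annotation handled by the in-tree/v1 AWS cloud provider:
        # provisions an NLB whose targets are the cluster's nodes (instance mode).
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
    spec:
      type: LoadBalancer
      # Only nodes passing the healthCheckNodePort check should get traffic;
      # newly joined nodes are the window this issue is about.
      externalTrafficPolicy: Local
      selector:
        app: ingress-nginx              # placeholder selector
      ports:
      - name: http
        port: 80
        targetPort: http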

dberuben avatar dberuben commented on August 17, 2024 1

I have EKS 1.18 and tried to use NLB-IP with no luck.
Stuck with: Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer

TBBle avatar TBBle commented on August 17, 2024 1

Okay. If you're still having trouble with the AWS LoadBalancer Controller after you install it (or having problems installing it), it probably makes more sense to open an issue on https://github.com/kubernetes-sigs/aws-load-balancer-controller than to ask questions on this ticket, as that way people who are currently working on and using it will be able to help.

dberuben avatar dberuben commented on August 17, 2024 1

So I was able to install the AWS LoadBalancer Controller without any issue; I'm seeing my new nlb-ip, which looks like this when you describe the svc k8s-nginxing....
I'm also seeing Pod IPs in the TargetGroup.
Draining pods takes time, but I think I now need to tweak my controller.

[image attachment]

TBBle avatar TBBle commented on August 17, 2024 1

Another interesting thing is that when I curl the health-check port, though routable through any node, the response status code differs depending on whether the node I curl against has an nginx pod running on it. It's 503 on nodes with no nginx, and 200 on nodes that have nginx.

That's the intended behaviour of the health-check port. It tells you whether this node has that service available or not. That's what I meant about it not being routed: it's handled locally on every node, based on the pod's presence on that node.

Because you're using externalTrafficPolicy: Local, only nodes that have a copy of the pod will respond; the rest are probably holding the port open to make sure nothing else claims it, but are not expected to respond to traffic on that port.

The point of externalTrafficPolicy: Local is that the load balancer talks to each node, and only if the health-check port returns success (i.e. 200 or similar) does it then route traffic to that node for that service. So your testing shows it working precisely as intended.

If I ssh into a worker node with no Nginx running, then curl the traffic port (127.0.0.1:30467) it works as well

That's a little surprising. It's quite possible that accessing the NodePort from localhost does something odd; I don't know, and haven't looked at the rules kube-proxy has created for you. kube-proxy may have a rule for 127.0.0.1 port 30467 that does something clever, and it works when you curl it because you're hitting something internal to the system.

I have the vague idea that NodePort ports are not required to support 'localhost' access, as they may be handled by a variety of network systems, and may not even be using kube-proxy, etc.

It could also be a bug that, when accessing via localhost, externalTrafficPolicy: Local is treated as externalTrafficPolicy: Cluster. This would be reasonable if kube-proxy decided to forward the traffic because it's coming from a cluster node, since it can do so without losing the source IP, as it would for traffic from a load balancer. I don't think that's true, though.


Rereading your comment, you may have a misunderstanding. The health-check port is not going to the NGINX pod; it's being answered by kube-proxy locally. It's not the health check defined in the NGINX resources, it's the health check for the Service object's node port.

LHCGreg avatar LHCGreg commented on August 17, 2024

Also, switching the service to externalTrafficPolicy: Cluster did not remove the HTTP health check on the NLB. Health check failed on all nodes after the switch. I had to delete and recreate the service to get the NLB recreated to fix it.

TBBle avatar TBBle commented on August 17, 2024

Per Source IP for Services with Type=LoadBalancer, the HTTP health check used for externalTrafficPolicy: Local (on healthCheckNodePort) should not be routed to other nodes (this is not AWS-specific, but is part of kube-proxy), but perhaps the health check is mis-configured and is seeing the 'failure' response (503) as successful. I don't see any changes between 1.15 and 1.14.

I am on 1.15, and was also seeing that it added all nodes to the NLB, rather than only the nodes that hosted endpoints. I'm not sure why that is, but it seems like bad behaviour, particularly when (in my case) many of the nodes have taints that will exclude the Pods in the Service from ever living there.

However, I wasn't seeing (or perhaps just not noticing) the problem of newly added nodes being temporarily healthy in the NLB's Target Group, so I didn't replicate the '500' issue you're seeing. Perhaps it was fixed between 1.14 and 1.15, although I don't see any relevant changes in the legacy-cloud-providers/aws history.

Is it possible you're seeing the same issue as kubernetes/kubernetes#73362? That was fixed in Kubernetes 1.18, although it relates to failures in running pods, rather than pod startup.

TBBle avatar TBBle commented on August 17, 2024

We just replicated the problem where switching from externalTrafficPolicy: Local to externalTrafficPolicy: Cluster leaves the Target Groups in a bad state, on EKS.

Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.11-eks-af3caf", GitCommit:"af3caf6136cd355f467083651cc1010a499f59b1", GitTreeState:"clean", BuildDate:"2020-03-27T21:51:36Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

The Service is actually reporting the failure in its kubectl describe:

  Normal   ExternalTrafficPolicy   11m                  service-controller  Local -> Cluster
  Warning  SyncLoadBalancerFailed  11m                  service-controller  Error syncing load balancer: failed to ensure load balancer: Error modifying target group health check: "InvalidConfigurationRequest: You cannot change the health check protocol for a target group with the TCP protocol\n\tstatus code: 400, request id: 41fbdb6d-7b92-4a47-9386-1aef3f4242e2"

So the bug is that we can't change a health check protocol for a target group, and must delete and recreate them.

This is analogous to the problem fixed in v1.19 by kubernetes/kubernetes#89562, so it probably needs a similar fix around here. The error suggests that an http/https switch would be okay, but tcp/X is rejected. However, the AWS CLI v2 docs say that for an existing NLB you simply cannot modify this setting. The same is true for the interval seconds, timeout seconds, and the http/https return-code matcher.

To fix it, I had to delete the NLB and the Target Groups, and then either wait for the LB creation to retry automatically, or trivially modify the Service so it immediately recreates the NLB.

There's already a report for this at kubernetes/kubernetes#80996


I also note that in this mode there are only three nodes in my target groups; I assume it's submitting one node in each AZ.

I would guess that the list of nodes to submit to the Load Balancer is a k8s choice, not an AWS Cloud Provider choice, as the AWS Cloud Provider doesn't check externalTrafficPolicy until it is setting up health checks, and the list of instances was already provided elsewhere.

fejta-bot avatar fejta-bot commented on August 17, 2024

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

LHCGreg avatar LHCGreg commented on August 17, 2024

/remove-lifecycle stale

fejta-bot avatar fejta-bot commented on August 17, 2024

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

LHCGreg avatar LHCGreg commented on August 17, 2024

/remove-lifecycle stale

colinhoglund avatar colinhoglund commented on August 17, 2024

After being in touch with AWS support, I got confirmation that every time a new node is added, before the health check can test it and mark it unhealthy, it is considered healthy by the NLB.

I also noticed intermittent connection timeouts and verified traffic being sent to unready nodes with VPC flow logs. I got this same response from AWS support and they unfortunately were not able to provide any clear workarounds. :/

TBBle avatar TBBle commented on August 17, 2024

Due to this limitation in NLB Instance mode, the best available workaround is probably to migrate to NLB IP mode, which requires the AWS LoadBalancer Controller, and Kubernetes 1.20, or 1.18 if you're using EKS. I think it also depends on the AWS VPC CNI.

avielb avatar avielb commented on August 17, 2024

I can confirm having the exact same issue as well.
@TBBle, you did the upgrade and transition to EKS 1.18 with nlb-ip mode; did it fix the issue for you?

TBBle avatar TBBle commented on August 17, 2024

I haven't deployed it yet myself.

It should fix the issue, because instead of AWS knowing about all your nodes and sending them traffic until they fail the first health check because there's no Pod on that node, NLB-IP mode sends traffic to the Pod's IP, which is only added to the Service once that Pod is actually ready to receive traffic.

In other words, NLB-IP is pessimistic (no traffic until the Pod passes the health check) where NLB (Instance) is optimistic (traffic is sent until the Node fails the health check).
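
(A rough sketch of the zero-downtime pieces mentioned earlier in the thread, i.e. readiness-gate injection, a preStop hook, and multiple replicas, assuming AWS Load Balancer Controller v2.x; the namespace, names, image, and sleep duration are placeholders.)

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-app                      # placeholder
      labels:
        # Asks the controller to inject target-health readiness gates into
        # pods in this namespace, so rollouts wait for NLB target registration.
        elbv2.k8s.aws/pod-readiness-gate-inject: enabled
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                      # placeholder
      namespace: my-app
    spec:
      replicas: 3                       # keep several ready targets during a rollout
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: app
            image: nginx                # placeholder image
            lifecycle:
              preStop:
                exec:
                  # Give the NLB time to drain this target before the pod exits.
                  command: ["sleep", "30"]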

TBBle avatar TBBle commented on August 17, 2024

@dberuben Have you installed the AWS LoadBalancer Controller? As far as I know, it's not deployed by default on EKS.

Not having the AWS LoadBalancer Controller installed is the only way I'd expect to see the "Ensuring load balancer" event but no following failure event showing what went wrong.

If you have installed the AWS LoadBalancer Controller, you'll have to check its logs to see what's gone wrong; see for example kubernetes-sigs/aws-load-balancer-controller#933 (comment) for how to view the logs.

dberuben avatar dberuben commented on August 17, 2024

@TBBle No, I didn't; let me try.

TBBle avatar TBBle commented on August 17, 2024

Another method to improve this situation: k8s 1.19 supports a new annotation, service.beta.kubernetes.io/aws-load-balancer-target-node-labels, which can limit the instances used by a CLB or NLB by label. See kubernetes/kubernetes#90943

It doesn't resolve this issue completely, but if your cluster is set up such that some nodes may never have instances of the pod you're exposing with the load balancer, then you can at least avoid traffic ever being routed to them; your LB will still optimistically route traffic to nodes which might have the pod, until they fail the health check.

I think nlb-ip is a better option for externalTrafficPolicy: Local (or even without that setting), and does not suffer this issue.
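
(A sketch of how that annotation might be used; "node-pool=ingress" is a placeholder for whatever label your node groups actually carry, and the rest of the Service is illustrative.)

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx               # placeholder name
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
        # Only nodes carrying this label are registered with the load balancer
        # (k8s 1.19+ with the in-tree/legacy AWS cloud provider).
        service.beta.kubernetes.io/aws-load-balancer-target-node-labels: node-pool=ingress
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      selector:
        app: ingress-nginx              # placeholder selector
      ports:
      - name: http
        port: 80
        targetPort: http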

nckturner avatar nckturner commented on August 17, 2024

/cc @M00nF1sh @kishorj

kolorful avatar kolorful commented on August 17, 2024

Per Source IP for Services with Type=LoadBalancer, the HTTP health check used for externalTrafficPolicy: Local (on healthCheckNodePort) should not be routed to other nodes (this is not AWS-specific, but is part of kube-proxy)

@TBBle Somehow this seems not to be the case. I've tested in two clusters (v1.18 and v1.20): requests to the traffic ports (30467, 30607) hang on nodes that don't have an Nginx Pod, but requests to the healthCheckNodePort (30308) are somehow re-routed to the correct nodes.

Service:
  externalTrafficPolicy: Local
  healthCheckNodePort: 30308
  ports:
  - name: http
    nodePort: 30467
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30607
    port: 443
    protocol: TCP
    targetPort: https

Update:
If I ssh into a worker node with no Nginx running, then curl the traffic port (127.0.0.1:30467) it works as well.

netstat -tulpn
tcp        0      0 0.0.0.0:30607           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 0.0.0.0:30320           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 0.0.0.0:31443           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 0.0.0.0:31860           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 0.0.0.0:30548           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 0.0.0.0:30070           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 0.0.0.0:31070           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 0.0.0.0:30467           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 0.0.0.0:32163           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp        0      0 0.0.0.0:32681           0.0.0.0:*               LISTEN      5582/kube-proxy
tcp6       0      0 :::30308                :::*                    LISTEN      5582/kube-proxy
tcp6       0      0 :::10249                :::*                    LISTEN      5582/kube-proxy

Update 2:
Another interesting thing is that when I curl the health-check port, though routable through any node, the response status code differs depending on whether the node I curl against has an nginx pod running on it. It's 503 on nodes with no nginx, and 200 on nodes that have nginx.

kgibcc avatar kgibcc commented on August 17, 2024

So I was able to install the AWS LoadBalancer Controller without any issue; I'm seeing my new nlb-ip, which looks like this when you describe the svc k8s-nginxing....
I'm also seeing Pod IPs in the TargetGroup.
Draining pods takes time, but I think I now need to tweak my controller.

[image attachment]

Have you tried this setup with TLS passthrough enabled? I'm having a bit of trouble getting it to work.

okossuth avatar okossuth commented on August 17, 2024

@LHCGreg You could use a DaemonSet instead of a Deployment for your ingress-nginx controller pods, that way every node will have one, and the NLB will be able to see the nodes as healthy...

TBBle avatar TBBle commented on August 17, 2024

That setup has the same problem with new nodes, in that they receive traffic immediately, while the local nginx-ingress Pod from the DaemonSet may still be starting up.

It also assumes you can run nginx-ingress on every node, which precludes dedicated nodes; otherwise you still have the nodes from which nginx-ingress is blocked by nodeSelector, antiAffinity, etc. receiving traffic until their health check fails on the NLB.

I did try DaemonSet for nginx-ingress (for a different reason) on one cluster build, but reverted it later, because that meant every node that was allowed to run nginx-ingress was running nginx-ingress, and that was a lot of wasted CPU resources at large scale.

stevehipwell avatar stevehipwell commented on August 17, 2024

I thought all of this was solved by using the out-of-tree AWS load balancer controller? Either in nlb-ip mode, or in instance mode using the node label selector annotation.

okossuth avatar okossuth commented on August 17, 2024

@stevehipwell @TBBle The problem with using the AWS load balancer controller is that it deploys an ALB as ingress object which for some use cases has severe limitations, like it only supports 25 SSL certs and 100 path rules per ALB. For this reason we are using ingress-nginx as ingress in our EKS cluster.

stevehipwell avatar stevehipwell commented on August 17, 2024

@okossuth I was referring to the AWS load balancer controller provisioning the NLB for the Ingress Nginx chart's service, so what I said previously stands.

okossuth avatar okossuth commented on August 17, 2024

@stevehipwell You mean using both ingress-nginx controller and AWS load balancer controller pods at the same time in an EKS cluster?

stevehipwell avatar stevehipwell commented on August 17, 2024

@okossuth I mean using the AWS load balancer controller instead of the in-tree controller for provisioning the NLB backing your Ingress Nginx.

https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/

okossuth avatar okossuth commented on August 17, 2024

@stevehipwell ok i will try that approach, thanks

TBBle avatar TBBle commented on August 17, 2024

@okossuth You're thinking of version 1 of the AWS Load Balancer controller, when it was known as the AWS ALB Ingress Controller. Version 2 now manages NLB Load Balancers too, and replaces the built-in support for Load Balancers in Kubernetes or the AWS Cloud Provider.

See https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/, although since that article was written, the annotation value has changed to be service.beta.kubernetes.io/aws-load-balancer-type: external and then other annotations are used to control the mode and things like Proxy Protocol support.
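
(Roughly, with controller v2.2+ the Service annotations look something like the following sketch; check the controller documentation for your version, and treat the proxy-protocol line as optional.)

      annotations:
        # Hands this Service to the AWS Load Balancer Controller instead of
        # the in-tree cloud provider.
        service.beta.kubernetes.io/aws-load-balancer-type: external
        # "ip" registers pod IPs directly; "instance" keeps node targets.
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        # Optional: enable Proxy Protocol v2 on the NLB targets.
        service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"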

This ticket should probably be closed, as I don't expect any improvement to the load-balancer support in AWS Cloud Provider version 1 (inherited from Kubernetes), and AWS Cloud Provider version 2 doesn't manage load balancers; that's now the job of the AWS Load Balancer Controller.

If there is a solution (either coming, or already done) for this problem in Instance mode, it'll be over in https://github.com/kubernetes-sigs/aws-load-balancer-controller.

kgibcc avatar kgibcc commented on August 17, 2024

I have not been able to get the AWS Load Balancer Controller to work with TLS passthrough. Using these annotations:

      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"

both externalTrafficPolicy: "Local" and externalTrafficPolicy: "Cluster" produce TLS handshake errors.

The goal is to use the AWS LB Controller, Proxy Protocol v2 + TLS passthrough + externalTrafficPolicy: "Cluster", but I haven't figured out how to get them all lined up.
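
(One thing worth checking, as an assumption rather than something confirmed in this thread: with the proxy-protocol annotation set, the NLB prepends a Proxy Protocol v2 header to every connection, and with TLS passthrough the backend must be configured to parse that header or the TLS handshake fails. For ingress-nginx that is a ConfigMap setting; the ConfigMap name and namespace below are the chart defaults and may differ in your install.)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller    # default name from the ingress-nginx chart
      namespace: ingress-nginx          # adjust to your install
    data:
      # Must match the NLB side: if the Service has the proxy-protocol
      # annotation, nginx needs to expect the Proxy Protocol header too.
      use-proxy-protocol: "true"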

TBBle avatar TBBle commented on August 17, 2024

@kgibcc It's probably better to ask about that on the AWS Load Balancer Controller issue tracker.

iamNoah1 avatar iamNoah1 commented on August 17, 2024

Hi @LHCGreg @TBBle @f-ld @avielb @kolorful @kgibcc @okossuth, do you consider this to be an issue with ingress-nginx?

k8s-triage-robot avatar k8s-triage-robot commented on August 17, 2024

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

kishorj avatar kishorj commented on August 17, 2024

/remove-lifecycle stale

k8s-triage-robot avatar k8s-triage-robot commented on August 17, 2024

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar k8s-triage-robot commented on August 17, 2024

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar k8s-triage-robot commented on August 17, 2024

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot avatar k8s-ci-robot commented on August 17, 2024

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

debu99 avatar debu99 commented on August 17, 2024

@M00nF1sh where did you see the news about the new NLB health-check system? Is there any progress?

vincentgna avatar vincentgna commented on August 17, 2024

@debu99 - the November 2022 announcement for improved NLB health checks is below; I can't figure out whether it also solved this NLB HC bug.

https://aws.amazon.com/about-aws/whats-new/2022/11/elastic-load-balancing-capabilities-application-availability/

Also, I'm not sure whether this externalTrafficPolicy: Local health-check bug also applies to ALBs? @M00nF1sh
