fairwindsops / goldilocks

Get your resource requests "Just Right"

Home Page: https://fairwinds.com

License: Apache License 2.0


goldilocks's Introduction

Goldilocks

Get your resource requests "Just Right"

Goldilocks is a utility that can help you identify a starting point for resource requests and limits.

Documentation

Check out the documentation at docs.fairwinds.com

How can this help with my resource settings?

By running the Kubernetes Vertical Pod Autoscaler (VPA) in recommendation mode, we can see suggested resource requests for each of our apps. Goldilocks creates a VPA for each workload in a labeled namespace and then queries those VPAs for their recommendations.
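Concretely, you opt a namespace in by labeling it, and the controller then creates one recommendation-only VPA per workload it finds there. A minimal sketch of the two objects involved (namespace and workload names are illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    goldilocks.fairwinds.com/enabled: "true"
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: goldilocks-demo-app   # created and managed by the controller
  namespace: demo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  updatePolicy:
    updateMode: "Off"   # recommendation mode: record suggestions, never evict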

Once your VPAs are in place, you'll see recommendations appear in the Goldilocks dashboard:

[Screenshot: Goldilocks dashboard showing VPA recommendations]

Join the Fairwinds Open Source Community

The goal of the Fairwinds Community is to exchange ideas, influence the open source roadmap, and network with fellow Kubernetes users. Chat with us on Slack or join the user group to get involved!

Love Fairwinds Open Source? Automate it for free with Fairwinds Insights.

Other Projects from Fairwinds

Enjoying Goldilocks? Check out some of our other projects:

  • Polaris - Audit, enforce, and build policies for Kubernetes resources, including over 20 built-in checks for best practices
  • Pluto - Detect Kubernetes resources that have been deprecated or removed in future versions
  • Nova - Check to see if any of your Helm charts have updates available
  • rbac-manager - Simplify the management of RBAC in your Kubernetes clusters

Or check out the full list

Fairwinds Insights

If you're interested in running Goldilocks in multiple clusters, tracking the results over time, integrating with Slack, Datadog, and Jira, or unlocking other functionality, check out Fairwinds Insights, a platform for auditing and enforcing policy in Kubernetes clusters.


goldilocks's People

Contributors

baderbuddy, davekonopka, dependabot-preview[bot], dependabot[bot], dosullivan, evgkrsk, felipeschossler, hieptl, hjkatz, isugimpy, kengibous, kenm47, kimschles, lauraseidler, lucasreed, lukerogers, makoscafee, marcomicera, mlazowik, mstoetzer, rbren, reactiveops-bot, realnumber, sudermanjr, talziv, thewildmage, tnoeding, transient1, ylmrx, yogeek


goldilocks's Issues

Make units consistent

If the requests are currently set in, e.g., Mi, we should round the recommendation (which always seems to be reported in k for memory) to the same unit.

This way the dashboard will be more readable (no more comparing 128Mi to 262144k, which is exactly 250Mi), and any copy-pasted config will stick to the user's existing unit conventions.


Support for Jobs

As with #84 and #96, it would be really helpful for us to have support for pods created from Job resources. We use Jobs to run background workers that autoscale as the queues get loaded and cleared.

The workers are generated from queue hooks and, in our implementation, have unique names for each job. So I'm looking for something more complicated than the current Deployment handling, which provides metrics keyed on metadata.name. I'd like to aggregate resource recommendations across Jobs that share the same metadata.labels values.

This might not be possible at this time, because the Job resource doesn't appear to have a Scale subresource, which is required for the VPA targetRef.

Any thoughts on whether this is possible, or other approaches to achieve it?

Utilize OwnerRef

Use an OwnerReference to mark the VPA objects as owned by Goldilocks, rather than relying on the labels that are currently used.
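For reference, ownership in Kubernetes is expressed through metadata.ownerReferences. One plausible shape is to point the ownerReference at the workload the VPA tracks; a sketch (names and UID are illustrative; the UID must be the owner's actual UID):

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: goldilocks-demo-app
  namespace: demo
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment          # the workload this VPA was created for
      name: demo-app
      uid: 0e1f2a3b-1111-2222-3333-444455556666   # illustrative
      controller: true

A nice side effect of this shape: Kubernetes garbage collection would delete the VPA automatically when its owning Deployment is deleted.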

Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)

I'm getting this error after a fresh Goldilocks install in an AWS EKS cluster. I installed it via Helm; the deployed chart version is 0.3.0. In the web dashboard, no metrics are displayed.

Any clues as to what is causing it?

2020-03-06T08:54:07.405001221Z I0306 08:54:07.404805       1 controller.go:118] Starting controller.
2020-03-06T08:54:07.602403382Z I0306 08:54:07.602236       1 controller.go:121] Creating watcher for Deployments.
2020-03-06T08:54:07.60249999Z I0306 08:54:07.602270       1 controller.go:169] Creating controller for resource type deployment
2020-03-06T08:54:07.602507027Z I0306 08:54:07.602306       1 controller.go:141] Creating watcher for Namespaces.
2020-03-06T08:54:07.602510591Z I0306 08:54:07.602315       1 controller.go:169] Creating controller for resource type namespace
2020-03-06T08:54:07.60255054Z I0306 08:54:07.602336       1 controller.go:48] Starting watcher.
2020-03-06T08:54:07.602555589Z I0306 08:54:07.602502       1 controller.go:48] Starting watcher.
2020-03-06T08:54:08.202902349Z I0306 08:54:08.202609       1 controller.go:184] namespace/default has been added.
2020-03-06T08:54:08.202933072Z I0306 08:54:08.202644       1 controller.go:184] namespace/gloo-system has been added.
2020-03-06T08:54:08.202938012Z I0306 08:54:08.202653       1 controller.go:184] namespace/goldilocks has been added.
2020-03-06T08:54:08.202941616Z I0306 08:54:08.202659       1 controller.go:184] namespace/kube-node-lease has been added.
2020-03-06T08:54:08.401615356Z I0306 08:54:08.401457       1 controller.go:184] namespace/kube-public has been added.
2020-03-06T08:54:08.40164667Z I0306 08:54:08.401493       1 controller.go:184] namespace/kube-system has been added.
2020-03-06T08:54:08.401650968Z I0306 08:54:08.401505       1 controller.go:184] namespace/sc-victoria-metrics-single has been added.
2020-03-06T08:54:08.401991628Z I0306 08:54:08.401935       1 controller.go:60] Watcher synced.
2020-03-06T08:54:08.402070237Z I0306 08:54:08.402026       1 namespace.go:35] Namespace default updated. Check the labels.
2020-03-06T08:54:08.501668235Z E0306 08:54:08.501577       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:08.501700014Z I0306 08:54:08.501608       1 namespace.go:35] Namespace gloo-system updated. Check the labels.
2020-03-06T08:54:08.50359559Z E0306 08:54:08.503395       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:08.503609143Z I0306 08:54:08.503409       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:08.503837616Z I0306 08:54:08.503690       1 controller.go:184] deployment/default/test-nginx has been added.
2020-03-06T08:54:08.503847395Z I0306 08:54:08.503705       1 controller.go:184] deployment/gloo-system/discovery has been added.
2020-03-06T08:54:08.503850464Z I0306 08:54:08.503722       1 controller.go:184] deployment/gloo-system/gateway has been added.
2020-03-06T08:54:08.503852822Z I0306 08:54:08.503728       1 controller.go:184] deployment/gloo-system/gateway-proxy has been added.
2020-03-06T08:54:08.503855292Z I0306 08:54:08.503734       1 controller.go:184] deployment/gloo-system/gloo has been added.
2020-03-06T08:54:08.503857506Z I0306 08:54:08.503752       1 controller.go:184] deployment/goldilocks/goldilocks-controller has been added.
2020-03-06T08:54:08.503870416Z I0306 08:54:08.503758       1 controller.go:184] deployment/goldilocks/goldilocks-dashboard has been added.
2020-03-06T08:54:08.50387419Z I0306 08:54:08.503764       1 controller.go:184] deployment/kube-system/coredns has been added.
2020-03-06T08:54:08.602291407Z I0306 08:54:08.602174       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:08.602405262Z I0306 08:54:08.602366       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:09.003711717Z I0306 08:54:09.003619       1 controller.go:60] Watcher synced.
2020-03-06T08:54:09.003981577Z E0306 08:54:09.003935       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.004032108Z E0306 08:54:09.003957       1 namespace.go:38] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.004038356Z I0306 08:54:09.003968       1 namespace.go:35] Namespace goldilocks updated. Check the labels.
2020-03-06T08:54:09.204254456Z E0306 08:54:09.204145       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.204381073Z I0306 08:54:09.204342       1 namespace.go:35] Namespace kube-node-lease updated. Check the labels.
2020-03-06T08:54:09.205649469Z E0306 08:54:09.205570       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.206069518Z E0306 08:54:09.205986       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.206080037Z I0306 08:54:09.206005       1 namespace.go:35] Namespace kube-public updated. Check the labels.
2020-03-06T08:54:09.207360538Z E0306 08:54:09.207279       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.207369519Z I0306 08:54:09.207296       1 namespace.go:35] Namespace kube-system updated. Check the labels.
2020-03-06T08:54:09.20871957Z E0306 08:54:09.208667       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.208729207Z I0306 08:54:09.208684       1 namespace.go:35] Namespace sc-victoria-metrics-single updated. Check the labels.
2020-03-06T08:54:09.210567972Z E0306 08:54:09.210511       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.210619769Z I0306 08:54:09.210577       1 namespace.go:35] Namespace kube-public updated. Check the labels.
2020-03-06T08:54:09.211909174Z E0306 08:54:09.211830       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.211919313Z I0306 08:54:09.211846       1 namespace.go:35] Namespace sc-victoria-metrics-single updated. Check the labels.
2020-03-06T08:54:09.301908558Z E0306 08:54:09.301795       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.301972965Z I0306 08:54:09.301824       1 namespace.go:35] Namespace default updated. Check the labels.
2020-03-06T08:54:09.303030877Z E0306 08:54:09.302936       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.303042444Z I0306 08:54:09.302950       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:09.303269481Z E0306 08:54:09.303223       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.303279099Z I0306 08:54:09.303241       1 namespace.go:35] Namespace gloo-system updated. Check the labels.
2020-03-06T08:54:09.304960529Z E0306 08:54:09.304887       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.304969694Z I0306 08:54:09.304901       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:09.30806517Z I0306 08:54:09.308006       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:09.308084396Z I0306 08:54:09.308027       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:09.311007217Z I0306 08:54:09.310913       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:09.311023075Z I0306 08:54:09.310933       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:09.502958961Z E0306 08:54:09.502863       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.50298456Z E0306 08:54:09.502884       1 deployment.go:47] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.60600664Z E0306 08:54:09.605870       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.606036808Z E0306 08:54:09.605891       1 namespace.go:38] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.606141852Z I0306 08:54:09.605921       1 namespace.go:35] Namespace kube-node-lease updated. Check the labels.
2020-03-06T08:54:09.806225393Z E0306 08:54:09.806088       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:09.806310875Z I0306 08:54:09.806181       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:09.813444965Z I0306 08:54:09.812002       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:09.813462817Z I0306 08:54:09.812044       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:10.006451928Z E0306 08:54:10.006340       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:10.006477713Z I0306 08:54:10.006372       1 namespace.go:35] Namespace kube-system updated. Check the labels.
2020-03-06T08:54:10.205960063Z E0306 08:54:10.205840       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:10.206003051Z E0306 08:54:10.205861       1 deployment.go:47] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:10.406097815Z E0306 08:54:10.405997       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:10.406128992Z I0306 08:54:10.406026       1 namespace.go:35] Namespace goldilocks updated. Check the labels.
2020-03-06T08:54:10.606697871Z E0306 08:54:10.606565       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:10.606740591Z I0306 08:54:10.606587       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:10.612097942Z I0306 08:54:10.611978       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:10.612123112Z I0306 08:54:10.612004       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:10.806196286Z E0306 08:54:10.806035       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:11.00621263Z E0306 08:54:11.006047       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:11.006291528Z E0306 08:54:11.006067       1 deployment.go:47] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:11.20630322Z E0306 08:54:11.206084       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:11.206330755Z I0306 08:54:11.206103       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:11.211325097Z I0306 08:54:11.211229       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:11.211347081Z I0306 08:54:11.211251       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:11.405933452Z E0306 08:54:11.405775       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:11.405977406Z E0306 08:54:11.405794       1 deployment.go:47] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:11.606090255Z E0306 08:54:11.605982       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:11.806316709Z E0306 08:54:11.806153       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:12.006222692Z E0306 08:54:12.006026       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:12.206576658Z E0306 08:54:12.206450       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:12.206624953Z I0306 08:54:12.206470       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:12.212380215Z I0306 08:54:12.212202       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:12.212402402Z I0306 08:54:12.212227       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:12.406181637Z E0306 08:54:12.406044       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:12.406259203Z E0306 08:54:12.406064       1 deployment.go:47] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:12.60662579Z E0306 08:54:12.606519       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:12.606668418Z I0306 08:54:12.606539       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:12.618234304Z I0306 08:54:12.618132       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:12.61826273Z I0306 08:54:12.618155       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:12.805919468Z E0306 08:54:12.805803       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:12.805958876Z E0306 08:54:12.805824       1 deployment.go:47] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:13.006123914Z E0306 08:54:13.006002       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:13.206243626Z E0306 08:54:13.206132       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:13.406385704Z E0306 08:54:13.406237       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:13.606362475Z E0306 08:54:13.606217       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:13.806139947Z E0306 08:54:13.805992       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:13.806206373Z I0306 08:54:13.806014       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:13.811452879Z I0306 08:54:13.811322       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:13.811472314Z I0306 08:54:13.811344       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:14.010711808Z E0306 08:54:14.010586       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:14.010752383Z E0306 08:54:14.010607       1 deployment.go:47] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:14.205986618Z E0306 08:54:14.205858       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:14.20603133Z I0306 08:54:14.205878       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T08:54:14.211256844Z I0306 08:54:14.211091       1 vpa.go:96] There are 4 deployments in the namespace
2020-03-06T08:54:14.211274821Z I0306 08:54:14.211111       1 vpa.go:190] Creating vpa: discovery
2020-03-06T08:54:14.405999092Z E0306 08:54:14.405878       1 vpa.go:194] Error creating vpa: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T08:54:14.406029534Z E0306 08:54:14.405899       1 deployment.go:47] Error reconciling: the server could not find the requested resource (post verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T09:02:41.797630821Z I0306 09:02:41.797247       1 namespace.go:35] Namespace sc-victoria-metrics-single updated. Check the labels.
2020-03-06T09:02:41.799313096Z E0306 09:02:41.799178       1 vpa.go:142] the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
2020-03-06T09:02:41.7993278Z I0306 09:02:41.799195       1 vpa.go:56] Namespace is labelled for goldilocks.
2020-03-06T09:02:41.804216542Z I0306 09:02:41.804109       1 vpa.go:96] There are 0 deployments in the namespace
2020-03-06T09:02:41.804250844Z I0306 09:02:41.804130       1 vpa.go:107] All VPAs are in sync.
2020-03-06T09:02:41.804255833Z I0306 09:02:41.804137       1 vpa.go:122] No VPAs to delete.

Return existing Resource Requests/Limits

Goldilocks needs to send the existing requests and limits for each container as part of the data it returns to the dashboard (and the JSON it outputs). This will enable us to create a rating or score for each deployment.

Score each deployment

Generate some sort of score for each deployment based on how close its current requests/limits are to the recommended values.

Summary only returning current request & limit data for last container in a pod

When a pod contains multiple containers, the summary function (and subsequently the dashboard) only returns the current request and limit values for the last container in the deployment spec.
For example, take the following deployments:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: debian:stable-slim
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test2
  labels:
    app: test2
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: test2
  template:
    metadata:
      labels:
        app: test2
    spec:
      containers:
      - name: test2a
        image: debian:stable-slim
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
      - name: test2b
        image: debian:stable-slim
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          limits:
            cpu: 50m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 64Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test3
  labels:
    app: test3
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: test3
  template:
    metadata:
      labels:
        app: test3
    spec:
      containers:
      - name: test3b
        image: debian:stable-slim
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          limits:
            cpu: 50m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 64Mi
      - name: test3a
        image: debian:stable-slim
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi

This set of deployments results in the following summary; note the null values for requests and limits:

{
  "deployments": [
    {
      "containers": [
        {
          "lowerBound": {
            "cpu": "25m",
            "memory": "262144k"
          },
          "upperBound": {
            "cpu": "48m",
            "memory": "262144k"
          },
          "target": {
            "cpu": "25m",
            "memory": "262144k"
          },
          "uncappedTarget": {
            "cpu": "25m",
            "memory": "262144k"
          },
          "limits": {
            "cpu": "1",
            "memory": "2Gi"
          },
          "requests": {
            "cpu": "500m",
            "memory": "1Gi"
          },
          "containerName": "test"
        }
      ],
      "deploymentName": "test",
      "namespace": "default"
    },
    {
      "containers": [
        {
          "lowerBound": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "upperBound": {
            "cpu": "49m",
            "memory": "131072k"
          },
          "target": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "uncappedTarget": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "limits": null,
          "requests": null,
          "containerName": "test3b"
        },
        {
          "lowerBound": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "upperBound": {
            "cpu": "49m",
            "memory": "131072k"
          },
          "target": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "uncappedTarget": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "limits": {
            "cpu": "1",
            "memory": "2Gi"
          },
          "requests": {
            "cpu": "500m",
            "memory": "1Gi"
          },
          "containerName": "test3a"
        }
      ],
      "deploymentName": "test3",
      "namespace": "default"
    },
    {
      "containers": [
        {
          "lowerBound": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "upperBound": {
            "cpu": "48m",
            "memory": "131072k"
          },
          "target": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "uncappedTarget": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "limits": null,
          "requests": null,
          "containerName": "test2a"
        },
        {
          "lowerBound": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "upperBound": {
            "cpu": "48m",
            "memory": "131072k"
          },
          "target": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "uncappedTarget": {
            "cpu": "12m",
            "memory": "131072k"
          },
          "limits": {
            "cpu": "50m",
            "memory": "128Mi"
          },
          "requests": {
            "cpu": "10m",
            "memory": "64Mi"
          },
          "containerName": "test2b"
        }
      ],
      "deploymentName": "test2",
      "namespace": "default"
    }
  ],
  "namespaces": [
    "default"
  ]
}

You'll see that:

  • In deployment "test", container "test" (the only container) has valid requests and limits.
  • In deployment "test2", container "test2a" has null requests and limits because it is not the last container in the pod spec.
  • In deployment "test3", container "test3b" has null requests and limits because it is also not the last container in the pod spec, even though its configuration is identical to "test2b".

The dashboard subsequently shows "Not Set" for those values:
[Screenshot: Goldilocks dashboard showing "Not Set" requests and limits]

Add container name exclusion

Add a label that allows excluding containers by name, for example fairwinds.com/goldilocks/exclude-container=linkerd-proxy.
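Note that Kubernetes label keys allow at most one slash (an optional DNS prefix plus a name), so the key would need a shape more like the one below. A hypothetical sketch of the proposed exclusion label on a namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    goldilocks.fairwinds.com/enabled: "true"
    goldilocks.fairwinds.com/exclude-container: "linkerd-proxy"   # hypothetical key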

Import k8s oidc library

The CLI create-vpas command won't work if someone is using OIDC credentials. The correct auth-plugin library needs to be imported (most likely a blank import of client-go's auth plugins, k8s.io/client-go/plugin/pkg/client/auth, so that the OIDC provider is registered).

Namespace reconciliation doesn't work when deleting deployments

To re-create:

  1. Label a namespace for Goldilocks that has multiple deployments
  2. Delete one of the deployments

Expected behavior:

Goldilocks should delete the VPA for the deployment that was deleted. Instead, it gives these log messages:

goldilocks-controller-55c8778f64-mrw8n goldilocks I0812 16:48:55.115241       1 controller.go:200] deployment/demo/load-generator has been deleted.
goldilocks-controller-55c8778f64-mrw8n goldilocks E0812 16:48:55.235581       1 client.go:105] Error getting namespace for deployment : resource name may not be empty
goldilocks-controller-55c8778f64-mrw8n goldilocks I0812 16:48:56.290717       1 vpa.go:104] Deleting all owned VPAs in namespace: &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NamespaceSpec{Finalizers:[],},Status:NamespaceStatus{Phase:,},}
goldilocks-controller-55c8778f64-mrw8n goldilocks E0812 16:48:56.581989       1 vpa.go:173] Error deleting vpa: an empty namespace may not be set when a resource name is provided
goldilocks-controller-55c8778f64-mrw8n goldilocks E0812 16:48:56.816199       1 vpa.go:173] Error deleting vpa: an empty namespace may not be set when a resource name is provided

Support Daemonsets

Currently Goldilocks will not create VPAs for DaemonSets. We should enable this; see the sketch after the list below.

Possible parts of this feature:

  • Enable or disable the DaemonSet controller with a flag
  • Container exclusions should apply to DaemonSets the same way they do to Deployments
  • Enable or disable Deployments with a flag, setting a pattern for additional types
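The upstream VPA can already target a DaemonSet, so most of the work is in the controller itself. A sketch of the recommendation-only VPA Goldilocks might create for a DaemonSet (all names illustrative):

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: goldilocks-node-exporter   # illustrative name
  namespace: monitoring
spec:
  targetRef:
    apiVersion: apps/v1
    kind: DaemonSet
    name: node-exporter
  updatePolicy:
    updateMode: "Off"   # recommendation only, matching the Deployment behavior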

Namespace annotation updateMode does not work?

Hello,

maybe I am missing something, but here is my problem: I changed the namespace labels as mentioned on GitHub in order to turn on "auto" mode. The namespace labels now look like this (I tried both "auto" and "Auto"):

labels:
  goldilocks.fairwinds.com/enabled: "true"
  goldilocks.fairwinds.com/vpa-update-mode: "Auto"

But all my already-created VPAs still remain in the same mode. To resolve it I tried:

  • Deleting the existing VPAs
  • Restarting the controller

Just let me know if I need to provide more information.

Cheers,
Thomas

Leader Election

Use the client-go leader election library to enable the safe use of multiple controller pods.
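client-go's leader election coordinates through a shared lock object, typically a coordination.k8s.io Lease that only one controller pod holds at a time. A sketch of what that lock might look like in the cluster (all names illustrative):

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: goldilocks-controller-lock   # illustrative lock name
  namespace: goldilocks
spec:
  holderIdentity: goldilocks-controller-55c8778f64-mrw8n   # the current leader pod
  leaseDurationSeconds: 15
  leaseTransitions: 3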

Get goreportcard to 100%

└─ goreportcard-cli  -v
Grade: A+ (99.5%)
Files: 24
Issues: 1
gofmt: 100%
go_vet: 100%
gocyclo: 95%
        pkg/dashboard/helpers.go
                Line 30: warning: cyclomatic complexity 17 of function getStatus() is high (> 15) (gocyclo)
                Line 76: warning: cyclomatic complexity 16 of function getStatusRange() is high (> 15) (gocyclo)
golint: 100%
ineffassign: 100%
license: 100%
misspell: 100%

Potential RBAC Support?

First, great project! I have been doing something similar at work, since we enforce requests and limits for all deployments but many developers have no idea what values to use. In our case we have a dev cluster that doesn't require requests and limits in the deployment, and developers watch Grafana over a period of time to get a base value for the settings. Not ideal, and Goldilocks is a much better solution.

My only question / request: is there any chance of RBAC support on the web report? Maybe use auth headers to limit which namespaces are shown in the report, so developers would only see namespaces they have permission to access?

Create /health endpoint

Currently the liveness and readiness probes use /, which causes the dashboard to constantly query the cluster. This is excessive and unnecessary. Create a /health endpoint to avoid this.
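Once such an endpoint exists, the probes can point at it instead of /. A sketch of the probe blocks (the container port is an assumption):

livenessProbe:
  httpGet:
    path: /health   # cheap endpoint that does not query the cluster
    port: 8080      # assumed dashboard container port
readinessProbe:
  httpGet:
    path: /health
    port: 8080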

Logo

Get a logo and generate assets such as favicon and top-left logo.

Zero-state

If there are no workloads to report on, we should show a message to that effect. Maybe include instructions on how to get something to show up (e.g. wait a while, run a load generator, install VPA)

Update README

The readme could use the following:

  • Deployment info (using the /hack folder)
  • Dashboard command and info about it
  • Document container exclusions

Clarify the recommendation of not using the CLI?

The heading and description for the CLI state that it is not recommended for use. I can see the desire to have the application run alongside deployed pods, but my first thought upon reading the heading was "why not?"

Adding a clarifying statement to the recommendation of the dashboard over the CLI would go a long way toward explaining to users why the CLI exists.

Can I get recommendations on pods created by an operator?

Hi,

My apologies if this is not the right place to ask questions. Please point me to the right place.

I am wondering how I can get resource recommendations if my pods are created by an operator instead of a Deployment.

Specifically, I am using a database called ArangoDB, and its pods are created by an operator; there is no Deployment controlling the pods.

Thanks a lot for the help.

Eric

Prometheus as a history provider for the VPA recommender

Hello

I deployed Goldilocks with Helm.

I would like to know how to configure this block in the values of the Goldilocks Helm chart:

spec:
  containers:

Here are the Goldilocks values:
# installVPA -- Whether or not to install the VPA controller from the vpa repository. Only installs the recommender. If enabled on upgrades, it will also upgrade the VPA to the version specified.
installVPA: true

# uninstallVPA -- Used to uninstall the vpa controller.
uninstallVPA: false

# reinstallVPA -- Used to upgrade or reinstall the VPA. Enables both the uninstall and install hooks.
reinstallVPA: false

# vpaVersion -- The git ref to install VPA from.
vpaVersion: e16a0adef6c7d79a23d57f9bbbef26fc9da59378

image:
  # image.repository -- Repository for the goldilocks image
  repository: quay.io/fairwinds/goldilocks
  # image.tag -- The goldilocks image tag to use
  tag: v2.2.0
  # pullPolicy -- imagePullPolicy - Highly recommended to leave this as Always
  pullPolicy: Always

nameOverride: ""
fullnameOverride: ""

controller:
  # controller.enabled -- Whether or not to install the controller deployment
  enabled: true
  rbac:
    # controller.rbac.create -- If set to true, rbac resources will be created for the controller
    create: true
  serviceAccount:
    # controller.serviceAccount.create -- If true, a service account will be created for the controller. If set to false, you must set controller.serviceAccount.name
    create: true
    # controller.serviceAccount.name -- The name of an existing service account to use for the controller. Combined with controller.serviceAccount.create
    name:
  # controller.flags -- A map of additional flags to pass to the controller
  flags: {}
  # controller.logVerbosity -- Controller log verbosity. Can be set from 1-10 with 10 being extremely verbose
  logVerbosity: "2"
  # controller.nodeSelector -- Node selector for the controller pod
  nodeSelector: {}
  # controller.tolerations -- Tolerations for the controller pod
  tolerations: []
  # controller.affinity -- Affinity for the controller pods
  affinity: {}
  # controller.resources -- The resources block for the controller pods
  resources:
    limits:
      cpu: 25m
      memory: 32Mi
    requests:
      cpu: 25m
      memory: 32Mi

dashboard:
  # dashboard.enabled -- If true, the dashboard component will be installed
  enabled: true
  # dashboard.replicaCount -- Number of dashboard pods to run
  replicaCount: 2
  service:
    # dashboard.service.type -- The type of the dashboard service
    type: ClusterIP
    # dashboard.service.port -- The port to run the dashboard service on
    port: 80
  # dashboard.logVerbosity -- Dashboard log verbosity. Can be set from 1-10 with 10 being extremely verbose
  logVerbosity: "2"
  # dashboard.excludeContainers -- Container names to exclude from displaying in the Goldilocks dashboard
  excludeContainers: "linkerd-proxy,istio-proxy"
  rbac:
    # dashboard.rbac.create -- If set to true, rbac resources will be created for the dashboard
    create: true
  serviceAccount:
    # dashboard.serviceAccount.create -- If true, a service account will be created for the dashboard. If set to false, you must set dashboard.serviceAccount.name
    create: true
    # dashboard.serviceAccount.name -- The name of an existing service account to use for the controller. Combined with dashboard.serviceAccount.create
    name:
  # dashboard.basePath -- Sets the web app's basePath/base href
  basePath:
  ingress:
    # dashboard.ingress.enabled -- Enables an ingress object for the dashboard.
    enabled: false
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: chart-example.local
        paths: []
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local
  # dashboard.resources -- A resources block for the dashboard.
  resources:
    limits:
      cpu: 25m
      memory: 32Mi
    requests:
      cpu: 25m
      memory: 32Mi

nodeSelector: {}
tolerations: []
affinity: {}
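On the actual question of Prometheus as a history provider: that is configured on the upstream VPA recommender rather than on Goldilocks itself. The recommender accepts --storage=prometheus and --prometheus-address flags, so one approach is to patch the recommender deployment that the chart installs. A sketch of the relevant container spec (image tag and Prometheus address are assumptions):

spec:
  containers:
    - name: recommender
      image: k8s.gcr.io/autoscaling/vpa-recommender:0.9.2   # assumed tag
      args:
        - --storage=prometheus
        - --prometheus-address=http://prometheus-server.monitoring.svc:9090   # assumed address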

Inconsistency between the values configured on my cluster and those shown on the dashboard

Hello,

on my cluster, I configured the resource requests/limits for my component as follows:

resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: 200m
    memory: 300m

and here is what I observe in the Goldilocks dashboard:

              Current   Guaranteed
CPU Request   200m      25m
CPU Limit     1000m     25m
Mem Request   1024Mi    250Mi
Mem Limit     1024Mi    250Mi

I do not understand why the Current values differ from what I configured. Can you explain?

Support StatefulSets

Similarly to #84, it would be really useful to have support for pods descending from StatefulSet resources.

Provide Selectable QoS Class

Somehow find a way to offer the Guaranteed QoS class as an option, with the target being used as both the request and the limit, while still showing the upper bound.
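For context, Kubernetes assigns the Guaranteed QoS class only when every container's requests equal its limits, so this feature would amount to emitting the VPA target as both values. A sketch of the resulting resources block (values illustrative):

resources:
  requests:
    cpu: 25m        # VPA target used as the request...
    memory: 250Mi
  limits:
    cpu: 25m        # ...and as the limit, which yields the Guaranteed QoS class
    memory: 250Mi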

Helm Chart

There should be a Helm chart that deploys the VPA, Prometheus, and Goldilocks.

Not able to expose the Dashboard through Ingress resource - traefik

Wanted to do a POC of Goldilocks for our Kubernetes cluster. I tried exposing the dashboard outside the cluster through a Traefik ingress resource; attaching the YAML for reference.
Also, port-forwarding sometimes gives an empty response.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  creationTimestamp: "2020-03-31T11:58:15Z"
  generation: 3
  labels:
    app.kubernetes.io/component: dashboard
    app.kubernetes.io/name: goldilocks
    build.env: np
    project.vsad: go0v
  name: goldilocks-ingress
  namespace: goldilocks
  resourceVersion: "36672151"
  selfLink: /apis/extensions/v1beta1/namespaces/goldilocks/ingresses/goldilocks-ingress
  uid: 56eb0dc6-41a3-4e98-a9d8-8b8888d1ed07
spec:
  rules:
  - host: goldilocks-kube-np.ebiz.verizon.com
    http:
      paths:
      - backend:
          serviceName: goldilocks-gateway
          servicePort: 80
        path: /
status:
  loadBalancer: {}

Use annotations instead of labels for namespaces

There is a general desire to move to annotations. I think we should go this route (see the sketch below):

  1. Release a major version that adds annotation support and starts warning about the deprecation of labels.
  2. At some later date, fully deprecate the labels and release a minor or major version. TBD
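A sketch of what the annotation form might look like, assuming the keys simply mirror today's labels (hypothetical until the migration lands):

apiVersion: v1
kind: Namespace
metadata:
  name: demo
  annotations:
    goldilocks.fairwinds.com/enabled: "true"              # hypothetical annotation keys
    goldilocks.fairwinds.com/vpa-update-mode: "auto"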
