
anchore / anchore-charts

Helm charts for Anchore tools and services

Home Page: http://charts.anchore.io

License: Apache License 2.0

Shell 1.37% Smarty 10.12% Mustache 8.06% Dockerfile 0.09% Python 80.36%
helm-charts helm kubernetes security security-vulnerability-assessment

anchore-charts's Introduction

This repository is deprecated and no longer maintained.

If you're looking for a host-local container vulnerability scanner, see our new projects:

Software Bill of Materials for Containers: Syft

Container Vulnerability Scanning: Grype

anchore-charts's People

Contributors

adawalli, anchoreops, asomya, bhearn7, blang9238, bradleyjones, brandtkeller, btodhunter, chrisad2, dakaneye, davidkarlsen, flickerfly, found-it, hn23, kaizhe, keohn-aanchore, kishorb, kroussou, mjnagel, ndegory, pbalogh-sa, pvnovarese, rabadin, saisatishkarra, sfroment, step-security-bot, svietry, vijay-p, westonsteimel, zhill


anchore-charts's Issues

Cannot set different internal port for admission-controller.

When I configure an internal port other than 443 in values, it doesn't work: the admission-controller still uses the default port 443 because secure-port isn't set. The readiness probe then fails, since the pod exposes the configured port while the admission-controller listens on the default 443.

fixed in: #7

Support enterprise license updates

Currently you cannot easily roll out a new license to enterprise deployments without making changes to the deployment spec. Add a value to the anchoreEnterpriseGlobal section for forcing license updates.
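A hypothetical sketch of what such a value could look like; the key name licenseForceUpdate and the checksum approach are assumptions for illustration, not the shipped chart options:

```yaml
# Sketch only -- key names are assumptions, not actual chart values.
anchoreEnterpriseGlobal:
  # Existing secret holding the enterprise license
  licenseSecretName: anchore-enterprise-license
  # Proposed: change this value (e.g. a checksum of the license file)
  # to force pods to restart and pick up the updated license secret
  licenseForceUpdate: "sha256-of-license-file"
```

Rolling the value into a pod annotation would cause the deployment to restart whenever the license changes, similar to the common checksum-of-configmap pattern.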

Update Controller API version

Receiving the following error when trying to deploy the chart on Helm v3:

Error: unable to build kubernetes objects from release manifest: unable to recognize no matches for kind "Deployment" in version "extensions/v1beta1"

The issue is resolved when the webhook deployment template is updated to admissionregistration.k8s.io/v1. This also requires declaring the following in the webhook config: admissionReviewVersions and sideEffects
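A minimal sketch of the updated webhook configuration; the webhook name, service reference, and path below are illustrative, but admissionReviewVersions and sideEffects are genuinely required fields in the v1 API:

```yaml
# Sketch of the admissionregistration.k8s.io/v1 form; names are illustrative.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-anchore-admission-controller.admission.anchore.io
webhooks:
  - name: example-anchore-admission-controller.admission.anchore.io
    admissionReviewVersions: ["v1", "v1beta1"]  # required in v1
    sideEffects: None                           # required in v1
    clientConfig:
      service:
        name: kubernetes
        namespace: default
        path: /apis/admission.anchore.io/v1beta1/imagechecks
    rules:
      - apiGroups: [""]
        apiVersions: ["*"]
        operations: ["CREATE"]
        resources: ["pods"]
```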

Chart install times out when using helm install --repo reference

Hi, I'm having some issues installing the admission controller. The helm install hangs and the debug output doesn't say much, but if I manually inspect with kubectl, I see:

# kubectl describe job admission-init-ca
...
Events:
  Type     Reason        Age                   From            Message
  ----     ------        ----                  ----            -------
  Warning  FailedCreate  119s (x5 over 4m29s)  job-controller  Error creating: Internal error occurred: failed calling admission webhook "admission-anchore-admission-controller.admission.anchore.io": the server could not find the requested resource

# kubectl describe ValidatingWebhookConfiguration admission-anchore-admission-controller.admission.anchore.io

Name:         admission-anchore-admission-controller.admission.anchore.io
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"admissionregistration.k8s.io/v1beta1","kind":"ValidatingWebhookConfiguration","metadata":{"annotations":{},"name":"admissio...
API Version:  admissionregistration.k8s.io/v1beta1
Kind:         ValidatingWebhookConfiguration
Metadata:
  Creation Timestamp:  2019-02-03T14:39:49Z
  Generation:          1
  Resource Version:    623778
  Self Link:           /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations/admission-anchore-admission-controller.admission.anchore.io
  UID:                 8f5a7ff6-27c1-11e9-a425-0a002ac40f5c
Webhooks:
  Client Config:
    Ca Bundle:  <redacted>
    Service:
      Name:        kubernetes
      Namespace:   default
      Path:        /apis/admission.anchore.io/v1beta1/imagechecks
  Failure Policy:  Fail
  Name:            admission-anchore-admission-controller.admission.anchore.io
  Namespace Selector:
  Rules:
    API Groups:

    API Versions:
      *
    Operations:
      CREATE
    Resources:
      pods
Events:  <none>

It's worth mentioning that there have been a few reinstalls of the chart at this point. It was working initially, but I'm seeing this after running helm delete --purge and running the cleanup script.

As a side note, it might be that the ValidatingWebhookConfiguration object also needs adding to the cleanup script.

NOTE: tested again referencing the chart from a local git clone and didn't encounter this problem. Does charts.anchore.io need updating?

SAML secret should be put in kubernetes secret

Currently in stable/anchore-engine/templates/engine_configmap.yaml, the SAML secret is placed via a hard-coded values.yaml value. It should be possible to source this from a Kubernetes secret instead.

    # Locations for keys used for signing and encryption. Only one of 'secret' or 'public_key_path'/'private_key_path' needs to be set. If all are set then the keys take precedence over the secret value
    # Secret is for a shared secret and if set, all components in anchore should have the exact same value in their configs.
    keys:
     >>> secret: {{ .Values.anchoreGlobal.saml.secret }}<<<
      {{- with .Values.anchoreGlobal.saml.publicKeyName }}
      public_key_path: /home/anchore/certs/{{- . }}
      {{- end }}
      {{- with .Values.anchoreGlobal.saml.privateKeyName }}
      private_key_path: /home/anchore/certs/{{- . }}
      {{- end }}
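One possible approach, sketched here as an assumption rather than the chart's actual implementation, is to inject the shared secret from a Kubernetes Secret via an environment variable on the pod spec (the secret and key names below are illustrative):

```yaml
# Sketch: source the SAML shared secret from a Kubernetes Secret.
# Secret/key names are illustrative assumptions.
env:
  - name: ANCHORE_SAML_SECRET
    valueFrom:
      secretKeyRef:
        name: anchore-saml-secret
        key: samlSecret
```

The config template could then reference the environment variable rather than the raw value, assuming Anchore's configuration loader supports environment-variable expansion.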

Cannot install admission controller using helm3

When running helm3 -n anchore install anchore-democtl -f ctl_values.yaml . it fails with the following error:

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(ValidatingWebhookConfiguration): unknown field "labels" in io.k8s.api.admissionregistration.v1beta1.ValidatingWebhookConfiguration

Horizontal Pod Autoscaling Support

Add support for HPA to the Chart templates for enterprise. The following snippet was created by a user for Anchore Engine in the template:

---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-hpa-analyzer
  labels:
    app: {{ .Release.Name }}-anchore-engine
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  maxReplicas: {{ .Values.anchoreHorizontalAutoScaler.maxReplicas }}
  minReplicas: {{ .Values.anchoreHorizontalAutoScaler.minReplicas }}
  targetCPUUtilizationPercentage: {{ .Values.anchoreHorizontalAutoScaler.targetCpu }}
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-anchore-engine-analyzer
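The template above implies a corresponding values.yaml section along these lines (key names taken from the snippet; the example numbers are placeholders):

```yaml
# Assumed values.yaml fragment matching the user-provided HPA template.
anchoreHorizontalAutoScaler:
  minReplicas: 1
  maxReplicas: 5
  targetCpu: 80   # target CPU utilization percentage
```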

Add helm/charts/stable/anchore-engine to this repository

Our official stable/anchore-engine chart will need to be migrated to this chart repository in preparation for the helm2 deprecation.

The stable chart repository will be deprecated to security only fixes on May 13, 2020 and completely removed on November 13, 2020.

Vendor in dependent charts

To allow the chart to be installed on kubernetes v1.15+ the dependent charts need to be unpacked and the API versions changed from extensions/v1beta1 to apps/v1. This preserves seamless upgrades for existing users while allowing the chart to be installed on kubernetes v1.15 and higher.

Add chart testing process

All charts in this repository should go through a unified e2e chart testing process.

There is an example of a chart testing process utilizing GitHub Actions for the admission-controller chart. We would like this process to initially be set up in CircleCI so that it uses the same system as all other projects. If the CircleCI process is too complex or requires too much custom scripting/setup, we can consider using GitHub Actions instead.

The initial e2e test setup should use KIND to stand up k8s clusters utilizing multiple versions of k8s and ensure that the chart installs as expected.

Any reason for the Recreate strategy for all anchore application pods in the helm charts

Hi Team,

I am using the Anchore helm charts and observed that the application pods use the Recreate strategy rather than the RollingUpdate strategy. The RollingUpdate strategy is very helpful for reducing downtime during upgrades. I just want to know if there is a reason for this.

For example, analyzer pod has recreate strategy in the below file on line number 25.

https://github.com/anchore/anchore-charts/blob/master/stable/anchore-engine/templates/analyzer_deployment.yaml
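Recreate is often chosen deliberately for stateful components (for example, to avoid two analyzer versions running during an upgrade), but if the maintainers agree a configurable strategy is safe, a sketch of a values-driven override could look like this (the value name anchoreAnalyzer.deploymentStrategy is an assumption, not an existing chart option):

```yaml
# Sketch: make the deployment strategy configurable.
# .Values.anchoreAnalyzer.deploymentStrategy is an assumed value name.
spec:
  strategy:
    type: {{ .Values.anchoreAnalyzer.deploymentStrategy | default "Recreate" }}
```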

Option for using cert-manager for TLS / clarify docs

https://github.com/anchore/anchore-charts/blob/master/stable/anchore-admission-controller/templates/init-ca/init-ca-script.yaml seems to have some manual jobs that are applied for generating TLS certificates, and it requires internet access since it relies on downloads, apt, etc. This is a bit of a hassle and tricky in air-gapped environments where you might need to go through a proxy.

Depending on the requirements for this certificate (what is it used for, and why), this could perhaps be made smoother by using cert-manager for certificate issuing and rotation. Maybe it's even better to handle this outside of the chart and simply refer to an existing secret (which could be generated by any means, including cert-manager).

Background in slack: https://anchorecommunity.slack.com/archives/C4PJFNEEM/p1575834841399400
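A rough sketch of the cert-manager alternative suggested above; the resource names, DNS name, and Issuer are all assumptions for illustration:

```yaml
# Sketch: let cert-manager issue and rotate the webhook TLS certificate.
# All names below are illustrative assumptions.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: anchore-admission-controller-tls
spec:
  secretName: anchore-admission-controller-tls  # chart would reference this existing secret
  dnsNames:
    - anchore-admission-controller.default.svc
  issuerRef:
    name: selfsigned-issuer   # assumed pre-existing Issuer in the namespace
    kind: Issuer
```

The chart would then only need a value pointing at the existing secret, leaving issuance and rotation entirely to cert-manager.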

Document migration process

The migration process for moving from legacy/anchore-engine and helm/charts/stable/anchore-engine to anchore/anchore-charts/stable/enterprise should be documented.

Umbrella Issue for new Harbor Chart

We will be refactoring the Kubernetes admission controller chart

- [ ] Create chart for use in Kubernetes v1.16 and above
- [ ] Create documentation (to include use cases for minikube and kind)
- [ ] TBD

Inconsistent Secret Naming Convention in Helm Charts

In almost all cases within your helm charts, you will see secrets with the option to pull from the existingSecret secret, like this:

name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreGlobal.existingSecret }}

However, in the case of the Enterprise Feeds, you will see the following template used instead

name: {{ template "anchore-engine.fullname" . }}

This is extra confusing since the .feedsDbPassword is an option that is documented here: https://github.com/anchore/anchore-charts/blob/e0e753ca42d6b5ec041d9923e60062e5bf5a0c54/stable/anchore-engine/templates/secrets.yaml

Can you please fix the chart so that feeds can pull from existingSecret?
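Presumably the feeds secret reference just needs the same existingSecret fallback pattern used by the other components, something along these lines (the exact values path for the feeds override is an assumption):

```yaml
# Sketch: apply the same existingSecret fallback used elsewhere in the charts.
# The values path .Values.anchoreEnterpriseFeeds.existingSecret is assumed.
name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreEnterpriseFeeds.existingSecret }}
```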

Refactor/improve chart

Before solving other issues I'd like to refactor the chart to follow best practices. The rbac.yaml contains this rolebinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: kube-system
  name: extension-{{ template "anchore-admission-controller.fullname" . }}-authentication-reader-default
roleRef:
  kind: Role
  apiGroup: rbac.authorization.k8s.io
  name: extension-api{{ template "anchore-admission-controller.fullname" . }}-authentication-reader
subjects:
- kind: ServiceAccount
  name: {{ template "anchore-admission-controller.fullname" . }}

which I find a bit strange as the role is not defined.

Is it really for binding to the roles defined in https://kubernetes.io/docs/tasks/access-kubernetes-api/ ?

I also see a cluster-rolebinding to a CR named namespace-reservation-{{ template "anchore-admission-controller.fullname" . }} - this does not exist, while {{ template "anchore-admission-controller.fullname" . }} does - is it a misspelling?

Update postgresql dependency

The postgresql dependent chart should be updated from v1.0.0 to the latest available postgresql chart. This upgrade has been avoided because of backwards-compatibility issues for deployments using v1.0.0, given the required upgrade path for the postgresql chart.

The migration to a new chart repository will be a good time to update the dependencies of stable/anchore-engine in conjunction with the upgrade to using the Chart apiVersion: v2 for Helm3 support.

Anchore Helm Chart for Enterprise UI Ignores "existingSecret" Postgres Password

Throughout nearly all of the Helm Chart for Anchore, the existingSecret is properly respected for pulling secrets. However, in the case of the Enterprise UI, this existing secret is completely ignored for the Postgres Password, and .Values.postgresql.postgresPassword is used (which won't be the right password).

https://github.com/anchore/anchore-charts/blob/master/stable/anchore-engine/templates/enterprise_ui_config_secret.yaml#L55

Can the UI deployment please read this password from the existing secret instead of insecurely reading only from values?

Metrics of the Analyzer

Hello, I have just installed the Anchore Helm chart and I am very interested in having access to the metrics.
I see that we can have access to the metrics of each component by accessing the /metrics path.
In my case, I have deployed Anchore using an ALB ingress, everything is working fine, but I need a service for the analyzer, that does not exist.
If I'm not wrong, the only way to access the analyzer metrics is to create an extra service.
Am I missing something?

Thank you very much in advance.

Support specifying k8s service account in values.yaml

Hello Anchore!

I was wondering if it'd be possible to support specifying k8s service account for anchore deployments.
I'm not too familiar with helm, but perhaps my suggested fix is to add these lines under the pod spec, so that we can specify in values.yaml which k8s service account to run deployments with.

      {{- with .Values.anchoreGlobal.serviceAccountName }}
      serviceAccountName: {{ . }}
      {{- end }}

https://github.com/anchore/anchore-charts/blob/master/stable/anchore-engine/templates/analyzer_deployment.yaml#L45

Thanks,

[stable/feeds-service]

The stable/feeds-service chart should use the latest bitnami postgres chart as a dependency.

Update chart dependencies

The stable/enterprise chart dependencies should use the latest bitnami postgres and redis charts from helm hub.

Configure custom endpoint for feeds service

Currently there is no way in the chart to specify a feeds service that is deployed externally to the chart. This functionality should be added using a conditional like postgres/redis support.
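A hedged sketch of such a conditional, mirroring the postgres/redis pattern; the value names and the template helper below are assumptions, not existing chart options:

```yaml
# values.yaml sketch (names assumed):
feeds:
  enabled: true            # deploy the bundled feeds service
  externalEndpoint: ""     # used instead when enabled is false

# template sketch (helper name assumed):
{{- if .Values.feeds.enabled }}
feeds_url: http://{{ template "anchore.feeds.fullname" . }}:8448/v1/feeds
{{- else }}
feeds_url: {{ .Values.feeds.externalEndpoint }}
{{- end }}
```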

Remove specific defaults and add notes that user input is explicitly required.

Based on a comment from @nurmi, there was a proposal to remove certain default values from the Helm chart and explicitly require the user to set them manually. The goal is to prevent users from deploying an instance of Anchore with too few resources, which is always a risk if defaults are assumed to work equally well for evals, POCs, and production.

automate webhook registration and improve policy choice

https://github.com/anchore/anchore-charts/blob/master/stable/anchore-admission-controller/files/get_validating_webhook_config.sh seems to contain some manual steps for registering the webhook. This should be documented in the README.md

Additionally could this simply be templated and handled by helm alongside the rest of the chart?

Further I think it makes sense to set the failurePolicy to Ignore in order to avoid the control-plane malfunctioning if the controller should start failing.

[anchore-engine] Alter upgrade process for Anchore deployments to use either hooks or init containers

Relevant once the chart has migrated to this repo from stable.

Anchore upgrades itself, including db schema updates, but those can take time, so a separate, non-service bound, entity in the deployment should handle the upgrade. Could use upgrade hooks, jobs, or init-containers to accomplish this, but the requirements for the process are:

  • All engine services must be down prior to upgrade operation
  • During db upgrade no services are started
  • On successful upgrade, the services are started
  • On a failed upgrade, the upgrade can be retried but rollback will require a restore of old db version manually (not part of the chart most likely). Upgrade failures can leave the db in an intermediate stage, and a full upgrade operation must be successfully completed prior to system startup.
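A minimal sketch of the pre-upgrade hook approach; the image reference and value names are assumptions, and this sketch does not by itself satisfy the requirement that all engine services be down during the upgrade:

```yaml
# Sketch: run the db upgrade as a Helm pre-upgrade hook Job.
# Image/value names are assumptions; scaling services down is not handled here.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-engine-upgrade
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: upgrade
          image: {{ .Values.anchoreGlobal.image }}   # assumed value path
          command: ["anchore-manager", "db", "upgrade", "--dontask"]
```

On a hook failure Helm aborts the upgrade, which matches the retry-and-manual-restore behavior described above.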

Document migration process

The migration process from helm/charts/stable/anchore-engine to the anchore-charts/stable/anchore-engine chart needs to be documented to make the process as simple as possible for existing users.

Chart missing affinity and tolerations

Hi. According to the values.yaml file there should be affinity and tolerations, but they are missing in the deployment template. Can you please add them?
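The conventional pattern for wiring these values into a deployment's pod spec looks roughly like this (the values paths and indentation depth are assumptions that depend on the chart's structure):

```yaml
# Sketch: pass affinity/tolerations from values into the pod spec.
# Values paths and nindent depth are assumptions.
      {{- with .Values.affinity }}
      affinity: {{ toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations: {{ toYaml . | nindent 8 }}
      {{- end }}
```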

Document migration process

The migration process for moving from legacy/anchore-engine and helm/charts/stable/anchore-engine to anchore/anchore-charts/stable/anchore-engine should be documented.

Document migration process

The migration process for moving from legacy/anchore-engine and helm/charts/stable/anchore-engine to anchore/anchore-charts/stable/feeds-service should be documented.

Create a chart for deploying the anchore feeds service

Currently the feeds service is deployed using the same chart as anchore-engine/enterprise. This deployment should be self-contained in its own chart.

The chart should deploy the anchore feeds service with all options available for configuration. The deployment should interface seamlessly with the anchore-engine/enterprise charts.

Fix the admission controller README to reflect the updated chart and its credential handling

The updated chart creates a credentials secret by default with empty content unless existingCredentialsSecret is used, but the README references the older key, so it's easy to create an external secret that never actually gets mounted into the controller, at which point all policy reference lookups fail.

One way to confirm this condition is to:

kubectl exec -t <controller pod> -- /bin/cat /credentials/credentials.json

If that returns {} then you're probably using the old method and it isn't in the pod properly.

Engine upgrade job needs additional configuration

The stable/anchore-engine/templates/engine_upgrade_job.yaml does not have/use nodeSelector, affinity, or tolerations properties in its job spec. We label and taint specific nodes for Anchore that have access to an external postgres database. The other components have configurable properties that allow & require scheduling on our Anchore nodes, but the upgrade job does not, preventing the post-install hook job from completing.

The enterprise_upgrade_job.yaml looks to have the same issue.

I'd be happy to submit a PR for this if it would be acceptable.
