This repository is deprecated and no longer maintained.
If you're looking for a host-local container vulnerability scanner see our new projects:
Software Bill of Materials for Containers: Syft
Container Vulnerability Scanning: Grype
Helm charts for Anchore tools and services
Home Page: http://charts.anchore.io
License: Apache License 2.0
When I configure an internal port other than 443 in the values, it doesn't work: because secure-port isn't set, the admission-controller still listens on the default port 443. The readiness probe then fails, since the pod exposes the configured port while the admission-controller is serving on the default 443.
fixed in: #7
The stable/engine chart should only install the open source components of anchore-engine. All enterprise related components will be removed from the chart to reduce complexity.
Currently you cannot easily roll out a new license to enterprise deployments without making changes to the deployment spec. Add a value to the anchoreEnterpriseGlobal section for forcing license updates.
Receiving the following error when trying to deploy the chart on Helm v3:
Error: unable to build kubernetes objects from release manifest: unable to recognize no matches for kind "Deployment" in version "extensions/v1beta1"
The issue is resolved by updating the Deployment template from extensions/v1beta1 to apps/v1 and the webhook template to admissionregistration.k8s.io/v1. The latter also requires declaring the following in the webhook config: admissionReviewVersions and sideEffects.
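For reference, a minimal sketch of a webhook configuration under admissionregistration.k8s.io/v1 with the two newly required fields (all names and the path are illustrative, not taken from the chart):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-anchore-admission-controller   # hypothetical name
webhooks:
  - name: example.admission.anchore.io
    # Both fields below are required in v1 (they were optional in v1beta1)
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
    clientConfig:
      service:
        name: example-admission-controller     # hypothetical service
        namespace: default
        path: /validate                        # hypothetical path
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
```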
Hi, I'm having some issues installing the admission controller. The helm install hangs and the debug output doesn't say much, but if I manually inspect with kubectl, I see:
# kubectl describe job admission-init-ca
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 119s (x5 over 4m29s) job-controller Error creating: Internal error occurred: failed calling admission webhook "admission-anchore-admission-controller.admission.anchore.io": the server could not find the requested resource
# kubectl describe ValidatingWebhookConfiguration admission-anchore-admission-controller.admission.anchore.io
Name: admission-anchore-admission-controller.admission.anchore.io
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"admissionregistration.k8s.io/v1beta1","kind":"ValidatingWebhookConfiguration","metadata":{"annotations":{},"name":"admissio...
API Version: admissionregistration.k8s.io/v1beta1
Kind: ValidatingWebhookConfiguration
Metadata:
Creation Timestamp: 2019-02-03T14:39:49Z
Generation: 1
Resource Version: 623778
Self Link: /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations/admission-anchore-admission-controller.admission.anchore.io
UID: 8f5a7ff6-27c1-11e9-a425-0a002ac40f5c
Webhooks:
Client Config:
Ca Bundle: <redacted>
Service:
Name: kubernetes
Namespace: default
Path: /apis/admission.anchore.io/v1beta1/imagechecks
Failure Policy: Fail
Name: admission-anchore-admission-controller.admission.anchore.io
Namespace Selector:
Rules:
API Groups:
API Versions:
*
Operations:
CREATE
Resources:
pods
Events: <none>
It's worth mentioning there have been a few reinstalls of the chart at this point. It was working initially, but I'm seeing this after running helm delete --purge and running the cleanup script.
As a side note, it might be that the ValidatingWebhookConfiguration object also needs adding to the cleanup script.
NOTE tested again referencing the chart from a local git clone and didn't encounter this problem - does charts.anchore.io need updating?
Currently in stable/anchore-engine/templates/engine_configmap.yaml, the SAML secret is placed via a hard-coded values.yaml value. It should be possible to source this from a Kubernetes secret instead.
# Locations for keys used for signing and encryption. Only one of 'secret' or 'public_key_path'/'private_key_path' needs to be set. If all are set then the keys take precedence over the secret value
# Secret is for a shared secret and if set, all components in anchore should have the exact same value in their configs.
keys:
>>> secret: {{ .Values.anchoreGlobal.saml.secret }}<<<
{{- with .Values.anchoreGlobal.saml.publicKeyName }}
public_key_path: /home/anchore/certs/{{- . }}
{{- end }}
{{- with .Values.anchoreGlobal.saml.privateKeyName }}
private_key_path: /home/anchore/certs/{{- . }}
{{- end }}
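One possible approach (a sketch, not the chart's current behavior): expose the value to the engine pods as an environment variable sourced from a Kubernetes secret, rather than rendering the literal into the configmap. The secret and key names below are illustrative:

```yaml
# In the deployment template: pull the shared SAML secret from an existing k8s secret
env:
  - name: ANCHORE_SAML_SECRET
    valueFrom:
      secretKeyRef:
        name: anchore-saml-secret   # hypothetical secret name
        key: samlSecret             # hypothetical key
```

If anchore-engine's config loading expands ${VAR}-style references (as it does for values like the database password), the configmap could then render secret: ${ANCHORE_SAML_SECRET} instead of the raw value.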
Currently, the update process is manual to update the state of charts.anchore.io to mirror the state of this repository.
The stable/feeds-service chart should use the latest bitnami postgresql chart as a dependency.
When running helm3 -n anchore install anchore-democtl -f ctl_values.yaml . the following error is shown:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(ValidatingWebhookConfiguration): unknown field "labels" in io.k8s.api.admissionregistration.v1beta1.ValidatingWebhookConfiguration
Right now the ANCHORE_DB_HOST environment variable is specified in the configmap and cannot be overridden with the .Values.anchoreGlobal.existingSecret value.
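A hedged sketch of one way to make this overridable (the helper template name is hypothetical): define the variable in the deployment spec, with the user-supplied secret taking precedence over the chart default, instead of baking it into the configmap:

```yaml
# Deployment env: prefer the user-supplied secret when configured
{{- if .Values.anchoreGlobal.existingSecret }}
envFrom:
  - secretRef:
      name: {{ .Values.anchoreGlobal.existingSecret }}
{{- else }}
env:
  - name: ANCHORE_DB_HOST
    value: "{{ template "anchore-engine.db.host" . }}"  # hypothetical helper template
{{- end }}
```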
Add support for HPA to the Chart templates for enterprise. The following snippet was created by a user for Anchore Engine in the template:
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-hpa-analyzer
  labels:
    app: {{ .Release.Name }}-anchore-engine
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  maxReplicas: {{ .Values.anchoreHorizontalAutoScaler.maxReplicas }}
  minReplicas: {{ .Values.anchoreHorizontalAutoScaler.minReplicas }}
  targetCPUUtilizationPercentage: {{ .Values.anchoreHorizontalAutoScaler.targetCpu }}
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-anchore-engine-analyzer
Our official stable/anchore-engine chart will need to be migrated to this chart repository in preparation for the helm2 deprecation.
The stable chart repository will be deprecated to security-only fixes on May 13, 2020 and completely removed on November 13, 2020.
Only the latest chart is reachable.
https://hub.helm.sh/charts/anchore/anchore-engine
When you build a new chart the previous one disappears.
To allow the chart to be installed on Kubernetes v1.15+, the dependent charts need to be unpacked and the API versions changed from extensions/v1beta1 to apps/v1. This preserves seamless upgrades for existing users while allowing the chart to be installed on Kubernetes v1.15 and higher.
Is there a volume I can mount, or something like that?
Or can I manage it with git?
All charts in this repository should go through a unified e2e chart testing process.
There is an example of a chart testing process utilizing GitHub Actions for the admission-controller chart. We would like this process to initially be set up in CircleCI so that it uses the same system as all other projects. If the CircleCI process is too complex or requires too much custom scripting/setup, we can consider using GitHub Actions instead.
The initial e2e test setup should use KIND to stand up k8s clusters utilizing multiple versions of k8s and ensure that the chart installs as expected.
Hi Team,
I am using the Anchore Helm charts and observed that the application pods use the Recreate strategy rather than the RollingUpdate strategy. The RollingUpdate strategy is very helpful for reducing downtime during upgrades. I just want to know if there is a reason for this.
For example, the analyzer pod has the Recreate strategy in the below file on line number 25.
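For reference, switching a deployment to rolling updates would look something like the sketch below (the surge/unavailable numbers are illustrative, and Recreate may be intentional for components that cannot run two copies concurrently):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow one extra pod during the rollout
      maxUnavailable: 0  # keep the old pod serving until the new one is ready
```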
Currently the chart README has very vague instructions on how to enable TLS with external database connections. These instructions should have more detailed steps to make configuring TLS as easy as possible.
https://github.com/anchore/anchore-charts/blob/master/stable/anchore-admission-controller/templates/init-ca/init-ca-script.yaml seems to have some manual jobs that must be applied to generate TLS certificates, and it requires internet access since it is based on downloads, apt, etc. This is a bit of a hassle/tricky in air-gapped environments where you might need to go through a proxy, etc.
Depending on the requirements for this certificate (what it is used for and why), this could perhaps be handled more smoothly by using cert-manager for certificate issuing and rotation. It may even be better to handle this outside of the chart and simply refer to an existing secret (which could be generated by any means, including cert-manager).
Background in slack: https://anchorecommunity.slack.com/archives/C4PJFNEEM/p1575834841399400
We will be refactoring the Kubernetes admission controller chart
- [ ] Update chart for use in Kubernetes v1.16 and above
- [ ] Update documentation (to include use cases for minikube and kind)
- [ ] Refactor admission controller framework
- [ ] TBD
The migration process for moving from legacy/anchore-engine and helm/charts/stable/anchore-engine to anchore/anchore-charts/stable/enterprise should be documented.
We will be refactoring the Kubernetes admission controller chart
- [ ] Create chart for use in Kubernetes v1.16 and above
- [ ] Create documentation (to include use cases for minikube and kind)
- [ ] TBD
In almost all cases within your helm charts, you will see secrets with the option to pull from the existingSecret secret, like this:
However, in the case of the Enterprise Feeds, you will see the following template used instead
This is extra confusing since the .feedsDbPassword is an option that is documented here: https://github.com/anchore/anchore-charts/blob/e0e753ca42d6b5ec041d9923e60062e5bf5a0c54/stable/anchore-engine/templates/secrets.yaml
Can you please fix the chart so that feeds can pull from existingSecret?
When migrating the stable/anchore-engine chart to the new chart repo, we should consider breaking it up into 3 smaller charts rather than keeping it as one large monolithic chart.
Before solving other issues I'd like to refactor the chart to follow best practices. The rbac.yaml
contains this rolebinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: kube-system
  name: extension-{{ template "anchore-admission-controller.fullname" . }}-authentication-reader-default
roleRef:
  kind: Role
  apiGroup: rbac.authorization.k8s.io
  name: extension-api{{ template "anchore-admission-controller.fullname" . }}-authentication-reader
subjects:
  - kind: ServiceAccount
    name: {{ template "anchore-admission-controller.fullname" . }}
which I find a bit strange, as the role is not defined. Is it really for binding to the roles defined in https://kubernetes.io/docs/tasks/access-kubernetes-api/ ?
I also see a cluster-rolebinding to a ClusterRole named namespace-reservation-{{ template "anchore-admission-controller.fullname" . }} - this does not exist, while {{ template "anchore-admission-controller.fullname" . }} does - is it a misspelling?
Ensure clear docs of how charts must be updated (PRs) and how updates are pushed to the chart repo.
The postgresql dependent chart should be updated from v1.0.0 to the latest available postgresql chart. This upgrade has been avoided because of backwards compatibility issues for deployments using v1.0.0, given the required upgrade path for the postgresql chart.
The migration to a new chart repository will be a good time to update the dependencies of stable/anchore-engine, in conjunction with the upgrade to Chart apiVersion: v2 for Helm 3 support.
Throughout nearly all of the Helm Chart for Anchore, the existingSecret is properly respected for pulling secrets. However, in the case of the Enterprise UI, this existing secret is completely ignored for the Postgres Password, and .Values.postgresql.postgresPassword is used (which won't be the right password).
Can the UI deployment please read this password from the existing secret instead of insecurely reading only from values?
Hello, I have just installed the Anchore Helm chart and I am very interested in having access to the metrics.
I see that we can have access to the metrics of each component by accessing the /metrics path.
In my case, I have deployed Anchore using an ALB ingress. Everything is working fine, but I need a Service for the analyzer, which does not exist.
If I am not wrong, the only way to access the analyzer metrics is to create an extra service.
Am I missing something?
Thank you very much in advance.
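A hedged sketch of the kind of extra Service the reporter describes (the name, labels, and port are illustrative; the actual selector must match the labels the chart renders onto the analyzer pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: anchore-engine-analyzer-metrics   # hypothetical name
spec:
  selector:
    app: anchore-engine       # must match the analyzer pod labels in the chart
    component: analyzer
  ports:
    - name: metrics
      port: 8084              # assumed analyzer API port; /metrics is served on the component's API port
      targetPort: 8084
```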
The anchore-charts repository will need to be added to the helm hub when the stable/anchore-engine chart is migrated.
Hello Anchore!
I was wondering if it'd be possible to support specifying k8s service account for anchore deployments.
I'm not too familiar with Helm, but perhaps my suggested fix is to add these lines under the pod spec, so that we can specify in values.yaml which k8s service account to run deployments with.
{{- with .Values.anchoreGlobal.serviceAccountName }}
serviceAccountName: {{ . }}
{{- end }}
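With a template like that in place (a sketch; the value path mirrors the snippet above), values.yaml could then opt in per installation:

```yaml
anchoreGlobal:
  serviceAccountName: anchore-engine-sa   # hypothetical service account, created outside the chart
```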
Thanks,
The stable/feeds-service chart should use the latest bitnami postgresql chart as a dependency.
The stable/enterprise chart dependencies should use the latest bitnami postgres and redis charts from helm hub.
Currently there is no way in the chart to specify a feeds service that is deployed externally to the chart. This functionality should be added using a conditional like postgres/redis support.
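A hedged sketch of what such a conditional could look like in values.yaml (the key names and URL are illustrative, mirroring how the chart gates its postgres/redis dependencies):

```yaml
feeds:
  enabled: false                                      # hypothetical gate: skip deploying the bundled feeds service
  externalUrl: https://feeds.example.internal:8448    # point engine/enterprise at an externally deployed feeds service
```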
Add templates for fixes and feature requests to help streamline things
Per the changes in anchore/anchore-engine#641, we should be able to specify these db_engine_args in a helm chart values.yaml file.
Relevant files:
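Per the linked change, anchore-engine's config can carry SQLAlchemy engine arguments under db_engine_args; a hedged sketch of how values.yaml might expose them (the values key is illustrative, the argument names are standard SQLAlchemy create_engine parameters):

```yaml
anchoreGlobal:
  dbConfig:
    engineArgs:          # hypothetical key, rendered into db_engine_args in config.yaml
      pool_size: 30
      max_overflow: 100
      pool_pre_ping: true
```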
Helm chart testing works now with the Helm tool 'ct' and the GitHub Action helm/chart-testing-action.
This task is to add our own end-to-end or smoke testing, piggybacking on the existing Action.
Based on a comment from @nurmi, there was a proposal to remove certain default values from the Helm chart and explicitly require the user to set them manually. The goal is to prevent users from deploying an instance of Anchore with too few resources, which is always a risk if defaults are assumed to work equally well for evals, POCs, and production.
Ensure the chart can run on OpenShift 4.5 and its limited write and PID options (dynamic pids, and volume mounts)
The enterprise deployment process should be self-contained in a chart that does not include standard open source engine installations.
https://github.com/anchore/anchore-charts/blob/master/stable/anchore-admission-controller/files/get_validating_webhook_config.sh seems to contain some manual steps for registering the webhook. This should be documented in the README.md.
Additionally, could this simply be templated and handled by Helm alongside the rest of the chart?
Further, I think it makes sense to set the failurePolicy to Ignore in order to avoid the control plane malfunctioning if the controller starts failing.
Each chart should have specific tests in the tests/ directory of each chart repo that exercise various deployments of the chart (see https://helm.sh/docs/topics/chart_tests/).
These tests will consist of a matrix of different values files to test all possible deployment options of the chart.
Relevant once the chart has migrated to this repo from stable.
Anchore upgrades itself, including db schema updates, but those can take time, so a separate, non-service bound, entity in the deployment should handle the upgrade. Could use upgrade hooks, jobs, or init-containers to accomplish this, but the requirements for the process are:
The migration process from helm/charts/stable/anchore-engine to the anchore-charts/stable/anchore-engine chart needs to be documented to make the process as simple as possible for existing users.
Hi. According to the values.yaml file there should be affinity and tolerations, but they are missing in the deployment template. Can you please add them?
The migration process for moving from legacy/anchore-engine and helm/charts/stable/anchore-engine to anchore/anchore-charts/stable/anchore-engine should be documented.
The migration process for moving from legacy/anchore-engine and helm/charts/stable/anchore-engine to anchore/anchore-charts/stable/feeds-service should be documented.
Currently the feeds service is deployed using the same chart as anchore-engine/enterprise. This deployment should be self-contained in its own chart.
The chart should deploy the anchore feeds service with all options available for configuration, and the deployment should interface seamlessly with the anchore-engine/enterprise charts.
The updated chart will create a credentials secret by default with empty content unless existingCredentialsSecret is used, but the README references the older key, so it's easy to create an external secret that is never actually mounted into the controller, causing all the policy reference lookups to fail.
One way to confirm this condition is to run:
kubectl exec -t <controller pod> /bin/cat /credentials/credentials.json
If that returns {} then you're probably using the old method and the secret isn't in the pod properly.
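A hedged values sketch of the newer key (the key name comes from the issue text; the secret name is illustrative and must refer to a pre-created secret containing credentials.json):

```yaml
existingCredentialsSecret: anchore-admission-credentials   # hypothetical pre-created secret
```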
The stable/anchore-engine/templates/engine_upgrade_job.yaml does not have/use nodeSelector, affinity, or tolerations properties in its job spec. We label and taint specific nodes for anchore with specific access to an external postgres database. The rest of the components have configurable properties that allow & require scheduling on our anchore nodes, but the upgrade job does not, preventing the post-install hook job from completing.
The enterprise_upgrade_job.yaml looks to have the same issue.
I'd be happy to submit a PR for this if that would be acceptable.