lacework / helm-charts
Official Lacework Helm Charts
License: Apache License 2.0
The Lacework Agent Helm chart package contains the following (as shown by extracting the archive):
tar -xzvf <path>/helm-charts/lacework-agent-5.5.2.tgz
lacework-agent/Chart.yaml
lacework-agent/values.yaml
lacework-agent/values.schema.json
lacework-agent/templates/_helpers.tpl
lacework-agent/templates/access-token.yaml
lacework-agent/templates/configmap.yaml
lacework-agent/templates/daemonset.yaml
lacework-agent/templates/tests/test-agent.yaml
lacework-agent/README.md
lacework-agent/dev_install.sh
lacework-agent/dev_uninstall.sh
lacework-agent/index.yaml
lacework-agent/release_install.sh
All Bash files are intended for development only and shouldn't be included in the Helm package itself.
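For reference, a minimal install sketch using the chart repository referenced later on this page (the access token value is a placeholder):

helm repo add lacework https://lacework.github.io/helm-charts/
helm repo update
helm upgrade --install lacework-agent lacework/lacework-agent \
  --namespace lacework --create-namespace \
  --set laceworkConfig.accessToken=<your-access-token>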
Two items that should be addressed before clusterAgent is released:
1. The image uses the latest tag; I assume this should be pinned to a release: https://github.com/lacework/helm-charts/blob/main/lacework-agent/templates/daemonset.yaml#L14-L15
2. A duplicated key: https://github.com/lacework/helm-charts/blob/main/lacework-agent/templates/daemonset.yaml#L131-L132
Surprisingly, neither Helm nor Kubernetes seems to care that this key is duplicated. However, flux does refuse to deploy this Helm chart because of the duplication. Also, undefined behavior is never good. It looks like L14-L15 are a mistake and should be removed.
By default, Lacework uses the default service account, which is something that CIS Benchmark recommends against:
Create explicit service accounts wherever a Kubernetes workload requires specific access to the Kubernetes API server.
I noticed that a serviceAccountName has been added to laceworkConfig. However, this appears to be specific to OpenShift, and assigning a value to this will not create a ServiceAccount resource with this name.
Adding a namespace will enable templates to render correctly.
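A minimal sketch of what an explicit service account could look like here (the serviceaccount.yaml template and the lacework-agent name are assumptions, not current chart behavior):

# templates/serviceaccount.yaml (hypothetical)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: lacework-agent
  namespace: {{ .Release.Namespace }}

The daemonset pod spec would then reference it via serviceAccountName: lacework-agent instead of falling back to the default service account.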
Hi Lacework support team:
We are facing the issue described below. Our Chart.yaml:
apiVersion: v2
appVersion: "1.0"
version: 5.5.0
name: lacework-agent
namespace: lacework
description: A Helm chart for Kubernetes Lacework Agent
type: application
icon: https://www.lacework.com/wp-content/uploads/2019/07/Lacework_Logo_color_2019.svg
dependencies:
  - name: lacework-agent
    version: 5.5.0
    repository: https://lacework.github.io/helm-charts/
And we have a values file (value-dev-staging.yaml) to inject the values:
lacework-agent:
  laceworkConfig:
    accessToken: XXX
    serverUrl: "https://api.lacework.net"
    fim:
      coolingPeriod: 60
      crawlInterval: 60
      noAtime: true
      runAt: 03:01
    kubernetesCluster: XXX
    env: staging
    autoUpgrade: disable
  resources:
    requests:
      cpu: 200m
      memory: 128Mi
    limits:
      cpu: 300m
      memory: 1024Mi
lacework % helm template -f ./value-dev-staging.yaml lacework-agent ./
Error: values don't meet the specifications of the schema(s) in the following chart(s):
lacework-agent:
- (root): Additional property global is not allowed
The issue is that a parent chart implicitly shares its "global" values with child charts, but in https://github.com/lacework/helm-charts/blob/main/lacework-agent/values.schema.json, "additionalProperties" is false everywhere.
Some other Helm charts, such as istio, have reported the same issue before; see the three links below:
istio/istio#35496
helm/helm#10392
helm/helm#8489
The workaround is to either:
A. change all "additionalProperties" to true, or
B. delete the values.schema.json file.
Can you please help with this issue? Thanks!
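A minimal sketch of a narrower fix than A or B, assuming the chart keeps strict validation: explicitly allow a global key at the root of values.schema.json, alongside the existing properties, so Helm's implicit parent-to-child global sharing passes validation:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "global": { "type": "object" }
  }
}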
Hi.
I had to modify the chart to allow passing the proxyurl setting in the JSON config file embedded in the configmap. Could you add this setting to your chart? Thanks!
Under the setup instructions for JFrog, there is a section on setting up webhooks so the registry notifies the proxy scanner when a new image has been pushed. As part of this you need the IP of the proxy scanner; my understanding is that, as-is, I have no way to reach the proxy scanner without an ingress and without exposing the service through a domain. It would be very helpful to be able to deploy the needed ingress, and to define the domain it matches on, in values.yaml so that it updates when deployed.
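A minimal sketch of what such values could look like (the ingress block and its keys are hypothetical, not current chart options):

ingress:
  enabled: true
  className: nginx
  host: proxy-scanner.example.com
  tls: []

The chart would then render a networking.k8s.io/v1 Ingress that routes this host to the proxy-scanner Service on port 8080.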
There's an inconsistency between the release and the image version.
The datacollector image is still at 6.5.2.
Hi.
The chart exposes daemonset affinity through .Values.daemonset.affinity, but this value is not used in the DaemonSet template. I need it to avoid the pods being scheduled on EKS Fargate nodes.
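A minimal sketch of wiring the value into the DaemonSet pod spec (indentation depends on where it lands in the template):

      {{- with .Values.daemonset.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}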
As many of you are probably aware, Amazon EKS supports IRSA (IAM Roles for Service Accounts). An easy way to configure it is to either allow the chart to use a custom ServiceAccount, or to add a field for annotations on the existing ServiceAccount, i.e. something like this:
In values.yaml:
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::1111111111:role/lacework-proxy-agent
Then in serviceaccount.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.name }}
  annotations:
    {{- range $key, $val := .Values.serviceAccount.annotations }}
    {{ $key }}: {{ $val }}
    {{- end }}
Why are these scrape times required?
"clusterAgent": {
"type": "object",
"required": [
"image",
"enable",
"clusterType",
"scrapeInitialDelayMins",
"scrapeIntervalMins"
],
Their type declarations include null. Suggested fix: if null is allowed, then remove them from required; otherwise, set defaults for them in values.yaml and remove null from their types. Originally posted by @joebowbeer in #85 (comment)
Hi All,
I am trying to deploy this Helm chart with the below config:
lacework-agent:
  laceworkConfig:
    accessToken: XXX
    serverUrl: "https://api.lacework.net"
    kubernetesCluster: XXX
    env: staging
    autoUpgrade: disable
  resources:
    requests:
      cpu: 200m
      memory: 128Mi
    limits:
      cpu: 300m
      memory: 1024Mi
And I see the error as below:
lacework % helm template lacework-agent . -f value-dev-staging.yaml --include-crds
Error: values don't meet the specifications of the schema(s) in the following chart(s):
lacework-agent:
- laceworkConfig.fim: coolingPeriod is required
- laceworkConfig.fim: crawlInterval is required
- laceworkConfig.fim: noAtime is required
- laceworkConfig.fim: runAt is required
However, as the official chart describes, these parameters should be optional:
https://github.com/lacework/helm-charts/blob/main/lacework-agent/values.yaml
# [Optional] Configure File Integrity Monitoring
# https://docs.lacework.com/configure-agent-behavior-in-configjson-file#file-integrity-monitoring-fim-properties
fim:
  # [Optional] Configure coolingperiod in minutes
  coolingPeriod:
  # [Optional] Configure crawlinterval in minutes
  crawlInterval:
  enable: true
  # [Optional] Configure file paths to ignore
  fileIgnore: []
  # [Optional] Configure file paths to include
  filePath: []
  # [Optional] Set to true to prevent atime from being used in metadata hash computation
  noAtime:
  # [Optional] Run the FIM scan interval at the specified time of the day (HH:MM)
  runAt:
But the values.schema.json forces us to set these:
https://github.com/lacework/helm-charts/blob/main/lacework-agent/values.schema.json
"fim": {
"type": "object",
"additionalProperties": false,
"required": [
"coolingPeriod",
"crawlInterval",
"enable",
"fileIgnore",
"filePath",
"noAtime",
"runAt"
],
I already submitted a support ticket to the Lacework vendor and got the answer below:
Hi Lili,
I hope you had a good weekend. My name is Katherine and I've been looking into your questions around enabling FIM. When you enable this feature in the agent there will be an increase in CPU consumption. Since there's a high correlation with the number of connections the agent is being asked to monitor, this is unique to the workload on that machine. I'd recommend starting with some of the default values outlined below, depending on your needs, and we can assist with tuning them if you find the agent is consuming too much CPU or not running often enough.
Here's some guidance on what each parameter refers to:
[coolingPeriod](https://docs.lacework.com/configure-agent-behavior-in-configjson-file#mode-property): whether to wait before starting FIM. If desired that this runs immediately, set to 0, otherwise the default should be 60 minutes
[crawlInterval](https://docs.lacework.com/deploy-on-kubernetes#helm-configuration-options): the FIM scan interval (how frequently it scans FIM), the default should be 60 minutes
[noAtime](https://docs.lacework.com/deploy-on-kubernetes#helm-configuration-options): true - when true, the access time of a file is not included in the computation of the hash of the file
[runAt](https://docs.lacework.com/configure-agent-behavior-in-configjson-file#runat-property): 03:01 - by default, FIM is run at a random time of day, to change this behaviour and have FIM run at a specific time, set this parameter
It's important to note that if you set the runAt property, this will override the crawlInterval with the effect of it being read as if '0' had been entered.
Please let me know if you have any additional questions.
Many thanks,
Katherine
Specifically, due to the answer below, we don't want to set runAt, so we DO NOT WANT to set up those parameters:
It's important to note that if you set the runAt property, this will override the crawlInterval with the effect of it being read as if '0' had been entered.
They should all be optional parameters, not required.
Can you please make those parameters not required? Thank you so much for your help; looking forward to hearing from you.
Lili
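A minimal sketch of the requested change to values.schema.json, assuming the chart keeps the same property names (only enable stays required, and the optional fields also accept null):

"fim": {
  "type": "object",
  "additionalProperties": false,
  "required": [
    "enable"
  ],
  "properties": {
    "enable": { "type": "boolean" },
    "coolingPeriod": { "type": ["integer", "null"] },
    "crawlInterval": { "type": ["integer", "null"] },
    "fileIgnore": { "type": "array" },
    "filePath": { "type": "array" },
    "noAtime": { "type": ["boolean", "null"] },
    "runAt": { "type": ["string", "null"] }
  }
}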
After deploying the proxy scanner using the official Helm chart, we are unable to do an on-demand scan and get a connection refused error. From the logs we can see that the server is listening on port 8080, but it does not respond when we curl the service.
image version 0.9.0
Proxy scanner logs
[WARNING]: 2022-09-20 07:15:08 - Error while loading cache file. Scanner will start with bootstap mode: open /opt/lacework/lacework_proxy_scanner_state.json.gz: no such file or directory
[ERROR]: 2022-09-20 07:15:08 - Error while loading cache. Running in bootstrap mode. open /opt/lacework/lacework_proxy_scanner_state.json.gz: no such file or directory
[INFO]: 2022-09-20 07:15:13 - Starting server..
[INFO]: 2022-09-20 07:15:13 - ScanDataHandlerWorker #1: Starting..
[INFO]: 2022-09-20 07:15:13 - RegistryScannerWorkers #0: Starting..
[INFO]: 2022-09-20 07:15:13 - Listener started
[INFO]: 2022-09-20 07:15:13 - server started successfully on port 8080
curl output
curl lacework-scanner-proxy-scanner.lacework.svc.cluster.local:8080
curl: (7) Failed to connect to lacework-scanner-proxy-scanner.lacework.svc.cluster.local port 8080 after 3 ms: Connection refused
Hello,
It seems that there is a new release of the lacework-agent chart; however, the .tgz archive was not released: helm isn't able to download the chart.
Hi.
I had some trouble with the initial setup of the agents deployed through a daemonset, because the value I had configured for .Values.laceworkConfig.serverUrl was missing the "https://" prefix. Could you extend the Helm JSON schema with a pattern for validating this field?
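A minimal sketch of such a constraint in values.schema.json (the property name matches the chart value; the exact pattern is an assumption about the desired check):

"serverUrl": {
  "type": "string",
  "pattern": "^https://"
}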
The cluster-agent uses the same priorityClassName and tolerations as the daemonset (.Values.tolerations & .Values.priorityClassName). We should be able to configure these values for the cluster-agent independently, for example under clusterAgent.priorityClassName.
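A minimal sketch of the requested values layout (these clusterAgent keys are the proposal, not current chart options):

clusterAgent:
  priorityClassName: ""
  tolerations: []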
Instead of providing the access token in plain text through Helm values, which is fine for testing purposes, allow the sensitive values to be provided from a K8s Secret as a volume mount or environment variables.
This would be the minimum step up in security to get a production-ready deployment for us.
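A minimal sketch of what the container spec could render instead of an inline token (the secret name lacework-agent-token and key access-token are hypothetical):

env:
  - name: LaceworkAccessToken
    valueFrom:
      secretKeyRef:
        name: lacework-agent-token
        key: access-token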
As per lacework documentation:
https://docs.lacework.net/onboarding/integrate-proxy-scanner-with-jfrog-registry-auto-polling
scan_public_registries: false
static_cache_location: /opt/lacework
lacework:
  account_name: <my-lacework-account-name>
  integration_access_token: <my-lacework-access-token>
registries:
  - domain: <my-jfrog-artifactory-domain>
    name: <name-for-registry-integration>
    ssl: true
    auto_poll: true
    credentials:
      user_name: "jfrog-user-name"
      password: "jfrog-user-password"
    poll_frequency_minutes: 20
    disable_non_os_package_scanning: false
    go_binary_scanning:
      enable: true
whereas,
domain: Adjust the domain to your JFrog environment. Do not include the http(s):// portion in the domain.
Use the same domain that you use for Docker login. For example:
If you log into Docker using dockerHost:Port, use domain: dockerHost:Port.
If you log into Docker using dockerHost, use domain: dockerHost.
From JFROG documentation:
https://jfrog.com/help/r/jfrog-artifactory-documentation/docker-registries-and-repositories
Both Artifactory and Docker use the term "repository", but each uses it in a different way.
A Docker repository is a hosted collection of tagged images that, together, create the file system for a container
A Docker registry is a host that stores Docker repositories
So the domain name for my JFrog Artifactory is "artifactory.mgmt.aws.uk.org".
When I use the config file below (note that registries are given under config.registries, as per the documentation), where lacework-values.yaml is:
config:
  scan_public_registries: false
  static_cache_location: /opt/lacework
  lacework:
    account_name: xxx
    integration_access_token: xxxxxxxxxxx
  registries:
    - auto_poll: true
      credentials:
        password: "xxxxxx"
        user_name: "xxxxxxx"
      domain: artifactory.mgmt.aws.uk.org
      go_binary_scanning:
        enable: false
      scan_directory_path: ""
      is_public: false
      name: docker-local
      poll_frequency_minutes: 20
      ssl: false
and the pod fails to run. Errors:
[WARNING]: 2024-01-24 19:06:09 - Error while loading cache file. Scanner will start with bootstap mode: open /opt/lacework/lacework_proxy_scanner_state.json.gz: no such file or directory
[ERROR]: 2024-01-24 19:06:09 - Error while loading cache. Running in bootstrap mode. open /opt/lacework/lacework_proxy_scanner_state.json.gz: no such file or directory
[INFO]: 2024-01-24 19:06:09 - Response headers: {"Connection":"keep-alive","Content-Length":"87","Content-Type":"application/json;charset=ISO-8859-1","Date":"Wed, 24 Jan 2024 19:06:09 GMT","Docker-Distribution-Api-Version":"registry/2.0","Strict-Transport-Security":"max-age=31536000","Www-Authenticate":"Bearer realm=\"https://artifactory.mgmt.aws.uk.org/v2/token\",service=\"artifactory.mgmt.aws.uk.org\""}
[INFO]: 2024-01-24 19:06:09 - registry (https://artifactory.mgmt.aws.uk.org) - got response status: 401 Unauthorized
[INFO]: 2024-01-24 19:06:09 - request url: https://artifactory.mgmt.aws.uk.org/v2/
[INFO]: 2024-01-24 19:06:09 - registry (https://artifactory.mgmt.aws.uk.org) - got wwwAuthenticateHeader: Bearer realm="https://artifactory.mgmt.aws.uk.org/v2/token",service="artifactory.mgmt.aws.uk.org"
[INFO]: 2024-01-24 19:06:09 - Using authentication method: Bearer
[INFO]: 2024-01-24 19:06:09 - Requesting bearerAccessToken from https://artifactory.mgmt.aws.uk.org/v2/token?service=artifactory.mgmt.aws.uk.org&account=lacework&scope=registry:catalog:*
[ERROR]: 2024-01-24 19:06:09 - registry(https://artifactory.mgmt.aws.uk.org): Error wile parsing catalog response: EOF
[FATAL]: 2024-01-24 19:06:09 - Invalid credentials found for registry(https://artifactory.mgmt.aws.uk.org). Please correct credentials. Can not validate credential for registry
But the same works fine if I add registries under config.lacework.registries instead of config.registries, as in the lacework-values.yaml below:
config:
  scan_public_registries: false
  static_cache_location: /opt/lacework
  lacework:
    account_name: xxx
    integration_access_token: xxxxxxxxxxx
    registries:
      - auto_poll: true
        credentials:
          password: "xxxxxx"
          user_name: "xxxxxxx"
        domain: artifactory.mgmt.aws.uk.org
        go_binary_scanning:
          enable: false
        scan_directory_path: ""
        is_public: false
        name: docker-local
        poll_frequency_minutes: 20
        ssl: false
and the pod runs fine now. Logs:
[WARNING]: 2024-01-25 11:01:09 - Error while loading cache file. Scanner will start with bootstap mode: open /opt/lacework/lacework_proxy_scanner_state.json.gz: no such file or directory
[ERROR]: 2024-01-25 11:01:09 - Error while loading cache. Running in bootstrap mode. open /opt/lacework/lacework_proxy_scanner_state.json.gz: no such file or directory
[INFO]: 2024-01-25 11:01:09 - Starting server..
[INFO]: 2024-01-25 11:01:09 - ScanDataHandlerWorker #1: Starting..
[INFO]: 2024-01-25 11:01:09 - Listener started
[INFO]: 2024-01-25 11:01:09 - server started successfully on port 8080
This time I was also able to get all Docker-based registries scanned successfully in the Lacework console, and all 10 Docker-type registries in my JFrog Artifactory are displayed in the console.
Questions:
Since the naming convention for the word "registry" differs between Lacework and JFrog, see:
https://jfrog.com/help/r/jfrog-artifactory-documentation/docker-registries-and-repositories
https://jfrog.com/help/r/jfrog-artifactory-documentation/local-docker-repositories
https://docs.lacework.net/onboarding/integrate-proxy-scanner-with-jfrog-registry-auto-polling
Looking forward to a quick response!
Thank you
Currently only the lacework-agent has the option to assign resources (and default values are provided).
clusterAgent lacks both defaults and the option to assign them.
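A minimal sketch of the requested option (the clusterAgent.resources key is the proposal; the numbers are placeholders):

clusterAgent:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi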
This is very helpful when Secret or ConfigMap content changes. With Stakater Reloader annotations, these deployments' pods would be restarted and the configuration would be refreshed.
At the moment I have to restart the pods manually each time.
Another common technique is to use Helm checksum annotations; see the sketch after the list of affected charts below:
admission-controller
proxy-scanner
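A minimal sketch of the checksum technique on a deployment's pod template (the configmap.yaml path mirrors the lacework-agent chart layout; the other charts would reference their own config templates):

spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

When the rendered ConfigMap changes, the annotation value changes, which triggers a rolling restart of the pods.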
First logs on every proxy-scanner start look like this:
[WARNING]: 2024-04-28 17:35:44 - Error while loading cache file. Scanner will start with bootstap mode: open /opt/lacework/lacework_proxy_scanner_state.json.gz: no such file or directory
[ERROR]: 2024-04-28 17:35:44 - Error while loading cache. Running in bootstrap mode. open /opt/lacework/lacework_proxy_scanner_state.json.gz: no such file or directory
It would make sense to be able to create a PVC (and add a volume in the deployment) just for the cache, so it would be persistent across pod restarts.
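A minimal sketch of what such values could look like (the persistence block is hypothetical; /opt/lacework is the cache path from the logs above):

persistence:
  enabled: true
  storageClassName: ""
  size: 1Gi
  mountPath: /opt/lacework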
The Lacework proxy-scanner chart doesn't support passing resource requests and limits via the values.yaml file.
The deployment in the admission-controller chart has no resource requests and limits set.
Is this intended to be left out?
I'm guessing it should be something like:
spec:
  template:
    spec:
      containers:
        - name: {{ include "admission.name" . }}
          {{- with .Values.resources }}
          resources:
            {{ toYaml . | nindent 10 | trim }}
          {{- end }}
I would be happy to open a PR
Thanks Gideon
Right now, the logic in the proxy-scanner chart is as follows: if skipCert is set to false, then data is added to the certs secret. This logic is broken, for three reasons relating to certs.serverKey and the {{ .Values.name }}-certs secret. Instead, it should work like in lacework-agent, where it is possible to provide the name of an existing secret (https://github.com/lacework/helm-charts/blob/main/lacework-agent/templates/daemonset.yaml#L61).
While this is defined, setting either true or all seems to work.
When we configure new clusters we would like to be able to configure the access token with an existing secret instead of needing to hard code the token in a values file that we commit to a repository.
Hi - can you provide some guidance on this? I'm trying to install the Lacework agent using the Helm chart and use an existing secret.
It works when I put the access token value in the values.yaml file, but I don't want to expose the secret in Helm; I want it pulled from AWS Secrets Manager. I have configured the secret-csi-driver and secret-store, and I also have a SecretProviderClass which will create and mount the Kubernetes secret for the Lacework agent.
I'm following this documentation: https://docs.lacework.com/onboarding/deploy-on-kubernetes#specify-an-existing-secret
The secret should be created with the name LaceworkAccessToken in the EKS cluster.
my values.yaml:
laceworkConfig:
  accessToken:
    existingSecret:
      key: LaceworkAccessToken
      name: LaceworkAccessToken
  env: {{my-env}}
  kubernetesCluster: {{my-cluster-name}}
  serviceAccountName: lacework-sa
However, I end up getting the below error:
Error: Failed to render chart: exit status 1: Error: values don't meet the specifications of the schema(s) in the following chart(s):
lacework-agent:
- laceworkConfig: accessToken is required
Can you let me know about this issue?
Hi there!
I am currently setting up the Lacework proxy scanner to scan our on-premise Harbor registry. I would like to set this up with the Lacework proxy scanner Helm chart from this repository.
I am not sure if there is a problem with the handling of the config.registry_secret_name value or if I maybe don't really understand this; hopefully someone could help me here. :)
From the docs (https://docs.lacework.com/onboarding/integrate-proxy-scanner) I assumed that I can set either config.registries or config.registry_secret_name, but not both at the same time. So I want to set only config.registry_secret_name. The problem I am encountering is that when I provide a secret with the registries config in it, name it e.g. "lacework-proxy-registries", and set that name in the config.registry_secret_name value, there will be no volume for the registries in the pod, as the Helm templating does not add this volume when config.registries is not set.
My example values.yaml:
config:
  static_cache_location: /opt/lacework
  scan_public_registries: true
  lacework:
    account_name: <our account>
    integration_access_token:
      existingSecret:
        name: lacework-proxy-scanner-access-token
        key: LACEWORK_PROXY_SCANNER_ACCESS_TOKEN
  default_registry: <our harbor registry URL>
  registry_secret_name: lacework-proxy-registries
I forked this repo and tried to change the behaviour in this pull request, in case I understand it all correctly and am not missing something that could fix the behaviour: #160
According to the docs, lacework agent is not compatible with Kubernetes 1.29
https://docs.lacework.net/onboarding/deploy-on-kubernetes#supported-kubernetes-environments
The current version appears to be using the following APIs that were removed in 1.29:
Looks like .laceworkConfig.anonymizeIncoming was introduced in v6.3.0 as an optional field, along with the child value .netmask. Unfortunately, attempting to deploy it results in:
Error: values don't meet the specifications of the schema(s) in the following chart(s):
lacework-agent:
- laceworkConfig.anonymizeIncoming: netmask is required
I can get past it by including the .laceworkConfig.anonymizeIncoming key with a null value in my values. I'm unable to diagnose it further at the moment and will be moving forward with that workaround. If I were debugging this, I'd check whether there's a conflict between values.yaml and .netmask being set as a required value of .laceworkConfig.anonymizeIncoming.
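For reference, the workaround in values form (an explicit null for the key, as described above):

laceworkConfig:
  anonymizeIncoming: null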
I have cloudservice.gke.autopilot: true set in my values.yaml, but I'm getting the following error when attempting to deploy the daemonset:
Error from server (GKE Warden constraints violations): error when creating "STDIN": admission webhook "gkepolicy.common-webhooks.networking.gke.io" denied the request: GKE Warden rejected the request because it violates one or more constraints.
Violations details: {"[denied by autogke-disallow-hostnamespaces]":["enabling hostPID is not allowed in Autopilot.","enabling hostNetwork is not allowed in Autopilot."],"[denied by autogke-disallow-privilege]":["container lacework is privileged; not allowed in Autopilot"],"[denied by autogke-no-write-mode-hostpath]":["hostPath volume log in container lacework is accessed in write mode; disallowed in Autopilot.","hostPath volume sys in container lacework is accessed in write mode; disallowed in Autopilot.","hostPath volume hostlacework in container lacework is accessed in write mode; disallowed in Autopilot.","hostPath volume hostlaceworkcontroller in container lacework is accessed in write mode; disallowed in Autopilot.","hostPath volume passwd used in container lacework uses path /etc/passwd which is not allowed in Autopilot. Allowed path prefixes for hostPath volumes are: [/var/log/].","hostPath volume group used in container lacework uses path /etc/group which is not allowed in Autopilot. Allowed path prefixes for hostPath volumes are: [/var/log/].","hostPath volume hostroot used in container lacework uses path / which is not allowed in Autopilot. Allowed path prefixes for hostPath volumes are: [/var/log/]."]}
Is there documentation around what actually needs to be set or unset when deploying to GKE autopilot?
Cluster-agent is a Deployment, so we should be able to pass a nodeSelector; this is currently not supported:
https://github.com/lacework/helm-charts/blob/main/lacework-agent/templates/cluster-agent.yaml
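A minimal sketch of wiring it into the cluster-agent pod spec (the clusterAgent.nodeSelector key is the proposal, not a current chart option):

      {{- with .Values.clusterAgent.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}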
We are facing problems trying to deploy lacework-agent with ArgoCD.
chart.yaml:
apiVersion: v2
appVersion: "1.0"
description: Lacework Agent
home: https://www.lacework.com
icon: https://www.lacework.com/wp-content/uploads/2019/07/Lacework_Logo_color_2019.svg
keywords:
  - monitoring
  - security
  - run-time
  - metric
  - troubleshooting
kubeVersion: '> 1.9.0-0'
maintainers:
  - email: [email protected]
    name: lacework-support
name: lacework-agent
version: 6.10.4
values.yaml:
lacework-agent:
  clusterAgent:
    # [Optional] Should we install cluster agent.
    enable: true
    # [Optional] Cluster type.
    clusterType: eks
    # [Optional] Cluster region.
    clusterRegion: xx-xxxxx-x
  laceworkConfig:
    # [Required] An access token is required before running agents.
    # Visit https://<LACEWORK UI URL> for eg: https://lacework.lacework.net
    accessToken: xxxxxxxxxx
    # [Optional] Give your k8s environment a friendly name
    # https://docs.lacework.com/onboarding/add-agent-tags
    env: dev
    # [Optional] Kubernetes cluster name
    # https://support.lacework.com/hc/en-us/articles/360005263034-Deploy-on-Kubernetes
    kubernetesCluster: cluster-dev
    # [Required] Region specific Lacework service URL. Defaults to the US region.
    serverUrl: https://api.fra.lacework.net
ArgoCD error:
Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = `helm template . --name-template lacework --namespace lacework --kube-version 1.26 --values <path to cached source>/values.yaml <api versions removed> --include-crds` failed exit status 1: Error: template: lacework-agent/templates/_helpers.tpl:38:28: executing "lacework-agent.image" at <.Values.image.registry>: nil pointer evaluating interface {}.registry Use --debug flag to render out invalid YAML
Looks like an issue with the values.schema.json file. Can we disable this validation?
We can deploy it successfully through the helm CLI, so the values.yaml file seems to be correct.
Stack versions:
ArgoCD version v2.9.0+9cf0c69
Helm v3
EKS version 1.26
kind regards!
If I set a toleration to allow the node agent to work on all the nodes, the scanner process also gets that toleration, which is very undesirable.
The values in the values.yaml file, as well as the comments, suggest that these values are optional:
helm-charts/lacework-agent/values.yaml, lines 38 to 51 in 80f998e
These values are required in the schema file:
helm-charts/lacework-agent/values.schema.json, lines 142 to 150 in 80f998e
So there is conflicting information here :) are they required or optional?
The schema says this value is used to "Give your k8s environment a friendly name"
The only example I can find in the docs is: "Env":"k8s"
Is this displayed in the dashboard? Is there a way to filter on this?
The Secret gets created no matter what. This value in the values.yaml file is respected in the Deployment object, but a pointless Secret resource is still created. I will be submitting a PR that fixes this oversight.
CertManager is an operator that manages certificates from different issuers, like Let's Encrypt, Vault, self-signed, and others.
In a similar fashion to the Prometheus Operator (now called kube-prometheus-stack), these charts could generate their own certificates without requiring any external provisioning:
https://github.com/prometheus-community/helm-charts/blob/c3cc929a74d77b9486171c604e976fa18843ee5c/charts/kube-prometheus-stack/templates/prometheus-operator/certmanager.yaml#L1
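A minimal sketch of the idea (names are placeholders, and a matching self-signed Issuer is assumed to exist):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: {{ .Values.name }}-certs
spec:
  secretName: {{ .Values.name }}-certs
  dnsNames:
    - {{ .Values.name }}.{{ .Release.Namespace }}.svc
  issuerRef:
    name: {{ .Values.name }}-selfsigned
    kind: Issuer

cert-manager then creates and renews the TLS secret, which the webhook or scanner can mount as it does today.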
I highly recommend that Lacework adopt helm unittest within the CI/CD framework. This will help prevent issues like #239.
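A minimal sketch of a helm-unittest test file (e.g. tests/daemonset_test.yaml inside the chart; the set values are placeholders):

suite: daemonset rendering
templates:
  - daemonset.yaml
tests:
  - it: renders a DaemonSet
    set:
      laceworkConfig.accessToken: test-token
    asserts:
      - isKind:
          of: DaemonSet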
The doc says the lacework-agent chart is supported on k8s only up to 1.22. However, this issue seems to indicate that Docker was the only blocker for moving to 1.23, and that has been fixed.
We are considering k8s 1.26 and really wish to use lacework-agent chart there. Could you verify the highest supported k8s version and update the doc accordingly if necessary? Thanks!
I'm looking into switching away from Docker as our container runtime in preparation for upgrading to Kubernetes 1.23, which no longer has dockershim.
However, I see that the lacework-agent Helm chart still hardcodes the Docker socket mount, even though it seems the agent itself supports containerd.
Could the chart be updated to make this path customisable, so it's possible to use with Kubernetes 1.23 or later?
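A minimal sketch of the requested knob (the daemonset.runtimeSocket key and its default are hypothetical):

daemonset:
  # Host path of the container runtime socket,
  # e.g. /var/run/docker.sock or /run/containerd/containerd.sock
  runtimeSocket: /run/containerd/containerd.sock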