coder / enterprise-helm

Operate Coder v1 on Kubernetes

Home Page: https://coder.com/docs/setup/installation

License: GNU General Public License v3.0

Shell 7.85% Smarty 10.72% Makefile 1.18% Go 80.25%
coder helm helm-chart

enterprise-helm's Introduction

Coder Helm Chart


Coder moves developer workspaces to your cloud and centralizes their creation and management. Keep developers in flow with the power of the cloud and a superior developer experience.

The Coder Helm Chart is the best way to install and operate Coder on Kubernetes. It contains all the required components, and can scale to large deployments.

(Screenshot: Coder Dashboard)

Getting Started

⚠️ Warning: This repository may not reflect the latest Coder release. Refer to our installation docs for instructions on installing a tagged release.

View our docs for detailed installation instructions.
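
As a rough sketch only (the exact repository URL, chart name, and flags are in the installation docs, which should be treated as authoritative):

```
# Add the Coder Helm repository and install a tagged release
helm repo add coder https://helm.coder.com
helm repo update

kubectl create namespace coder
helm install coder coder/coder --namespace coder --version <tagged-release>
```

Values from the table below can be supplied with `--values your-values.yaml` on the install or upgrade command.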

Values

Key Type Description Default
certs object Certificate that will be mounted inside Coder services. {"secret":{"key":"","name":""}}
certs.secret.key string Key pointing to a certificate in the secret. ""
certs.secret.name string Name of the secret. ""
coderd object Primary service responsible for all things Coder! {"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchExpressions":[{"key":"app.kubernetes.io/name","operator":"In","values":["coderd"]}]},"topologyKey":"kubernetes.io/hostname"},"weight":1}]}},"alternateHostnames":[],"annotations":{},"builtinProviderServiceAccount":{"annotations":{},"labels":{},"migrate":true},"clientTLS":{"secretName":""},"devurlsHost":"","extraEnvs":[],"extraLabels":{},"image":"","imagePullSecret":"","liveness":{"failureThreshold":30,"initialDelaySeconds":30,"periodSeconds":10,"timeoutSeconds":3},"networkPolicy":{"enable":true},"oidc":{"enableRefresh":false,"redirectOptions":{}},"podSecurityContext":{"runAsGroup":1000,"runAsNonRoot":true,"runAsUser":1000,"seccompProfile":{"type":"RuntimeDefault"}},"proxy":{"exempt":"cluster.local","http":"","https":""},"readiness":{"failureThreshold":15,"initialDelaySeconds":10,"periodSeconds":10,"timeoutSeconds":3},"replicas":1,"resources":{"limits":{"cpu":"1000m","memory":"1Gi"},"requests":{"cpu":"250m","memory":"512Mi"}},"reverseProxy":{"headers":[],"trustedOrigins":[]},"satellite":{"accessURL":"","enable":false,"primaryURL":""},"scim":{"authSecret":{"key":"secret","name":""},"enable":false},"securityContext":{"allowPrivilegeEscalation":false,"readOnlyRootFilesystem":true,"runAsGroup":1000,"runAsNonRoot":true,"runAsUser":1000,"seccompProfile":{"type":"RuntimeDefault"}},"serviceAnnotations":{},"serviceNodePorts":{"http":null,"https":null},"serviceSpec":{"externalTrafficPolicy":"Local","loadBalancerIP":"","loadBalancerSourceRanges":[],"type":"LoadBalancer"},"superAdmin":{"passwordSecret":{"key":"password","name":""}},"tls":{"devurlsHostSecretName":"","hostSecretName":""},"trustProxyIP":false,"workspaceServiceAccount":{"annotations":{},"labels":{}}}
coderd.affinity object Allows specifying an affinity rule for the coderd deployment. The default rule prefers to schedule coderd pods on different nodes, which is only applicable if coderd.replicas is greater than 1. {"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchExpressions":[{"key":"app.kubernetes.io/name","operator":"In","values":["coderd"]}]},"topologyKey":"kubernetes.io/hostname"},"weight":1}]}}
coderd.alternateHostnames list A list of hostnames that coderd (including satellites) will allow for OIDC. If this list is not set, all OIDC traffic will go to the configured access URL in the admin settings on the dashboard (or the satellite's primary URL as configured by Helm). []
coderd.annotations object Apply annotations to the coderd deployment. https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ {}
coderd.builtinProviderServiceAccount object Customize the built-in Kubernetes provider service account. {"annotations":{},"labels":{},"migrate":true}
coderd.builtinProviderServiceAccount.annotations object A KV mapping of annotations. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ {}
coderd.builtinProviderServiceAccount.labels object Add labels to the service account used for the built-in provider. {}
coderd.builtinProviderServiceAccount.migrate bool Will migrate the built-in workspace provider using the coder environment. true
coderd.clientTLS object Client-side TLS configuration for coderd. {"secretName":""}
coderd.clientTLS.secretName string Secret containing a PEM encoded cert file. ""
coderd.devurlsHost string Wildcard hostname to allow matching against custom-created dev URLs. Leaving as an empty string results in DevURLs being disabled. ""
coderd.extraEnvs list Add additional environment variables to the coderd deployment containers. Overriding any environment variables that the Helm chart sets automatically is unsupported and will result in undefined behavior. You can find a list of the environment variables we set by default by inspecting the helm template files or by running kubectl describe against your existing coderd deployment. https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/ []
coderd.extraLabels object Allows specifying additional labels to pods in the coderd deployment (.spec.template.metadata.labels). {}
coderd.image string Injected by Coder during release. ""
coderd.imagePullSecret string The secret used for pulling the coderd image from a private registry. ""
coderd.liveness object Configure the liveness check for the coderd service. {"failureThreshold":30,"initialDelaySeconds":30,"periodSeconds":10,"timeoutSeconds":3}
coderd.networkPolicy object Configure the network policy to apply to coderd. {"enable":true}
coderd.networkPolicy.enable bool Manage a network policy for coderd using Helm. If false, no policies will be created for the Coder control plane. true
coderd.podSecurityContext object Fields related to the pod's security context (as opposed to the container). Some fields are also present in the container security context, which will take precedence over these values. {"runAsGroup":1000,"runAsNonRoot":true,"runAsUser":1000,"seccompProfile":{"type":"RuntimeDefault"}}
coderd.podSecurityContext.runAsGroup int Sets the group id of the pod. For security reasons, we recommend using a non-root group. 1000
coderd.podSecurityContext.runAsNonRoot bool Requires that containers in the pod run as an unprivileged user. If setting runAsUser to 0 (root), this will need to be set to false. true
coderd.podSecurityContext.runAsUser int Sets the user id of the pod. For security reasons, we recommend using a non-root user. 1000
coderd.podSecurityContext.seccompProfile object Sets the seccomp profile for the pod. If set, the container security context setting will take precedence over this value. {"type":"RuntimeDefault"}
coderd.proxy object Whether Coder should initiate outbound connections using a proxy. {"exempt":"cluster.local","http":"","https":""}
coderd.proxy.exempt string Bypass the configured proxy rules for this comma-delimited list of hosts or prefixes. This corresponds to the no_proxy environment variable. "cluster.local"
coderd.proxy.http string Proxy to use for HTTP connections. If unset, coderd will initiate HTTP connections directly. This corresponds to the http_proxy environment variable. ""
coderd.proxy.https string Proxy to use for HTTPS connections. If this is not set, coderd will use the HTTP proxy (if set), otherwise it will initiate HTTPS connections directly. This corresponds to the https_proxy environment variable. ""
coderd.readiness object Configure the readiness check for the coderd service. {"failureThreshold":15,"initialDelaySeconds":10,"periodSeconds":10,"timeoutSeconds":3}
coderd.replicas int The number of Kubernetes Pod replicas. Consider increasing replicas as you add more nodes and more users are accessing Coder. 1
coderd.resources object Kubernetes resource specification for coderd pods. To unset a value, set it to "". To unset all values, set resources to nil. Consider increasing resources as more users are accessing Coder. {"limits":{"cpu":"1000m","memory":"1Gi"},"requests":{"cpu":"250m","memory":"512Mi"}}
coderd.reverseProxy object Whether Coder should trust proxy headers for inbound connections, important for ensuring correct IP addresses when an Ingress Controller, service mesh, or other Layer 7 reverse proxy are deployed in front of Coder. {"headers":[],"trustedOrigins":[]}
coderd.reverseProxy.headers list A list of trusted headers. []
coderd.reverseProxy.trustedOrigins list A list of IPv4 or IPv6 subnets to consider trusted, specified in CIDR format. If hosts are part of a matching network, the configured headers will be trusted; otherwise, coderd will rely on the connecting client IP address. []
coderd.satellite object Deploy a satellite to geodistribute access to workspaces for lower latency. {"accessURL":"","enable":false,"primaryURL":""}
coderd.satellite.accessURL string URL of the satellite that clients will connect to. e.g. https://sydney.coder.myorg.com ""
coderd.satellite.enable bool Run coderd as a satellite pointing to a primary deployment. Satellites enable low-latency access to workspaces all over the world. Read more: https://coder.com/docs/coder/latest/admin/satellites false
coderd.satellite.primaryURL string URL of the primary Coder deployment. Must be accessible from the satellite and clients. e.g. https://coder.myorg.com ""
coderd.scim.authSecret.key string The key of the secret that contains the SCIM auth header. "secret"
coderd.scim.authSecret.name string Name of a secret used to determine the auth header for the SCIM server. The value should be contained in the field secret, or in the field specified by authSecret.key. ""
coderd.scim.enable bool Enable SCIM support in coderd. SCIM allows you to automatically provision/deprovision users. If true, authSecret.name must be set. false
coderd.securityContext object Fields related to the container's security context (as opposed to the pod). Some fields are also present in the pod security context, in which case these values will take precedence. {"allowPrivilegeEscalation":false,"readOnlyRootFilesystem":true,"runAsGroup":1000,"runAsNonRoot":true,"runAsUser":1000,"seccompProfile":{"type":"RuntimeDefault"}}
coderd.securityContext.allowPrivilegeEscalation bool Controls whether the container can gain additional privileges, such as escalating to root. It is recommended to leave this setting disabled in production. false
coderd.securityContext.readOnlyRootFilesystem bool Mounts the container's root filesystem as read-only. It is recommended to leave this setting enabled in production. This will override the same setting in the pod security context. true
coderd.securityContext.runAsGroup int Sets the group id of the pod. For security reasons, we recommend using a non-root group. 1000
coderd.securityContext.runAsNonRoot bool Requires that the coderd and migrations containers run as an unprivileged user. If setting runAsUser to 0 (root), this will need to be set to false. true
coderd.securityContext.runAsUser int Sets the user id of the pod. For security reasons, we recommend using a non-root user. 1000
coderd.securityContext.seccompProfile object Sets the seccomp profile for the migration and runtime containers. {"type":"RuntimeDefault"}
coderd.serviceAnnotations object Apply annotations to the coderd service. https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ {}
coderd.serviceNodePorts object Allows manually setting static node ports for the coderd service. This is only helpful if static ports are required, and usually should be left alone. By default these are dynamically chosen. {"http":null,"https":null}
coderd.serviceNodePorts.http string Sets a static 'coderd' service non-TLS nodePort. This should usually be omitted. nil
coderd.serviceNodePorts.https string Sets a static 'coderd' service TLS nodePort. This should usually be omitted. nil
coderd.serviceSpec object Specification to inject for the coderd service. See: https://kubernetes.io/docs/concepts/services-networking/service/ {"externalTrafficPolicy":"Local","loadBalancerIP":"","loadBalancerSourceRanges":[],"type":"LoadBalancer"}
coderd.serviceSpec.externalTrafficPolicy string Set the traffic policy for the service. See: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip "Local"
coderd.serviceSpec.loadBalancerIP string Set the IP address of the coderd service. ""
coderd.serviceSpec.loadBalancerSourceRanges list Traffic through the LoadBalancer will be restricted to the specified client IPs. This field will be ignored if the cloud provider does not support this feature. []
coderd.serviceSpec.type string Set the type of Service. See: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types "LoadBalancer"
coderd.superAdmin.passwordSecret.key string The key of the secret that contains the super admin password. "password"
coderd.superAdmin.passwordSecret.name string Name of a secret that should be used to determine the password for the super admin account. The password should be contained in the field password, or the manually specified one. ""
coderd.tls object TLS configuration for coderd. These options will override dashboard configuration. {"devurlsHostSecretName":"","hostSecretName":""}
coderd.tls.devurlsHostSecretName string The secret to use for DevURL TLS. ""
coderd.tls.hostSecretName string The secret to use for TLS. ""
coderd.trustProxyIP bool Configures Coder to accept X-Real-IP and X-Forwarded-For headers from any origin. This option is deprecated and will be removed in a future release. Use the coderd.reverseProxy setting instead, which supports configuring an allowlist of trusted origins. false
coderd.workspaceServiceAccount object Customize the default service account used for workspaces. {"annotations":{},"labels":{}}
coderd.workspaceServiceAccount.annotations object A KV mapping of annotations. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ {}
coderd.workspaceServiceAccount.labels object Add labels to the service account used for workspaces. {}
envbox object Required for running Docker inside containers. See requirements: https://coder.com/docs/coder/latest/admin/workspace-management/cvms {"image":""}
envbox.image string Injected by Coder during release. ""
ingress object Configure an Ingress to route traffic to Coder services. {"annotations":{"nginx.ingress.kubernetes.io/proxy-body-size":"0"},"className":"","enable":false,"host":"","tls":{"enable":false}}
ingress.annotations object Additional annotations to add to the Ingress object. The behavior is typically dependent on the Ingress Controller implementation, and useful for managing features like TLS termination. {"nginx.ingress.kubernetes.io/proxy-body-size":"0"}
ingress.className string The ingressClassName to set on the Ingress. ""
ingress.enable bool A boolean controlling whether to create an Ingress. false
ingress.host string The hostname to proxy to the Coder installation. The cluster Ingress Controller typically uses server name indication or the HTTP Host header to route traffic. The dev URLs hostname is specified in coderd.devurlsHost. ""
ingress.tls object Configures TLS settings for the Ingress. TLS certificates are specified in coderd.tls.hostSecretName and coderd.tls.devurlsHostSecretName. {"enable":false}
ingress.tls.enable bool Determines whether the Ingress handles TLS. false
logging object Configures the logging format and output of Coder. {"human":"/dev/stderr","json":"","splunk":{"channel":"","token":"","url":""},"stackdriver":"","verbose":true}
logging.human string Location to send logs that are formatted for readability. Set to an empty string to disable. "/dev/stderr"
logging.json string Location to send logs that are formatted as JSON. Set to an empty string to disable. ""
logging.splunk object Coder can send logs directly to Splunk in addition to file-based output. {"channel":"","token":"","url":""}
logging.splunk.token string Splunk HEC collector token. ""
logging.splunk.url string Splunk HEC collector endpoint. ""
logging.stackdriver string Location to send logs that are formatted for Google Stackdriver. Set to an empty string to disable. ""
logging.verbose bool Toggles coderd debug logging. true
metrics object Configure various metrics to gain observability into Coder. {"amplitudeKey":""}
metrics.amplitudeKey string Enables telemetry pushing to Amplitude. Amplitude records how users interact with Coder, which is used to improve the product. No events store any personal information. Amplitude can be found here: https://amplitude.com/ Keep empty to disable. ""
postgres.connector string Option for configuring the database connector type. Valid values are: "postgres" (the default connector) and "awsiamrds" (uses the AWS IAM account in the environment to authenticate to an RDS instance via IAM). "postgres"
postgres.database string Name of the database that Coder will use. You must create this database first. ""
postgres.default object Configure a built-in PostgreSQL deployment. {"annotations":{},"enable":true,"image":"","networkPolicy":{"enable":true},"resources":{"limits":{"cpu":"250m","memory":"1Gi"},"requests":{"cpu":"250m","memory":"1Gi","storage":"10Gi"}},"storageClassName":""}
postgres.default.annotations object Apply annotations to the default postgres service. https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ {}
postgres.default.enable bool Deploys a PostgreSQL instance. We recommend using an external PostgreSQL instance in production. If true, all other values are ignored. true
postgres.default.image string Injected by Coder during release. ""
postgres.default.networkPolicy object Configure the network policy to apply to the built-in PostgreSQL deployment. {"enable":true}
postgres.default.networkPolicy.enable bool Manage a network policy for PostgreSQL using Helm. If false, no policies will be created for the built-in database. true
postgres.default.resources object Kubernetes resource specification for the PostgreSQL pod. To unset a value, set it to "". To unset all values, set resources to nil. {"limits":{"cpu":"250m","memory":"1Gi"},"requests":{"cpu":"250m","memory":"1Gi","storage":"10Gi"}}
postgres.default.resources.requests.storage string Specifies the size of the volume claim for persisting the database. "10Gi"
postgres.default.storageClassName string Set the storageClass to store the database. ""
postgres.host string Host of the external PostgreSQL instance. ""
postgres.noPasswordEnv bool If enabled, passwordSecret will be specified as a volumeMount and the env DB_PASSWORD_PATH will be set instead to point to that location. The default behaviour is to set the environment variable DB_PASSWORD to the value of the postgres password secret. false
postgres.passwordSecret string Name of an existing secret in the current namespace with the password of the PostgreSQL instance. The password must be contained in the secret field password. This should be set to an empty string if the database does not require a password to connect. ""
postgres.port string Port of the external PostgreSQL instance. ""
postgres.searchPath string Optional. Schema for coder tables in the external PostgreSQL instance. This changes the 'search_path' client configuration option (https://www.postgresql.org/docs/current/runtime-config-client.html). By default, the 'public' schema will be used. ""
postgres.ssl object Options for configuring the SSL cert, key, and root cert when connecting to Postgres. {"certSecret":{"key":"","name":""},"keySecret":{"key":"","name":""},"rootCertSecret":{"key":"","name":""}}
postgres.ssl.certSecret object Secret containing a PEM encoded cert file. {"key":"","name":""}
postgres.ssl.certSecret.key string Key pointing to a certificate in the secret. ""
postgres.ssl.certSecret.name string Name of the secret. ""
postgres.ssl.keySecret object Secret containing a PEM encoded key file. {"key":"","name":""}
postgres.ssl.keySecret.key string Key pointing to a certificate in the secret. ""
postgres.ssl.keySecret.name string Name of the secret. ""
postgres.ssl.rootCertSecret object Secret containing a PEM encoded root cert file. {"key":"","name":""}
postgres.ssl.rootCertSecret.key string Key pointing to a certificate in the secret. ""
postgres.ssl.rootCertSecret.name string Name of the secret. ""
postgres.sslMode string Provides variable levels of protection for the PostgreSQL connection. For acceptable values, see: https://www.postgresql.org/docs/11/libpq-ssl.html "require"
postgres.user string User of the external PostgreSQL instance. ""
services object Kubernetes Service configuration that applies to Coder services. {"annotations":{},"clusterDomainSuffix":".svc.cluster.local","nodeSelector":{"kubernetes.io/arch":"amd64","kubernetes.io/os":"linux"},"tolerations":[],"type":"ClusterIP"}
services.annotations object A KV mapping of annotations. See: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ DEPRECATED -- Please use the annotations value for each object. {}
services.clusterDomainSuffix string Custom domain suffix for DNS resolution in your cluster. See: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ ".svc.cluster.local"
services.nodeSelector object See: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector {"kubernetes.io/arch":"amd64","kubernetes.io/os":"linux"}
services.tolerations list Each element is a toleration object. See: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ []
services.type string See the following for configurable types: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types "ClusterIP"
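
To tie the values above together, a minimal values.yaml for a small deployment with an external database might look like the following. The hostnames and secret names are placeholders, not defaults:

```yaml
coderd:
  devurlsHost: "*.coder.example.com"
  replicas: 2
  tls:
    hostSecretName: "coder-tls"
    devurlsHostSecretName: "coder-devurls-tls"

postgres:
  default:
    enable: false          # disable the built-in database
  host: "postgres.internal.example.com"
  port: "5432"
  user: "coder"
  database: "coder"
  passwordSecret: "postgres-password"   # secret must contain the field "password"
  sslMode: "require"
```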

Contributing

Thanks for considering a contribution to this Chart! Please see CONTRIBUTING.md for our conventions and practices.

Support

If you experience issues, have feedback, or want to ask a question, open an issue or pull request in this repository. Feel free to contact us instead.

Copyright and License

Copyright (C) 2020-2022 Coder Technologies Inc.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.

enterprise-helm's People

Contributors

ammario, cmoog, coadler, deansheather, dependabot[bot], emyrk, ericpaulsen, f0ssel, jawnsy, johnstcn, jsjoeio, kylecarbs, lilshoff, sharkymark, sreya, tychoish, wethinkagile


enterprise-helm's Issues

specify image pull secrets for envbuilder image

We are hosting the envbuilder image, with some modifications, in our private registry, which requires authentication. I need a way to specify the image pull secrets for this image, but I can't figure out how to do it, since the image name is specified as an ENV.

Is there a way to do this already that I am missing? If not, can it be added?

Workspace terminals can crash Kubernetes nodes or other pods

It is possible to deploy a workspace pod and use that pod to fill up storage on the local node, which in turn can either crash the node and/or other pods running on that node, depending on how volumes are planned on deployment.

Steps to re-create:

Follow the instructions to deploy Coder using the EKS environment:
https://coder.com/docs/coder/v1.23/setup/kubernetes/aws

Once the environment is online:

  1. SSH to Kubernetes nodes.
  2. In the node(s), run df -h to show the current storage. Note specifically the root / partition size.
  3. In Coder, running on this cluster, launch a new workspace.
  4. Open the Terminal in the workspace.
  5. In the workspace terminal, run df -h to show current pod storage. Note specifically the root / partition and /etc/hosts storage size. These should reflect the same size as the root in step 2 above.
  6. Take the available storage from step 5 and divide it by half.
  7. In the workspace terminal, run the following command: sudo fallocate -l <size> /testfile.img

    Note: The <size> in the fallocate command should be determined from step 6.
  8. In the workspace terminal, determine the file was created and meets the size set: ls -lah /testfile.img
  9. In the workspace terminal, run df -h to show current pod storage. The available space should now be half of what it was originally.
  10. In the node terminal, run df -h to show current storage. It should now be reduced by half as well.

What's happening: By default, the /var/lib/docker directory is attached to the root volume. As there's no storage limits on the pod when it is created, it has full access to the available storage of the volume containing the container, in this case the root directory of the host node.

To help prevent crashing the host node, additional storage volume can be created and mapped to /var/lib/docker. However, this would not resolve the issue with the pod having full access to the volume to max it out. This could potentially break existing pods or prevent new pods from being deployed on the node or nodes affected.

Solution: Since the pod uses a PVC for storage for the home directory, there is no need to allow directories outside of home to be wide open to use up available storage on the nodes. Instead they should be limited to the total amount of storage by setting requests and limits for local ephemeral storage when creating the workspace pod. The only way to break the node volumes from this point would be to create a significant number of workspaces and purposefully max out each root volume.

More details here:
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage
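
For illustration, the Kubernetes mechanism referenced above looks like this in a workspace pod's container spec (the sizes here are arbitrary examples, not recommendations):

```yaml
resources:
  requests:
    ephemeral-storage: "2Gi"
  limits:
    ephemeral-storage: "4Gi"   # pod is evicted if it exceeds this
```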

It may be worthwhile to have a means to detect workspaces that are using storage outside of the home directory and stop them if a certain size is detected.

CA Cert injection is broken

From 1.18.1 -> 1.19.1, CA cert injection is broken. Our BitBucket Git OAuth provider uses a self-signed TLS CA. Now, when trying to link an account, there is a TLS CA error. Skimming the YAML here, it looks like mounting of the CA bundle was dropped in #59.

Allow configuring enable-underscores-in-headers for nginx ingress in values.yaml

The nginx ingress that we ship with will by default drop headers that contain underscores. This can be undesirable when using the devurls feature.

A workaround for the time being is to pull down the chart, modify https://github.com/cdr/enterprise-helm/blob/master/templates/ingress.yaml#L26 to include enable-underscores-in-headers: "true", and then run helm upgrade with the modified chart. The end result should look like:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: {{ .Release.Namespace | quote }}
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  enable-underscores-in-headers: "true"
  proxy-set-headers: "{{ .Release.Namespace }}/custom-headers"
```
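
The workaround above might be applied along these lines (the repository and chart names are illustrative; substitute your own):

```
helm pull coder/coder --untar
# edit templates/ingress.yaml in the unpacked chart to add
#   enable-underscores-in-headers: "true"
helm upgrade coder ./coder --namespace coder
```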

Add option to set AWS_STS_REGIONAL_ENDPOINTS variable

Hello,

Currently, when using ECR images with Coder and IRSA, coderd will authenticate with sts.amazonaws.com. However, in some settings, such as a VPC without an internet gateway, we do not always have access to this URL. For these purposes, we use the regional endpoint, which would be enabled with a VPC endpoint; for instance, in eu-west-1, sts.eu-west-1.amazonaws.com.

Using the regional endpoint is done automatically by adding an environment variable AWS_STS_REGIONAL_ENDPOINTS=regional in the env attributes of coderd.

For instance, we could have a value coderd.awsStsRegionalEndpoints, and then in the coderd.yaml file add the following in the env section:

```yaml
{{- if .Values.coderd.awsStsRegionalEndpoints }}
- name: AWS_STS_REGIONAL_ENDPOINTS
  value: "regional"
{{- end }}
```
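
Under this proposal, enabling the behavior from values would presumably look like this (note that awsStsRegionalEndpoints is a proposed value, not one the chart currently supports):

```yaml
coderd:
  awsStsRegionalEndpoints: true
```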

I have already tested by manually adding the environment variable in the deployment on an existing configuration and can confirm that it resolves the issue.

We could set the value awsStsRegionalEndpoints to false by default to ensure backward compatibility.
Is this something you could implement?

I could also open an MR if you want.

Allow for NodeSelector as Admin Defined and User Option?

I'm currently evaluating Coder and so far it's great! It definitely beats manually provisioning workspaces.

I had a few questions and some minor issues.

Environment

  • Provider: aws-eks
  • K8s Version: 1.21
  • Coder Helm Version: 1.29.1

In our cluster, we use ASGs, and specifically for GPUs, we separate them by the instance-type size as well as the GPU type.

Example

ASG 1: T4-XL
- g4dn.xlarge
  - Node Labels: compute-role:gpu, compute-size:xlarge, gpu-type:t4

ASG 2: A10G-XL
- g5.xlarge
  - Node Labels: compute-role:gpu, compute-size:xlarge, gpu-type:a10g

ASG 3: Mixed-XL
- g4dn.xlarge
- g5.xlarge
  - Node Labels: compute-role:gpu, compute-size:xlarge, gpu-type:mixed

Questions:

  1. Are there any future plans to allow the admin to specify node-selectors/taints based on images? For CUDA-enabled images, we would pre-select the node-selectors and taints to ensure that the image gets properly provisioned on a GPU node, rather than a CPU node.

  2. Follow-on, would it be possible to allow users to specify the node-selectors/taints when creating workspaces without using a template? (if option is enabled by admin)

  3. Is there a way to adjust/specify the session-timeout for OIDC? Currently it seems like the limit is 60 mins before refresh kicks in and requires reauth.


Issues:

I was trying to have the node-selector modified by using a template that specified compute-type:gpu, compute-role:coder, but within the provider settings, only compute-role:coder is defined.

(screenshot)

However, after testing the template, and subsequently deleting it, several workspaces that were provisioned afterwards retained the nodeSelectors that were defined only in the template itself, rather than sticking strictly with the provider specified one.

(screenshot)

In Template Policy, I do have write enabled for node-selector so I wonder if that's what's causing the issue.

Thanks!

Ingress Dev Url bugs and fixes

A quick helm dry run gives me:

```yaml
  rules:
  - host: "coder.theDomain.io"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: coderd
            port:
              name: tcp-coderd
  - host: "*.theDomain.io"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: coderd
            port:
              name: tcp-coderd
  tls:
    - hosts:
      - "code.theDomain.io"
      secretName: "coder-theDomain-io-tls"
```

In the above snippet I would have expected the wildcard TLS host entry to be created as well. Looking at the Ingress template, I found these ambiguities:

Line 40: .Values.coderd.devurlsHost
Line 57: .Values.devurls

Next, the docs say one needs to create a separate ingress manually. A user-created ingress will most likely conflict. I think this is a documentation bug/ambiguity and should be addressed.

Next, there is no option to set the ingress class name.
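For reference, a `networking.k8s.io/v1` Ingress selects its controller via `spec.ingressClassName`; the template would need to render something like the following, driven by a chart value that does not exist today (hypothetical):

```yaml
spec:
  ingressClassName: nginx   # no chart value currently maps to this field
```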

See my pull requests addressing a few of these issues:
PR #232 created.
PR #233 created.

RBAC assignments are too broad for OpenShift

Hi team.

I've been trying to install Coder v1 in OpenShift 4 with a trial license using oc + helm.
I ran into the following error:

Error: UPGRADE FAILED: failed to create resource: roles.rbac.authorization.k8s.io "coder" is forbidden: user "my-user" (groups=["MY AD GROUPS" "system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:                                                                                                                   
{APIGroups:[""], Resources:["deployments"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                            
{APIGroups:[""], Resources:["networkpolicies"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                        
{APIGroups:[""], Resources:["pods/log"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                               
{APIGroups:["apps"], Resources:["events"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                             
{APIGroups:["apps"], Resources:["networkpolicies"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                    
{APIGroups:["apps"], Resources:["persistentvolumeclaims"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                             
{APIGroups:["apps"], Resources:["pods"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                               
{APIGroups:["apps"], Resources:["pods/exec"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                          
{APIGroups:["apps"], Resources:["pods/log"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                           
{APIGroups:["apps"], Resources:["secrets"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                            
{APIGroups:["apps"], Resources:["serviceaccounts"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                    
{APIGroups:["apps"], Resources:["services"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                           
{APIGroups:["networking.k8s.io"], Resources:["deployments"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                           
{APIGroups:["networking.k8s.io"], Resources:["events"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                
{APIGroups:["networking.k8s.io"], Resources:["persistentvolumeclaims"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                
{APIGroups:["networking.k8s.io"], Resources:["pods"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                                  
{APIGroups:["networking.k8s.io"], Resources:["pods/exec"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                             
{APIGroups:["networking.k8s.io"], Resources:["pods/log"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                              
{APIGroups:["networking.k8s.io"], Resources:["secrets"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                               
{APIGroups:["networking.k8s.io"], Resources:["serviceaccounts"], Verbs:["create" "update" "delete" "deletecollection"]}                                                                                                                       
{APIGroups:["networking.k8s.io"], Resources:["services"], Verbs:["create" "update" "delete" "deletecollection"]

My user has namespace-admin privileges, so I should be able to create a ServiceAccount with most of the privileges here.
I had to tweak templates/rbac.yml like this to make it work:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: coder
  namespace: {{ .Release.Namespace | quote }}
  labels:
    app.kubernetes.io/name: {{ .Chart.Name }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    app.kubernetes.io/component: {{ include "coder.serviceName" . }}
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["persistentvolumeclaims", "pods", "services", "secrets", "pods/exec", "events", "serviceaccounts"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete", "deletecollection"]

  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]

  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete", "deletecollection"]

  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete", "deletecollection"]

  - apiGroups: ["metrics.k8s.io", "storage.k8s.io"]
    resources: ["pods", "storageclasses"]
    verbs: ["get", "list", "watch"]

I think the problem is that the defaults are too broad, and I'm also not sure all of these permissions actually exist in OpenShift (e.g. create on pods/log).

It would be nice to have this customizable in values.yaml, as we do for security contexts, so we don't have to maintain a custom chart with these changes.
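A hypothetical values.yaml shape for this (the `rbac.rules` key does not exist in the chart today; this is only what I would imagine it looking like):

```yaml
rbac:
  rules:                     # hypothetical key: override the Role's rules wholesale
    - apiGroups: [""]
      resources: ["pods", "services", "secrets"]
      verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
```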

function "lookup" not defined

This is Helm 3 on Kubernetes 1.20.

$ helm repo add coder https://helm.coder.com
$ helm install coder coder/coder --namespace coder
Error: parse error at (coder/templates/envproxy.yaml:10): function "lookup" not defined
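For what it's worth, the `lookup` template function was only added in Helm 3.2, so a parse error like the one above usually means an older Helm 3 client. A quick check of the client version (guarded in case helm is not on the PATH):

```shell
# "lookup" requires Helm >= 3.2; confirm the client version before debugging the chart.
command -v helm >/dev/null 2>&1 && helm version --short || echo "helm not installed"
```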

Add CONTRIBUTING.md

It might be nice to have a basic CONTRIBUTING.md so others know how to contribute (future me at least)

Database updates or failovers can break Coder

When using a production database, running updates on the database could break Coder and prevent users from being able to use the service.

Example:

  1. Create a highly available database.
  2. Connect Coder to the database using the database cluster endpoint.
  3. Perform a failover of the HA database, for example during a software update.

What's happening: From time to time during a database failover, Coder will maintain the connection to the database and will not release it once the failover has occurred, preventing access to the database. The only way to solve this problem is to delete the coderd pods and wait for them to come back online.

This is problematic, as scheduled maintenance may occur over the weekend and Coder may not release the database connection. While the database update and/or testing may have worked as expected, anyone who went to use Coder would find the service unavailable.

It would be beneficial if Coder recognized when it can no longer read/write from the database, timed out the connection automatically, and retried. If it still cannot reach the database after several attempts, it should log the behavior so it can easily be captured and alerted on.

Unused PVCs remain after long periods of time

When a user creates a workspace, it creates an associated PVC for the pod to use and store data.

In the use case we're seeing, a workspace has been abandoned or left unused for a long period of time. As a result, the PVCs remain detached and incur cost within the cloud environment.

It would be useful to have a way to list old workspaces and perform maintenance on them. It would also be useful to notify/alert users of old workspaces and/or automatically remove workspaces that have been unused for 30 days, 90 days, etc. This feature could be added to the Account Dormancy page.
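In the meantime, a manual sweep can surface candidates: list PVCs in the namespace that no running pod currently mounts. A sketch, assuming the `coder` namespace (review the output before deleting anything):

```shell
# List PVCs in the coder namespace that no pod currently mounts (review only).
used=$(kubectl get pods -n coder \
  -o jsonpath='{.items[*].spec.volumes[*].persistentVolumeClaim.claimName}' \
  | tr ' ' '\n' | sort -u)
for pvc in $(kubectl get pvc -n coder -o name | cut -d/ -f2); do
  echo "$used" | grep -qx "$pvc" || echo "unused: $pvc"
done
```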

TLS certificates not associated with Ingress

Today, I spun up a new instance using Helm chart version 1.22.

Here is the values.yaml I used for deployment:

coderd:
  replicas: 3
  serviceSpec:
    type: ClusterIP
    externalTrafficPolicy: null
    loadBalancerIP: null
    loadBalancerSourceRanges: null
  trustProxyIP: true  
  devurlsHost: "*.<domainName>"
  tls:
    devurlsHostSecretName: coder-certs
    hostSecretName: coder-certs

postgres:
  host: "<databaseDNS>"
  port: 5432
  user: "<databaseUser>"
  database: "<databaseName>"
  passwordSecret: "coder-pg-password"
  sslMode: "require"

  default:
    enable: false

logging:
  json: /dev/stderr

Upon deployment, an Ingress was generated in the Coder namespace with the following configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: coderd-ingress
  namespace: coder
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: coder
    meta.helm.sh/release-namespace: coder
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
status:
  loadBalancer:
    ingress:
      - hostname: <snip>.us-east-2.elb.amazonaws.com
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: coderd
                port:
                  number: 80
    - host: '*.<snip>'
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: coderd
                port:
                  number: 80

After reviewing the codebase, I discovered there is a section in the ingress.tpl template file which is supposed to assign the expected TLS variables, but it doesn't look like it's being called correctly in this case:

{{- define "coder.ingress.tls" }}
{{- if (merge .Values dict | dig "ingress" "tls" "enable" false) }}
  tls:
    {{- if and .Values.ingress.host .Values.ingress.tls.hostSecretName }}
    - hosts:
      - {{ .Values.ingress.host | quote }}
      secretName: {{ .Values.ingress.tls.hostSecretName }}
    {{- end }}
    {{- if .Values.devurls }}
    {{- if and .Values.devurls.host .Values.ingress.tls.devurlsHostSecretName }}
    - hosts:
      - {{ include "movedValue" (dict "Values" .Values "Key" "coderd.devurlsHost") }}
      secretName: {{ .Values.ingress.tls.devurlsHostSecretName }}
    {{- end }}
    {{- end }}
{{- end }}
{{- end }}

Additionally, there is a slight problem I wished to bring up. The way the Ingress deploys, it sets a wildcard for the host unless otherwise specified. It would be beneficial if this behavior were changed to allow values.yaml to be populated with a defined domain name and use it to populate the devurl, or to allow both to be populated separately.

To get around these issues, I created a separate ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: coderd-ingress-new
  namespace: coder
  labels:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
status:
  loadBalancer:
    ingress:
      - hostname: <snip>.us-east-2.elb.amazonaws.com
spec:
  tls:
    - hosts:
        - <domainName>
        - '*.<domainName>'
      secretName: coder-certs
  rules:
    - host: <domainName>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: coderd
                port:
                  number: 80
    - host: '*.<domainName>'
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: coderd
                port:
                  number: 80

Default resources are too small

We seem to have

resources:
  requests:
    cpu: "250m"
    memory: "512Mi"
  limits:
    cpu: "250m"
    memory: "512Mi"

Should start with

resources:
  requests:
    cpu: "1000m"
    memory: "1Gi"
  limits:
    cpu: "1000m"
    memory: "1Gi"

Even 50 users can max out a core if they use web proxying extensively.

Provide ARM64 support

Followed the install steps (using helm 3)

kubectl create namespace coder
kubectl config set-context --current --namespace=coder
helm repo add coder https://helm.coder.com/
helm install coder coder/coder --namespace coder --version=1.30.0

Result

NAME PF IMAGE READY STATE INIT
timescale ● docker.io/coderenvs/timescale:1.30.0 false CrashLoopBackOff false

log

standard_init_linux.go:228: exec user process caused: exec format error
Stream closed EOF for coder/timescale-0 (timescale)
