drupalwxt / helm-drupal

Helm chart for running Drupal on Kubernetes

Home Page: https://drupalwxt.github.io/helm-drupal/index.yaml

License: MIT License

PHP 87.09% Makefile 0.18% Shell 0.41% Mustache 10.47% Smarty 1.85%
charts drupal helm kubernetes

helm-drupal's People

Contributors

bernardmaltais, davidheerema, diamondshark, drupalwxt-svc, joshuacox, markwooff, mgifford, nathanpw, patheard, rikarux, ryanhyma, spotzero, stemirabo, sylus, vitaliss, zachomedia

helm-drupal's Issues

ValidationError(Deployment.spec.template.spec.volumes[1]): unknown field

With the current build, and most recent helm:
helm install drupaltest -f values-nfs-azurefile.yaml --wait --timeout 5m --namespace drupaltest .
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.volumes[1]): unknown field "items" in io.k8s.api.core.v1.Volume

Removing the items element from the drupal.yaml allows the deployment to start, but is probably not a proper fix.
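
For context, in the core/v1 API "items" is only valid inside a volume source such as configMap or secret, never on the Volume object itself. A minimal sketch of the distinction (assuming the offending volume is ConfigMap-backed; the ConfigMap name is hypothetical):

volumes:
  - name: settings
    configMap:
      name: drupal-settings   # hypothetical ConfigMap name
      items:                  # valid here, inside the configMap source
        - key: settings.php
          path: settings.php
  # Placing items directly under the volume entry instead triggers:
  #   unknown field "items" in io.k8s.api.core.v1.Volume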

Error in themes/custom/wxt_bootstrap/templates/system/page.html.twig

After deploying the Drupal container, the following error is displayed:
The website encountered an unexpected error. Please try again later.

Twig\Error\SyntaxError: The block 'header' has already been defined line 120. in Twig\TokenParser\BlockTokenParser->parse() (line 138 of themes/custom/wxt_bootstrap/templates/system/page.html.twig).

The directory /var/www/files_private does not exist.

FILE SYSTEM Writable (public download method) The directory /var/www/files_private does not exist. An automated attempt to create this directory failed, possibly due to a permissions problem. To proceed with the installation, either create the directory and modify its permissions manually or ensure that the installer has the permissions to create it automatically. For more information, see INSTALL.txt or the online handbook.

This is probably an easy fix, but gets in the way of the local install.
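
A minimal workaround sketch, assuming you can exec into the running Drupal pod (the deployment name, container name, and UID/GID 82 for the Alpine www-data user are assumptions):

kubectl -n <namespace> exec deploy/drupal -c drupal -- \
  sh -c 'mkdir -p /var/www/files_private && chown -R 82:82 /var/www/files_private'

A durable fix would create the directory (or mount a writable volume there) in the chart itself rather than by hand.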

Error: unable to build kubernetes objects from release manifest

I have attempted to deploy drupalwxt using the helm chart with the following command:

helm install gocweb --namespace gocweb ./drupal -f ./drupal/values.yaml

but I run into this error:

helm install gocweb --namespace gocweb ./drupal -f ./drupal/values.yaml
Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2", unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"]

I am running helm version:
version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.6"}

on kubernetes v1.17.3 using k3d:

k3d version v1.7.0
k3s version v1.17.3-k3s1
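
Deployments under apps/v1beta2 and extensions/v1beta1 were removed in Kubernetes 1.16, so on 1.17 the rendered manifests need apps/v1. A sketch of the change the templates would need:

# Before (removed in Kubernetes 1.16+):
apiVersion: apps/v1beta2   # or extensions/v1beta1
kind: Deployment

# After:
apiVersion: apps/v1
kind: Deployment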

Backup script will capture the DB cache and will create huge backup files as a result

Hello guys,

I am looking at a way to feed an ignore tables list to the backup job to prevent exporting useless tables like cache. At the moment I don't see a way to do this in the chart short of not enabling backup and creating my own cronjob outside the helm chart... but we are using the chart because everything needed is available in there... so it would defeat the purpose.

Is this something you had on the roadmap? How do you handle not backing up caches in the DB for your own deployment?

I might propose an idea. How about adding an optional value to the backup section where we could capture extra drush -y sql-dump options? That way we could specify which tables to ignore with:

# Add extra options to the drush -y sql-dump command
extra_options: --skip-tables-list=cache,cache_*
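
On the template side, the cronjob could splice the value into the dump command; a sketch (backup.extra_options is the proposed value, not an existing chart key):

drush -y sql-dump {{ .Values.backup.extra_options | default "" }} | gzip > /backup/$BACKUPNAME/db.sql.gz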

If you are interested I could submit a PR with the proposed changes to support the feature?

Bernard

Install fails when using NFS and defaults

helm install drupaltest -f values-nfs-azurefile.yaml --wait --timeout 20m --namespace drupaltest .

The pods come up but the installation fails with the following:
// You are about to DROP all tables in your 'wxt' database. Do you want to
// continue?: yes.

[notice] Starting Drupal installation. This takes a while.

In FileSystem.php line 506:

  File 'modules/contrib/video_embed_field/modules/video_embed_media/images/icons/video.png'
  could not be copied because a file by that name already exists in the
  destination directory ('').

varnish pod crashloopback

Trying to make use of the recently updated varnish template results in a pod CrashLoopBackOff with the following error:

Error:

Message from VCC-compiler:
Backend host '"drupal-wxt-varnish-drupal"' could not be resolved to an IP address:
    Name or service not known
    (Sorry if that error message is gibberish.)
    ('/etc/varnish/default.vcl' Line 4 Pos 11)
    .host = "drupal-wxt-varnish-drupal";
    ----------###########################-

In backend specification starting at:
    ('/etc/varnish/default.vcl' Line 3 Pos 1)
    backend default {
    #######----------

Running VCC-compiler failed, exited with 2
VCL compilation failed
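
The compile failure means the hostname in the generated VCL backend does not resolve inside the cluster. A quick check (the namespace is an assumption based on the release name):

kubectl -n drupal-wxt get svc | grep drupal

If the backend Service is named differently than what the template renders (here drupal-wxt-varnish-drupal), the .host in the varnish configuration needs to point at the actual Service, ideally fully qualified, e.g. .host = "drupal-wxt-drupal.drupal-wxt.svc.cluster.local";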

Error encountered during module installation

Hello, we are attempting to configure the Helm Chart Drupal WxT for development purposes and ran into trouble when installing modules. Could this be related to a configuration issue in our installation? This error also appeared during an unrelated process and failed in the same script but at a different line number. Thank you.

Here are the details:
Extend > Install new module
config_devel-8x-1.8.tar.gz

Notice: Undefined index: persistent in Drupal\redis\ClientFactory::getClient() (line 190 of /var/www/html/modules/contrib/redis/src/ClientFactory.php).The website encountered an unexpected error. Please try again later.
RuntimeException: Failed to start the session because headers have already been sent by "/var/www/html/core/includes/errors.inc" at line 285. in Symfony\Component\HttpFoundation\Session\Storage\NativeSessionStorage->start() (line 152 of /var/www/vendor/symfony/http-foundation/Session/Storage/NativeSessionStorage.php).
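
The first notice suggests the redis client settings are read before a persistent key is defined. A hedged workaround sketch via the chart's extraSettings (the exact settings key ClientFactory reads is an assumption inferred from the stack trace):

extraSettings: |-
  # Explicitly define the key so the "Undefined index: persistent" notice never fires.
  $settings['redis.connection']['persistent'] = FALSE;

The subsequent session RuntimeException is likely a knock-on effect of the notice being printed before headers were sent.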

Alex

Drupal Log in

Hello, we had another team install Drupal in Azure, but they did not provide our team with credentials. Examining the Azure YAML file, the user name is listed as admin and the password is commented out. Would it have been possible to install drupalwxt/helm-drupal with those defaults? I'm not familiar with Drupal in Azure, but I imagine they could simply redeploy the YAML file? Thank you.
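
If the password was never recorded, it can usually be reset from inside the running pod rather than by redeploying; a sketch (namespace, deployment, and container names are assumptions):

kubectl -n <namespace> exec deploy/drupal -c drupal -- drush user:password admin 'new-password-here'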

Using an Azure MySQL DB is twice as slow as an AKS hosted MySQL DB

Good afternoon Will and Zach,

This is not an issue with the helm chart per se... but rather a question regarding the recommendation to use a CSP PaaS DB vs a K8s pod-based DB for production. I just tried migrating our site to an Azure MySQL Flexible instance and I am finding that the site is less responsive... about half as responsive. The Azure MySQL PaaS instance is using a Private Endpoint on the same vnet as the AKS cluster running the Drupal solution. The MySQL PaaS instance is also super beefy... as big as the node pool running the whole Drupal solution ;-) and yet it still performs poorly (not that the pod-based DB was fast either).

Have you experienced the same at Stats Canada? Do you have any idea why this would be the case?

Regards,

Bernard

Varnish, $settings['trusted_host_patterns'] and HTTP 400 errors

A gotcha I just discovered is that if you're using Varnish and have Drupal trusted_host_patterns set, you'll need to add a host pattern for your Drupal pod.

In our case, the Helm release is called drupal-wxt-dev, so this does the trick:

varnish:
  enabled: true

extraSettings: |-
  $settings['trusted_host_patterns'] = ['^somedomain\.com$', '^drupal-wxt-dev.*$'];

Happy to submit a PR documenting this if you let me know where you'd like it.

Issue deploying the chart

I am trying to deploy the chart with:

kubectl create namespace drupal-wxt
helm install drupal-wxt --namespace gocweb -f values.yaml --wait .

but I am getting the following error:

helm install drupal-wxt --namespace gocweb -f values.yaml --wait .
Error: template: drupal/charts/minio/templates/deployment.yaml:210:20: executing "drupal/charts/minio/templates/deployment.yaml" at <(not .Values.gcsgateway.enabled) (not .Values.azuregateway.enabled) (not .Values.s3gateway.enabled) (not .Values.b2gateway.enabled)>: can't give argument to non-function not .Values.gcsgateway.enabled

Looks like it is possibly related to minio?
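
Go-template's not takes a single argument, so the minio template line as quoted cannot parse; the likely-intended form wraps the negations in and. A sketch of the fix (not the shipped template):

{{- if and (not .Values.gcsgateway.enabled) (not .Values.azuregateway.enabled) (not .Values.s3gateway.enabled) (not .Values.b2gateway.enabled) }}

Upgrading the bundled minio sub-chart to a version carrying this fix is the other obvious route.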

Persistent storage and environment specific variable question

I have a question, or am looking for confirmation, on my understanding of the persistent storage mechanisms available to Drupal via these helm charts.

Context: We need to be able to differentiate between environments (think dev/test/prod) for a Drupal migration. The migrations contain environment-specific variables that we need to set and access within the containers. Things like URLs, database connections, etc. are different in dev/test/prod. We have mechanisms to set the variables for each environment via our deployment tooling.

Concern: Our main concern is that, depending on where these are set (say, in code within the containers), they may not persist if the containers get destroyed and come back up in Kubernetes (at least in my understanding).

Assumptions/Understanding:

  • Files in the container are not persistent. Any changes made to files at runtime in Kubernetes (except on the mounted persistent file storage) will be lost if the containers get destroyed and spin back up.
  • Linux environment variables within the containers will not persist, nor will they propagate to all containers in a multi-container/replicated setup.
  • The database is persistent. It could be used (thinking Drupal config) and can be set with the post-install or post-update methods.
  • File storage (mounted storage) is persistent. Although not ideal for this context, it could be used and set with post-install or post-update methods.
  • Settings.php is persistent in the sense that it can be modified via "extraSettings" and will be re-applied if the container gets destroyed in Kubernetes and comes back up.

I am pretty sure about the above, but would like to confirm (especially the settings.php behaviour; see the sketch below) and to know whether there are other persistent storage mechanisms we are unaware of that could be used for this kind of context (persistent environment variables available in the containers).
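
One approach that is persistent by construction is to bake environment-specific values into the release and read them in settings.php, since extraSettings is re-rendered on every deploy. A sketch (the DEPLOY_ENV variable name is an assumption; whether the chart passes extra env vars through to the pods should be verified):

extraSettings: |-
  # Values baked into the release survive pod restarts because they are
  # re-rendered into settings.php on every deploy.
  $settings['deployment_environment'] = getenv('DEPLOY_ENV') ?: 'dev';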

Appreciate any input/suggestion or feedback and all the work gone into these charts!

Ability to keep last install/upgrade job pods

I have had many occurrences where a Drupal upgrade fails due to some issue with drush cim or drush updb. The current job will be deleted after 5 failed executions. Would it be possible to retain the last job pod to facilitate log reviews? I think the parameter for that might be changing:

  annotations:
    "helm.sh/hook": post-upgrade
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": hook-succeeded

to

  annotations:
    "helm.sh/hook": post-upgrade
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": before-hook-creation

What do you think? Possibility?

The directory /private is not writable

I had a similar issue with the redis container on my setup; I needed to change its stanza to:

redis:
  enabled: true
  persistence:
    enabled: true
    storageClass: openebs-lvmpv
    size: 8Gi
  volumePermissions:
    enabled: true

full values.yaml

Is there a similar volumePermissions stanza I can add to the drupal container? I did try adding that exact stanza under the drupal section, with no luck.

I am using the lvm-localpv driver for openEBS. But all looks to be okay there:

➜  drupal git:(master) ✗ kubectl get sc openebs-lvmpv
NAME                      PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
openebs-lvmpv (default)   local.csi.openebs.io   Delete          Immediate           false                  78m
➜  drupal git:(master) ✗ kubectl get pvc              
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
drupal-drupal                      Bound    pvc-d1d2b924-5138-4225-8fc6-1ca42d2ef3f2   8Gi        RWO            openebs-lvmpv   12m
drupal-mysql                       Bound    pvc-ec368551-cc28-48eb-b22e-edf41c971fcc   8Gi        RWO            openebs-lvmpv   12m
drupal-nginx                       Bound    pvc-c90ca6d5-427d-487a-9c6b-fa88fdd05de6   8Gi        RWO            openebs-lvmpv   12m
redis-data-drupal-redis-master-0   Bound    pvc-73180e38-cb7b-4a60-bbc6-1e32e78fbae3   8Gi        RWO            openebs-lvmpv   12m
redis-data-drupal-redis-slave-0    Bound    pvc-109dea85-656d-4d41-baff-0e94d45c55d9   8Gi        RWO            openebs-lvmpv   12m
redis-data-drupal-redis-slave-1    Bound    pvc-6dd2c58b-8373-4bab-9aee-e5033c58b745   8Gi        RWO            openebs-lvmpv   11m
➜  drupal git:(master) ✗ kubectl get pv 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                      STORAGECLASS    REASON   AGE
pvc-109dea85-656d-4d41-baff-0e94d45c55d9   8Gi        RWO            Delete           Bound    default/redis-data-drupal-redis-slave-0    openebs-lvmpv            12m
pvc-6dd2c58b-8373-4bab-9aee-e5033c58b745   8Gi        RWO            Delete           Bound    default/redis-data-drupal-redis-slave-1    openebs-lvmpv            11m
pvc-73180e38-cb7b-4a60-bbc6-1e32e78fbae3   8Gi        RWO            Delete           Bound    default/redis-data-drupal-redis-master-0   openebs-lvmpv            12m
pvc-c90ca6d5-427d-487a-9c6b-fa88fdd05de6   8Gi        RWO            Delete           Bound    default/drupal-nginx                       openebs-lvmpv            12m
pvc-d1d2b924-5138-4225-8fc6-1ca42d2ef3f2   8Gi        RWO            Delete           Bound    default/drupal-drupal                      openebs-lvmpv            12m
pvc-ec368551-cc28-48eb-b22e-edf41c971fcc   8Gi        RWO            Delete           Bound    default/drupal-mysql                       openebs-lvmpv            12m
➜  drupal git:(master) ✗ kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
drupal-765f49855f-lf5vc                   3/3     Running   0          17m
drupal-mysql-7b7769b55d-n24sd             2/2     Running   0          17m
drupal-nginx-6f57fc6dd6-zw794             2/2     Running   0          17m
drupal-redis-master-0                     2/2     Running   0          17m
drupal-redis-slave-0                      2/2     Running   0          17m
drupal-redis-slave-1                      2/2     Running   0          16m
drupal-varnish-787c7f8cfc-6bs77           2/2     Running   0          17m

It seems to me that whatever the redis pod is doing with:

  volumePermissions:
    enabled: true

the drupal pod needs to do as well? I assume a chown or chmod, or perhaps a change of user.
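
For reference, Bitnami-style volumePermissions runs an init container that chowns the mounted volume before the main container starts; an equivalent for the Drupal pod would look roughly like this (the UID/GID 82 for Alpine's www-data user and the volume/mount names are assumptions):

initContainers:
  - name: volume-permissions
    image: busybox:1.36
    # Make the private files mount writable by the app user before startup.
    command: ["sh", "-c", "chown -R 82:82 /private"]
    volumeMounts:
      - name: files-private
        mountPath: /private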

Sending Drupal logs to syslog

Good afternoon @sylus,

I wonder if you have ever tried to configure your Drupal site to send logs to syslog instead of the MySQL DB? We have tried at our end and can't see any logs in the pod when doing this.

Any ideas?

Bernard

Samples folder

Any thoughts on a samples folder for the values config files?
I have some sample value files that contain security fixes based on kube-scan that could be beneficial.
Every config is different though so it is hard to add to the defaults.

Issue with nginx docker images

I am seeing that nginx runs as root in my containers. Is it possible to update the nginx docker images so they don't run as root?
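
A non-root nginx generally needs a securityContext plus an image whose temp/cache paths are writable by the unprivileged user; a sketch of the pod-level piece (uid 101 is what common unprivileged nginx images use, an assumption here):

securityContext:
  runAsNonRoot: true
  runAsUser: 101
  runAsGroup: 101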

varnish will not validate when used in values file

I tried to add the varnish component by configuring the values.yaml file as shown in the example values files. Unfortunately it results in a strange error, as seen below:

vscode ➜ /tf/caf (master ✗) $ helm upgrade drupal-wxt --namespace drupal-wxt --reuse-values -f values-azure-upgrade.yaml drupalwxt/drupal
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "annotations" in io.k8s.api.core.v1.PodSpec

The values for varnish I tried to use are the following. I even tried adding an annotations section but no luck:

## Configuration values for the Varnish dependency sub-chart
## ref: https://github.com/StatCan/charts/blob/master/stable/varnish/README.md
varnish:
  enabled: true
  varnishd:
    image: varnish
    tag: 6.4.0
    imagePullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 80
  resources: {}
  #  requests:
  #    memory: "512Mi"
  #    cpu: "100m"
  #  limits:
  #    memory: "1Gi"
  #    cpu: "500m"
  annotations: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
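
The validation error points at the chart template rather than these values: annotations are only valid under the pod's metadata, not under its spec. A sketch of the distinction (not the chart's actual template):

spec:
  template:
    metadata:
      annotations: {}    # valid location for pod annotations
    spec:
      # annotations: {}  # invalid: unknown field in io.k8s.api.core.v1.PodSpec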

Any ideas?

How does stderr stdout logging work?

How does stderr/stdout logging work, particularly for NGINX and PHP-FPM? Is this a feature these charts provide?

We noticed that the nginx logs in the Drupal container (/var/log/nginx) are symlinked to /dev/stderr and /dev/stdout. We were curious whether there is any best practice, direction, or config to manage logs, particularly around retention time and how to access/export them (onto, say, a file system for analysis). We typically access the logs via Lens, but wonder whether they are written somewhere and retained/rotated.

Hopefully these questions make sense, as we aren't really helm/k8s experts.

I did read some of this: https://kubernetes.io/docs/concepts/cluster-administration/logging/ which suggests there should be "cluster-level logging"; I'm just not sure how (or if) this is implemented by these charts, or whether it is up to the user/implementer.
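
Because those files are symlinks to the container's stdout/stderr, the log lines land in the normal Kubernetes log stream; retention and rotation are then handled by the kubelet/log driver or a cluster-level agent, not by the chart. A quick way to confirm (pod and container names are assumptions):

kubectl -n <namespace> exec <drupal-pod> -c nginx -- ls -l /var/log/nginx
kubectl -n <namespace> logs <drupal-pod> -c nginx --tail=50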

Backup cronjob fail due to tar file reporting file changes during backup

I ran into a similar issue as what @Stemirabo reported during restore and fixed via a patch.

I have fixed my issue with a patch for the cronjob script... but ultimately, I think the whole script should be made customizable via the values files. That would require quite a bit of work and would be a breaking change... so I did not bother creating a pull request for it.

I think discussing long-term possibilities for making the script more customizable is needed to better plan how it should be implemented.

Here is a copy of the patch I created to fix the problem:

spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: drush
              command:
                - /bin/sh
                - '-c'
                - |
                  # Errors should fail the job
                  set -e
                  set +x

                  # Wait for DB to be available
                  until drush sql:query 'SHOW GLOBAL STATUS LIKE "Uptime";'; do echo Waiting for DB; sleep 3; done;
                  echo DB available;

                  # Check Drush status
                  drush status

                  # Run cron
                  BACKUPNAME=$(date +%Y%m%d.%H%M%S)
                  mkdir -p /backup/$BACKUPNAME
                  echo "Backup folder name: $BACKUPNAME"

                  echo "Backup DB"
                  drush -y sql-dump --skip-tables-list=cache,cache_* | gzip > /backup/$BACKUPNAME/db.sql.gz
                  echo "...backup DB completed."

                  echo "Backup public files"
                  set +e
                  tar --exclude="./js" --exclude="./css" --exclude="./styles" -czf /backup/$BACKUPNAME/files.tar.gz --directory=sites/default/files .
                  echo "...backup public files completed."
                  exitcode=$?
                  if [ "$exitcode" != "1" ] && [ "$exitcode" != "0" ];
                  then
                    exit $exitcode
                  fi
                  set -e

                  echo "Backup private files"
                  tar --exclude="./config" -czf /backup/$BACKUPNAME/private.tar.gz --directory=/private .

                  echo "...backup private files completed."

I guess we could run into the same issue with the private files backup... so I might have to add the fix there too...

I also added a bunch of echo statements to better see when things complete, and changed how I detect when the DB is ready. Full script customization would be good, since each team using the chart might need slightly different backup and restore scripts.

Cron jobs and timezone

Hello,

I had some questions regarding setting the time zone for our cronjobs, as we have some that are time-sensitive and subject to daylight-saving changes. For example, if we wish to run a cronjob at 8:30 AM every day, then depending on whether it's EST or EDT we need to adjust the schedule by one hour, because Kubernetes runs on UTC, which doesn't account for this.

I tried changing the alpine containers' timezone to EST using tzdata, and the associated schedules to EST/EDT. This works; however, the cronjobs still run at UTC time even though the containers are in EST.

Does this mean the Kubernetes cron controller (or whatever kicks off the cronjobs at the Kubernetes level) starts the crons in UTC regardless of container timezone?
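
Yes: the CronJob controller evaluates schedules against the control plane's clock (UTC on virtually all clusters), regardless of the container image's tzdata. Kubernetes has since added a native timeZone field for CronJobs (beta in 1.24, stable in 1.27); on a new enough cluster the chart's cronjobs could set it, sketched here:

apiVersion: batch/v1
kind: CronJob
spec:
  schedule: "30 8 * * *"        # 8:30 local time; DST is handled by the zone
  timeZone: "America/Toronto"   # requires Kubernetes 1.24+ (1.27 for GA)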

drupal-site-install - Errors setting file permissions

The install does work, but the log output does not look right; it may not be important:

[warning] chmod(): Operation not permitted FileSystem.php:236
[success] Installation complete.
real 3m 55.02s
user 1m 24.62s
sys 0m 15.01s

// Do you want to update wxt.theme key in wxt_library.settings config?: yes.

[success] Cache rebuild complete.
[error] The file permissions could not be set on public://218x291.png.

1/13 [==>-------------------------] 7% [error] The file permissions could not be set on public://265x352.png.

2/13 [====>-----------------------] 15% [error] The file permissions could not be set on public://355x113.png.

3/13 [======>---------------------] 23% [error] The file permissions could not be set on public://360x203.png.

4/13 [========>-------------------] 30% [error] The file permissions could not be set on public://520x296.png.

5/13 [==========>-----------------] 38% [error] The file permissions could not be set on public://653x194-1.png.

6/13 [============>---------------] 46% [error] The file permissions could not be set on public://653x194-2.png.

7/13 [===============>------------] 53% [error] The file permissions could not be set on public://653x194-3.png.

8/13 [=================>----------] 61% [error] The file permissions could not be set on public://750x222-1.png.

9/13 [===================>--------] 69% [error] The file permissions could not be set on public://750x222-2.png.

10/13 [=====================>------] 76% [error] The file permissions could not be set on public://1170x347-1.png.

11/13 [=======================>----] 84% [error] The file permissions could not be set on public://1170x347-2.png.

12/13 [=========================>--] 92% [error] The file permissions could not be set on public://1170x347-3.png.

13/13 [============================] 100% [notice] Processed 13 items (13 created, 0 updated, 0 failed, 0 ignored) - done with 'wxt_file'
[notice] Processed 0 items (0 created, 0 updated, 0 failed, 0 ignored) - done with 'wxt_file'

1/2 [==============>-------------] 50%
2/2 [============================] 100% [notice] Processed 2 items (2 created, 0 updated, 0 failed, 0 ignored) - done with 'wxt_node_page'

1/13 [==>-------------------------] 7%
2/13 [====>-----------------------] 15%
3/13 [======>---------------------] 23%
4/13 [========>-------------------] 30%
5/13 [==========>-----------------] 38%
6/13 [============>---------------] 46%
7/13 [===============>------------] 53%
8/13 [=================>----------] 61%
9/13 [===================>--------] 69%
10/13 [=====================>------] 76%
11/13 [=======================>----] 84%
12/13 [=========================>--] 92%
13/13 [============================] 100% [notice] Processed 13 items (13 created, 0 updated, 0 failed, 0 ignored) - done with 'wxt_media'

1/3 [=========>------------------] 33%
2/3 [==================>---------] 66%
3/3 [============================] 100% [notice] Processed 3 items (3 created, 0 updated, 0 failed, 0 ignored) - done with 'wxt_media_slideshow'

1/4 [=======>--------------------] 25%
2/4 [==============>-------------] 50%
3/4 [=====================>------] 75%
4/4 [============================] 100% [notice] Processed 4 items (4 created, 0 updated, 0 failed, 0 ignored) - done with 'gcweb_block'

1/3 [=========>------------------] 33%
2/3 [==================>---------] 66%
3/3 [============================] 100% [notice] Processed 3 items (3 created, 0 updated, 0 failed, 0 ignored) - done with 'gcweb_block_spotlight'

1/2 [==============>-------------] 50%
2/2 [============================] 100% [notice] Processed 2 items (2 created, 0 updated, 0 failed, 0 ignored) - done with 'gcweb_node_landing_page'

3/38 [==>-------------------------] 7%
6/38 [====>-----------------------] 15%
9/38 [======>---------------------] 23%
12/38 [========>-------------------] 31%
15/38 [===========>----------------] 39%
18/38 [=============>--------------] 47%
21/38 [===============>------------] 55%
24/38 [=================>----------] 63%
27/38 [===================>--------] 71%
30/38 [======================>-----] 78%
33/38 [========================>---] 86%
36/38 [==========================>-] 94%
38/38 [============================] 100% [notice] Processed 38 items (38 created, 0 updated, 0 failed, 0 ignored) - done with 'gcweb_menu_link'

Azure fileshare on Kubernetes 1.19 bug

Discovered a little bug when deploying to Kubernetes 1.19 on AKS with an Azure fileshare: in your values.yaml file, lines 286 and 293 need to append the secretNamespace, or it looks in the default namespace. Existing code:

  secretName: drupal-storage
  shareName: drupal-public
  readOnly: false

fix is to include the secretNamespace attr:

  secretName: drupal-storage
  secretNamespace: {active-ns}
  shareName: drupal-public
  readOnly: false

Drupal 7 redis documentation and chart versions don't align

Steps to recreate:

Expected results: the documentation should match the chart.

Problem: the chart and documentation specify different versions.

Drupal 7 redis.cluster.enabled is now mandatory?

This is more of a question, maybe an issue... We recently tried to update our D7 site and saw the following error:

Error: template: drupal7/templates/job/post-upgrade-reconfigure.yaml:78:32: executing "drupal7/templates/job/post-upgrade-reconfigure.yaml" at <.Values.redis.cluster.enabled>: nil pointer evaluating interface {}.enabled

We had to specify redis.cluster.enabled as false, which we didn't have to do before in our yaml; the values sketch below shows the workaround.
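
For anyone hitting the same nil-pointer error, the workaround in values form (matching what we had to add):

redis:
  cluster:
    enabled: false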

I noticed the d7 and d9 charts seem to diverge on this:

https://github.com/drupalwxt/helm-drupal/blob/master/drupal7/templates/job/post-install-site-install.yaml#L78

https://github.com/drupalwxt/helm-drupal/blob/master/drupal/templates/job/post-install-site-install.yaml#L78

I didn't see any mention in the changelog about redis updates or having to specify this value now in d7... I guess I am wondering whether you are aware of this and whether it is correct or an issue. I apologize if this doesn't make sense; helm isn't my native language. 📦

P.S. Appreciate you reading this and all the hard work that goes into this chart.

Remove requirement for lightning_core to be enabled.

While attempting to deploy to Prod, I'm running into an error where Lightning is trying to update. Since the minimal version doesn't require lightning_core to be enabled, I'd like to remove the requirement (drush -y update:lightning) that it be enabled. That way it can be manually added to extraInstallScripts for the apps that still make use of Lightning. If that would break too many apps, could a way of disabling update:lightning be devised?

Helm chart insists on manually creating PVs with specific volume names for public and private

I ran into an issue where the current code forces the creation of PVs with specific volume names. This causes problems when trying to use a custom StorageClass like:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-premium-sscplus
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS
reclaimPolicy: Delete
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=82
  - gid=82
  - mfsymlinks
  - nobrl
  - cache=none
allowVolumeExpansion: true
volumeBindingMode: Immediate

I created a new pull request to allow the disabling of volumeNames for the shareddisk and azurefile PVC:

#116

I also discovered that the current sharedDisk implementation appears to be broken... There is only a private shareddisk PV yaml file, and when trying to use the default shareddisk values the deployment keeps failing. Disabling the volumeName and ensuring shareddisk-private.yaml is not used fixed the issue for me.

This allows me to move the public and private PVs to the faster azurefile-csi-premium driver and avoid creating fileshares in a storage account. The azurefile-csi-premium driver offers a significant performance gain over the previous azurefile-on-a-storage-account method.

I still use an azurefile backup PV to store backups.

Using new CSI shared disk instead of AzureFile

@spotzero I just saw this potential replacement for AzureFile: https://docs.microsoft.com/en-us/azure/aks/azure-disk-csi#shared-disk

Have you ever looked at it? This might make the whole public/private/backup setup much cleaner. It also appears to be block storage... so restoring and accessing files might be much faster.

Is there a way to use existing PVC for public and private files? Like it is done for backup? I have not been able to switch yet to properly test.

drush query to see if network and database are available for additionalCrons

Hello,

Would it be possible to implement the following code for "additionalCrons", so that cron containers can confirm network and database connectivity before running:

# Wait for DB and network to be available
until drush sql:query 'SHOW TABLES;'; do echo Waiting for DB; sleep 3; done
echo DB available

I believe the failures are due to latency between the proxy and cron containers' initialization?

Thanks.

Error: couldn't find key postgresql-password in Secret wxtdrupal/wxtdrupal-release-postgresql

Hi team,

Having issues after enabling postgresql:

mysql:
  enabled: false

postgresql:
  enabled: true

1s          Normal    Pulled                  pod/wxtdrupal-release-5ccdb5dd59-7vhb7                      Container image "drupalwxt/site-wxt:4.3.3" already present on machine
1s          Warning   Failed                  pod/wxtdrupal-release-5ccdb5dd59-7vhb7                      Error: couldn't find key postgresql-password in Secret wxtdrupal/wxtdrupal-release-postgresql
1s          Normal    Pulled                  pod/wxtdrupal-release-site-install-h2rn2                    Container image "drupalwxt/site-wxt:4.3.3" already present on machine
1s          Warning   Failed                  pod/wxtdrupal-release-site-install-h2rn2                    Error: couldn't find key postgresql-password in Secret wxtdrupal/wxtdrupal-release-postgresql

kubectl logs -f wxtdrupal-release-postgresql-0 -n wxtdrupal

postgresql 18:13:20.91 INFO  ==> ** PostgreSQL setup finished! **

postgresql 18:13:20.94 INFO  ==> ** Starting PostgreSQL **
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 1, near token "listenAddresses:"
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 2, near token "maxConnections:"
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 3, near token "sharedBuffers:"
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 4, near token "workMem:"
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 5, near token "effectiveCacheSize:"
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 6, near token "maintenanceWorkMem:"
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 7, near token "minWalSize:"
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 8, near token "maxWalSize:"
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 9, near token "walBuffers:"
2022-08-22 18:13:20.956 GMT [1] LOG:  syntax error in file "/opt/bitnami/postgresql/conf/postgresql.conf" line 10, near token "byteaOutput:"
2022-08-22 18:13:20.956 GMT [1] FATAL:  configuration file "/opt/bitnami/postgresql/conf/postgresql.conf" contains errors

Workaround:

  1. Change "postgresql-password" to "password" in the templates that reference the secret, e.g.:

{{- else if .Values.postgresql.enabled }}
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "{{ .Release.Name }}-postgresql"
              key: password

  2. Comment out lines #555 - #565 in the values.yaml.

Drupal podAnnotations and resources

Are you open to a small PR that adds podAnnotations and resources to the Drupal deployment? My use case is that I'm trying to meet the Gatekeeper policies I've assigned to the cluster for AppArmor and resource limits.
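
For reference, the shape such a PR might add to the values file (hypothetical keys, mirroring common chart conventions):

drupal:
  podAnnotations:
    container.apparmor.security.beta.kubernetes.io/drupal: runtime/default
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 1Gi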

Backup options?

What would be required to make specific parts (db, public, private) of the backup action optional?

Documentation when complete

The instructions shown when you successfully run the site should say something about scrolling up. The instructions for "3. Optionally run the site installation through Drush" look enough like error messages that it took me a while to figure out there was useful information at the top.

I can also load the site here - http://127.0.0.1:8080 - but it would be useful if the documentation said where to load the page in the browser.

There would obviously be slightly different instructions if this were deployed on Amazon, but the default instructions should be for a local install.
