Helm chart to install the Alfresco Activiti Enterprise (AAE) infrastructure to model and deploy your process applications:
- Alfresco Identity Service
- Modeling Service
- Modeling App
- Deployment Service
- Admin App
- Transformation (Tika) Service
Once installed, you can deploy new AAE applications:
- via the Admin App using the Deployment Service
- by manually customising the alfresco-process-application helm chart.
For all the available values, see the chart README.md.
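For the manual route, a deployment could be sketched as follows (the release name and namespace are illustrative; the exact values the alfresco-process-application chart expects are documented in its own README):

```shell
# Illustrative names only -- adjust for your application.
APP_RELEASE_NAME=my-process-app
APP_CHART=alfresco-process-application
APP_NAMESPACE=aae

# Assemble the command first so it can be reviewed (and --dry-run) before running.
HELM_CMD="helm upgrade --install --wait \
  --namespace $APP_NAMESPACE \
  --repo https://kubernetes-charts.alfresco.com/stable \
  $APP_RELEASE_NAME $APP_CHART"
echo "$HELM_CMD"
```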
Set up a Kubernetes cluster following your preferred procedure.
Install the latest version of helm.
An ingress-nginx controller should be installed and bound to an external DNS address, for example:
helm upgrade --install --wait --repo https://kubernetes.github.io/ingress-nginx ingress-nginx ingress-nginx
For any helm command, verify the output first with the --dry-run option, then execute it without.
To install from the development chart repo, use alfresco-incubator rather than alfresco as the CHART_REPO variable.
Check deployment progress with kubectl get pods --watch --all-namespaces until all containers are running. If anything is stuck, check events with kubectl get events --watch.
export DESIRED_NAMESPACE=${DESIRED_NAMESPACE:-aae}
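The ${VAR:-default} form keeps an existing value and falls back to the default otherwise, so exporting DESIRED_NAMESPACE beforehand lets you pick a different namespace:

```shell
# No value set yet: falls back to the default.
unset DESIRED_NAMESPACE
NAMESPACE=${DESIRED_NAMESPACE:-aae}
echo "$NAMESPACE"        # aae

# A pre-set value wins over the default.
DESIRED_NAMESPACE=my-ns
NAMESPACE=${DESIRED_NAMESPACE:-aae}
echo "$NAMESPACE"        # my-ns
```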
Configure access to pull images from quay.io in the installation namespace:
kubectl create secret docker-registry quay-registry-secret \
--namespace $DESIRED_NAMESPACE \
--docker-server=quay.io \
--docker-username=$QUAY_USERNAME \
--docker-password=$QUAY_PASSWORD
where:
- QUAY_USERNAME is your username on Quay
- QUAY_PASSWORD is your password on Quay
export HELM_OPTS+=" --debug \
--namespace $DESIRED_NAMESPACE \
--set global.gateway.http=$HTTP \
--set global.gateway.domain=$DOMAIN"
where:
- HTTP is true or false depending on whether external URLs should use HTTP or HTTPS
- DOMAIN is your DNS domain
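HELM_OPTS is assembled incrementally throughout this guide with bash's += append operator; each section simply tacks on more flags:

```shell
# Minimal illustration of the accumulation pattern (bash-specific +=).
OPTS=""
OPTS+=" --set global.gateway.http=true"
OPTS+=" --set global.gateway.domain=example.com"
echo "$OPTS"
```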
export PROTOCOL=http
export DOMAIN=host.docker.internal
NB if not already present, add the line 127.0.0.1 host.docker.internal to your /etc/hosts.
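A small idempotent helper (hypothetical, not part of the chart) can add the entry only when it is missing; it targets a temp file here so the sketch is safe to run, but on a real machine you would point it at /etc/hosts with sudo:

```shell
# Hypothetical helper: appends the host entry only if it is not already there.
HOSTS_FILE=$(mktemp)   # use /etc/hosts (with sudo) on a real machine
ensure_hosts_entry() {
  grep -q 'host\.docker\.internal' "$HOSTS_FILE" \
    || echo '127.0.0.1 host.docker.internal' >> "$HOSTS_FILE"
}
ensure_hosts_entry
ensure_hosts_entry   # second call is a no-op
```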
export CLUSTER=aaedev
export PROTOCOL=https
export DOMAIN=$CLUSTER.envalfresco.com
export GATEWAY_HOST=$DOMAIN
export SSO_HOST=$DOMAIN
export HTTP=$(if [[ "$PROTOCOL" == 'http' ]]; then echo true; else echo false; fi)
HELM_OPTS+=" --set global.gateway.http=$HTTP \
--set global.gateway.domain=$DOMAIN"
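The PROTOCOL-to-HTTP mapping above can be factored into a small function for a quick sanity check:

```shell
# "true" for plain http, "false" for anything else (i.e. https).
protocol_to_http() {
  if [ "$1" = http ]; then echo true; else echo false; fi
}
protocol_to_http http    # true
protocol_to_http https   # false
```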
To disable alfresco-deployment-service in the infrastructure:
HELM_OPTS+="
--set alfresco-deployment-service.enabled=false
"
A StorageClass that works across multiple availability zones needs to be available to store project release files for each application:
- for EKS, always use EFS
- for AKS, use AFS only if Multi-AZ is configured
Add the helm values to use it:
HELM_OPTS+="
--set alfresco-deployment-service.projectReleaseVolume.storageClass=${STORAGE_CLASS_NAME} \
--set alfresco-deployment-service.projectReleaseVolume.permission=ReadWriteMany
"
NB To configure the email connector, all of its variables need to be set. When they are set, the Deployment Service uses them as defaults for every application it deploys, and they can no longer be overridden from the Admin App. If you prefer to configure the email connector from the Admin App, do not configure it during the helm deployment.
Add the helm properties to configure email connector:
HELM_OPTS+="
--set alfresco-deployment-service.applications.connectors.emailConnector.username=${email_connector_username}
--set alfresco-deployment-service.applications.connectors.emailConnector.password=${email_connector_password}
--set alfresco-deployment-service.applications.connectors.emailConnector.host=${email_connector_host}
--set alfresco-deployment-service.applications.connectors.emailConnector.port=${email_connector_port}
"
Set install parameters:
export RELEASE_NAME=aae
export CHART_NAME=alfresco-process-infrastructure
then install from the stable repo using a released chart version:
helm upgrade --install --wait \
--repo https://kubernetes-charts.alfresco.com/stable \
$HELM_OPTS $RELEASE_NAME $CHART_NAME
or install a development chart version from the incubator repo:
helm upgrade --install --wait \
--namespace $DESIRED_NAMESPACE \
--repo https://kubernetes-charts.alfresco.com/incubator \
$HELM_OPTS $RELEASE_NAME $CHART_NAME
or from the current repository directory:
helm repo update
helm dependency update helm/${CHART_NAME}
helm upgrade --install --wait \
--namespace $DESIRED_NAMESPACE \
$HELM_OPTS $RELEASE_NAME helm/$CHART_NAME
Both support the following optional vars:
- RELEASE_NAME to handle upgrade or a non auto-generated release name
- HELM_OPTS to pass extra options to helm
Just install/upgrade the AAE infrastructure.
To verify the k8s yaml output:
HELM_OPTS+=" --debug --dry-run" ./install.sh
Verify the k8s yaml output, then launch again without --dry-run.
A custom extra values file to add settings for localhost is provided:
HELM_OPTS+=" -f values-localhost.yaml" ./install.sh
NB the startup might take as much as 10 minutes; use kubectl get pods -A -w to check the status.
Open browser and login to IDS:
open ${SSO_URL}
To read back the realm from the secret, use:
kubectl get secret \
--namespace $DESIRED_NAMESPACE \
realm-secret -o jsonpath="{['data']['alfresco-realm\.json']}" | base64 --decode > alfresco-realm.json
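Kubernetes stores secret data base64-encoded, which is why the output is piped through base64 --decode; a quick local illustration of the round trip:

```shell
# Secret values come back base64-encoded; decoding recovers the original JSON.
encoded=$(printf '%s' '{"realm":"alfresco"}' | base64)
printf '%s' "$encoded" | base64 --decode
```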
In an air gapped environment where the Kubernetes cluster has no direct access to external image repositories, use a tool like helm-image-mirror to tag and push images to your internal registry and modify helm charts with the new image locations.
Modify the file values-external-postgresql.yaml, providing values for your external database for each service, then run:
export HELM_OPTS+=" -f values-external-postgresql.yaml"
Running on GH Actions.
For Dependabot PRs to be validated by CI, the label "CI" should be added to the PR.
Requires the following secrets to be set:
Name | Description |
---|---|
BOT_GITHUB_TOKEN | Token to launch other builds on GH |
BOT_GITHUB_USERNAME | Username to issue propagation PRs |
RANCHER2_URL | Rancher URL to perform helm tests |
RANCHER2_ACCESS_KEY | Rancher access key |
RANCHER2_SECRET_KEY | Rancher secret key |
SLACK_NOTIFICATION_BOT_TOKEN | Token to notify slack on failure |