validatedpatterns / common
License: Apache License 2.0
In some environments we have noticed that the imperative namespace sometimes does not get created on Spoke clusters. The policy in common/acm/templates/policies/acm-hub-ca-policy creates a Secret in the imperative namespace; if the namespace is not created, the ACM policy will fail.
Add the following objectDefinition to the ACM Policy:
...
- complianceType: musthave
  objectDefinition:
    kind: Namespace # must have namespace 'imperative'
    apiVersion: v1
    metadata:
      name: imperative
...
This will create the namespace imperative if it does not exist on the Spoke cluster.
In connection with validatedpatterns/patterns-operator#185, validatedpatterns/common may require a mechanism to support configuring an HTTP proxy for the related Argo CD instances.
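As a rough sketch of what this could look like, assuming the configuration is surfaced through the GitOps operator's ArgoCD custom resource (the instance name and proxy address below are hypothetical; spec.repo.env is the argocd-operator field for injecting environment variables into the repo server, which performs the outbound fetches):

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example-gitops
  namespace: example-gitops-ns
spec:
  repo:
    env:
    - name: HTTP_PROXY
      value: http://proxy.example.com:3128
    - name: HTTPS_PROXY
      value: http://proxy.example.com:3128
    - name: NO_PROXY
      value: .cluster.local,.svc,localhost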
Since yesterday's release of GitOps 1.5.0, clicking on the secondary Argo instance errors out with the following:
Failed to query provider "https://hub-gitops-server-multicloud-gitops-hub.apps.bandini-dc.blueprints.rhecoeng.com/api/dex": Get "http://hub-gitops-dex-server.multicloud-gitops-hub.svc.cluster.local:5556/api/dex/.well-known/openid-configuration": dial tcp 172.30.229.125:5556: connect: connection refused
The logs of the dex container give us the following:
W0421 07:32:56.239809 1 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:multicloud-gitops-hub:hub-gitops-argocd-dex-server" cannot list resource "secrets" in API group "" in the namespace "multicloud-gitops-hub"
E0421 07:32:56.239838 1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User "system:serviceaccount:multicloud-gitops-hub:hub-gitops-argocd-dex-server" cannot list resource "secrets" in API group "" in the namespace "multicloud-gitops-hub"
W0421 08:14:28.521492 1 reflector.go:324] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:multicloud-gitops-hub:hub-gitops-argocd-dex-server" cannot list resource "configmaps" in API group "" in the namespace "multicloud-gitops-hub"
E0421 08:14:28.521517 1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:multicloud-gitops-hub:hub-gitops-argocd-dex-server" cannot list resource "configmaps" in API group "" in the namespace "multicloud-gitops-hub"
It seems that we now need to add some permissions to the hub-gitops-argocd-dex-server service account.
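A minimal sketch of what that could look like, assuming a namespaced Role is sufficient (resource types, verbs, namespace, and service account name are taken from the log messages above; the actual fix may differ):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hub-gitops-argocd-dex-server
  namespace: multicloud-gitops-hub
rules:
# The dex server fails to list/watch these two resource types.
- apiGroups: [""]
  resources: ["secrets", "configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hub-gitops-argocd-dex-server
  namespace: multicloud-gitops-hub
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: hub-gitops-argocd-dex-server
subjects:
- kind: ServiceAccount
  name: hub-gitops-argocd-dex-server
  namespace: multicloud-gitops-hub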
Some operators are not compatible with an OperatorGroup that lists their own namespace under targetNamespaces, which is what we add automatically when we create a new namespace. For example, the performance-addon-operator cannot monitor its own namespace.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operators.coreos.com/v1","kind":"OperatorGroup","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"npss-tnc-hub"},"name":"openshift-performance-addon-operator-operator-group","namespace":"openshift-performance-addon-operator"},"spec":{"targetNamespaces":["openshift-performance-addon-operator"]}}
References:
https://argo-cd.readthedocs.io/en/stable/operator-manual/tls/#tls-certificates-used-by-argocd-server
What is the proper way to inject a Corporate CA into a Validated Pattern such that:
Use case:
Our team has its own utility container for working within our multicloud environment. We have rebased our container onto hybridcloudpatterns/utility-container:latest so we can pick up all required VP tooling.
Our container uses git clone to pull in the validated pattern repo as a subfolder. We include our own Makefile in our DIR_HOME to hand off to the VP's Makefile appropriately.
Containerfile Extract
FROM mycompany.com/hybridcloudpatterns/utility-container:latest
ARG CONTAINER_DIR_HOME=/home/root
# NOTE: Dockerfile ARG does not support Make-style "?="; a plain default is used
ARG MY_REPO_CLI_BRANCH='dev'
ARG MY_REPO_CLI_URL=https://github.mycompany.com/my-utility-container.git
ARG VP_DIR_MULTICLOUD=vp-multicloud-gitops
ARG VP_DIR_VALUES=vp-values
ARG VP_PATTERN_NAME=my-pattern
ARG VP_REPO_MULTICLOUD_BRANCH=main
ARG VP_REPO_MULTICLOUD_URL=https://github.com/validatedpatterns/multicloud-gitops.git
RUN git clone --depth 1 ${VP_REPO_MULTICLOUD_URL} ${CONTAINER_DIR_HOME}/${VP_DIR_MULTICLOUD}; \
rm ${CONTAINER_DIR_HOME}/${VP_DIR_MULTICLOUD}/.github -R; \
rm ${CONTAINER_DIR_HOME}/${VP_DIR_MULTICLOUD}/.gitignore;
COPY ${VP_DIR_VALUES}/ ${CONTAINER_DIR_HOME}/${VP_DIR_MULTICLOUD}/${VP_DIR_VALUES}
COPY ./Makefile ${CONTAINER_DIR_HOME}
ENV KUBECONFIG=${CONTAINER_DIR_HOME}/.kube/config \
VP_DIR_MULTICLOUD=${VP_DIR_MULTICLOUD} \
VP_DIR_VALUES=${VP_DIR_VALUES} \
VP_REPO_MULTICLOUD_BRANCH=${VP_REPO_MULTICLOUD_BRANCH} \
VP_REPO_MULTICLOUD_URL=${VP_REPO_MULTICLOUD_URL} \
VP_PATTERN_NAME=${VP_PATTERN_NAME}
WORKDIR ${CONTAINER_DIR_HOME}
ENTRYPOINT ["sh", "run.sh"]
CMD ["help"]
Makefile
export MY_REPO_CLI_BRANCH ?= 'main'
export MY_REPO_CLI_ORIGIN ?= 'origin'
export MY_REPO_CLI_URL ?= 'https://github.mycompany.com/my-utility-container.git'
export VP_DIR_VALUES ?= 'my-default-vp-values-path'
export VP_DIR_MULTICLOUD ?= 'my-default-vp-mc-path'
export VP_PATTERN_NAME ?= 'my-default-pattern'
export VP_REPO_MULTICLOUD_BRANCH ?= 'main'
export VP_REPO_MULTICLOUD_URL ?= 'https://github.com/validatedpatterns/multicloud-gitops.git'
export NAME ?= ${VP_PATTERN_NAME}
export TARGET_ORIGIN ?= ${MY_REPO_CLI_ORIGIN}
export TARGET_REPO ?= ${MY_REPO_CLI_URL}
export TARGET_BRANCH ?= ${MY_REPO_CLI_BRANCH}
%:
	@$(MAKE) $* -C $(VP_DIR_MULTICLOUD)
Requested Changes
To facilitate the use case above I am requesting modifications be made to common/Makefile as follows:
VP_DIR_VALUES ?= '.'
TARGET_REPO ?= $[….]
TARGET_BRANCH ?= $[….]
The last set of changes I will leave to you, but effectively any VP reference to value files should be prefixed with $(VP_DIR_VALUES).
Example changes I have found so far include common/Makefile lines 39 and 43:
HELM_OPTS=-f $(VP_DIR_VALUES)/values-global.yaml
And line 65:
$(eval CLUSTERGROUP ?= $(shell yq ".main.clusterGroupName" $(VP_DIR_VALUES)/values-global.yaml))
I believe other files are involved in this update as well, such as common/scripts/preview.sh.
We should make sure that if multisource is true, then the very same manifest with sources: that is deployed on the hub is also deployed on the spokes. Currently this is not the case.
This is coming from the TelCo team with whom I am working on creating the npss-tnc community pattern.
We currently support the creation of namespaces. There are instances in which users need to add additional annotations and labels to a namespace manifest. For example:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
The implementation would require us to update the way we define namespaces in our values file. The proposal would be something like this:
namespaces:
- name: namespaceName
  labels:
  - name: labelName
    value: labelValue
  annotations:
  - name: "annotationName"
    value: "annotationValue"
The question for the team is:
I've deployed a validated-patterns based cluster which was derived from multicloud-gitops
based on this repo:
When deploying I was using 'Red Hat OpenShift on IBM Cloud' where:
Here I was using the default route hostname (described here: https://cloud.ibm.com/docs/openshift?topic=openshift-openshift_routes) with an "IBM provided domain".
By default the routes appear to be encrypted with Let's Encrypt certs, including the console, so any re-encrypt/Redirect or edge/Redirect routes have usable public certs.
The adverse impact of this is that by default the External Secrets Operator enforces a particular certificate chain.
The following commit enabled the ESO to function; however, it does decrease the security posture.
butler54/validated-patterns-demos@e427f99
Warning DeprecationNotice 27m ResourceCustomizations is deprecated, please use the new formats `ResourceHealthChecks`, `ResourceIgnoreDifferences`, and `ResourceActions` instead.
Running oc describe on the cluster-wide Argo instance gives the above deprecation notice. We need to fix this (in our common argocd.yaml).
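For reference, the replacement fields named in the deprecation message live on the ArgoCD custom resource; a sketch of what a migrated health-check customization could look like (the group/kind and the Lua check body are purely illustrative):

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
spec:
  # Replaces the health entries formerly under spec.resourceCustomizations
  resourceHealthChecks:
  - group: argoproj.io
    kind: Application
    check: |
      hs = {}
      hs.status = "Healthy"
      hs.message = ""
      return hs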
Hi,
When we have a channel like 4.10, it is translated in Argo as 4.1: the unquoted value is parsed as a YAML float, which drops the trailing zero. As a workaround, quoting the value as "4.10" worked:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-performance-addon-operator-subscription
  namespace: openshift-performance-addon-operator
spec:
  channel: "4.10"
  name: performance-addon-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
It is currently rendering only the listed applications. To get a quick validation of what is being worked on without deploying it, it would be useful to also render the clusterGroup chart, which is the entry point for a pattern.
We should also allow overriding some of the local lookups we do against the cluster here:
platform=$(oc get Infrastructure.config.openshift.io/cluster -o jsonpath='{.spec.platformSpec.type}')
ocpversion=$(oc get clusterversion/version -o jsonpath='{.status.desired.version}' | awk -F. '{print $1"."$2}')
domain=$(oc get Ingress.config.openshift.io/cluster -o jsonpath='{.spec.domain}' | sed 's/^apps.//')
This will allow a simple way to see how things change when a user changes the platform etc.
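One possible shape for the overrides, as a sketch: let an environment variable take precedence over each lookup, falling back to the cluster query only when it is unset (the PREVIEW_* variable names are hypothetical):

platform="${PREVIEW_PLATFORM:-$(oc get Infrastructure.config.openshift.io/cluster -o jsonpath='{.spec.platformSpec.type}')}"
ocpversion="${PREVIEW_OCPVERSION:-$(oc get clusterversion/version -o jsonpath='{.status.desired.version}' | awk -F. '{print $1"."$2}')}"
domain="${PREVIEW_DOMAIN:-$(oc get Ingress.config.openshift.io/cluster -o jsonpath='{.spec.domain}' | sed 's/^apps.//')}"

With this, e.g. PREVIEW_PLATFORM=AWS ./common/scripts/preview.sh would show the AWS rendering without touching the current cluster.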
Some operators require creating an OperatorGroup without targetNamespaces, for example the MetalLB Operator, as described in the installation guide (https://docs.openshift.com/container-platform/4.12/networking/metallb/metallb-operator-install.html).
This is not possible with the current operatorgroup.yaml template.
I'm not sure about the implications, but this patch fixes it, and it creates an OperatorGroup without targetNamespaces by default. Not a big deal, but definitely a change in behavior.
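For reference, this is the shape such an OperatorGroup takes (the name and namespace follow the MetalLB installation guide; omitting spec.targetNamespaces makes the operator watch all namespaces rather than only its own):

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
# No spec.targetNamespaces: with an empty spec the OperatorGroup
# targets all namespaces instead of the one it lives in.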
After deploying a validated pattern, e.g. Edge Anomaly Detection, I can view the ArgoCD Applications in the cluster and project ArgoCD instances, but I'm unable to manually trigger Sync. When attempting to manually trigger Sync, I receive the following error message:
Unable to deploy revision: permission denied: applications, sync, default/edge-anomaly-detection-hub, sub: CiRjMWFiNGZiNi1kMjkxLTQzNDgtODljNy1mYmI2Y2ViYjUxNWMSCW9wZW5zaGlmdA, iat: 2023-11-08T16:36:55Z
I'm logged in as a user with the cluster-admin role, but it seems the default RBAC configuration of ArgoCD grants full access only to kubeadmin. Deploying the pattern as kubeadmin is not always feasible for regular pattern users, so I propose extending the ArgoCD RBAC rules to grant edit permissions to any user associated with the cluster-admin role, as sketched below.
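A sketch of what this could look like on the ArgoCD custom resource, assuming group-based RBAC (the exact group name that maps to OpenShift's cluster-admin users may need adjusting):

apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example-gitops
spec:
  rbac:
    defaultPolicy: role:readonly
    scopes: "[groups]"
    policy: |
      # Grant the built-in admin role to members of the
      # cluster-admins group, in addition to kubeadmin.
      g, system:cluster-admins, role:admin
      g, cluster-admins, role:admin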
The current clustergroup schema does not allow the definition of extraParameters under the main section of a values file.
The user-defined variables in the extraParameters section would only be applied if the user deploys the pattern via the command line, using ./pattern.sh make install or ./pattern.sh make operator-deploy, and not via the OpenShift Validated Patterns Operator UI.
Add the extraParameters to the definition of Main.properties in the values.schema.json:
...
"Main": {
  "type": "object",
  "additionalProperties": false,
  "required": [
    "clusterGroupName"
  ],
  "title": "Main",
  "description": "This section contains the 'main' variables which are used by the install chart only and are passed to helm via the Makefile",
  "properties": {
    "clusterGroupName": {
      "type": "string"
    },
    "extraParameters": {
      "type": "array",
      "description": "Pass in extra Helm parameters to all ArgoCD Applications and the framework."
    },
...
This will allow users to define extra parameters that will be added by the framework to the ArgoCD applications it creates.
extraParameters:
  {{- range .Values.main.extraParameters }}
  - name: {{ .name | quote }}
    value: {{ .value | quote }}
  {{- end }} {{/* range .Values.main.extraParameters */}}
{{- end }} {{/* if .Values.main.extraParameters */}}
An example values file would then look like this:
main:
  clusterGroupName: datacenter
  multiSourceConfig:
    enabled: false
  experimentalCapabilities: initcontainers
  extraParameters:
  - name: clusterEnvironment
    value: prod
Output of the oc get pattern -n openshift-operators command:
apiVersion: v1
items:
- apiVersion: gitops.hybrid-cloud-patterns.io/v1alpha1
  kind: Pattern
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"gitops.hybrid-cloud-patterns.io/v1alpha1","kind":"Pattern","metadata":{"annotations":{},"name":"industrial-edge","namespace":"openshift-operators"},"spec":{"clusterGroupName":"datacenter","experimentalCapabilities":"initcontainers","extraParameters":[{"name":"environment","value":"dev"}],"gitSpec":{"targetRepo":"https://github.com/claudiol/industrial-edge.git","targetRevision":"deploy-industrial"},"multiSourceConfig":{"enabled":false}}}
    creationTimestamp: "2024-05-15T16:23:18Z"
    finalizers:
    - foregroundDeletePattern
    generation: 2
    name: industrial-edge
    namespace: openshift-operators
    resourceVersion: "565790"
    uid: 233d6200-7de2-4221-94e3-9c36351db8cc
  spec:
    clusterGroupName: datacenter
    experimentalCapabilities: initcontainers
    extraParameters:
    - name: clusterEnvironment
      value: dev
...
{{- range $k, $v := $.Values.extraParametersNested }}
- name: {{ $k }}
  value: {{ printf "%s" $v | quote }}
{{- end }}
In a disconnected environment, customers will have Git repos that house their source code.
To deploy our validated patterns, the private repositories will have to be configured in ArgoCD with a customer-provided certificate for the Git repository.
This will have to be configured by the patterns operator at creation of the ArgoCD instance for the pattern.
If application manifests are located in a private repository, then repository credentials have to be configured.
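For reference, a sketch of the two pieces ArgoCD conventionally uses here (the repository URL, credentials, and namespace below are hypothetical): a repository credential Secret, plus the Git server's CA in the argocd-tls-certs-cm ConfigMap:

apiVersion: v1
kind: Secret
metadata:
  name: private-repo
  namespace: example-gitops-ns
  labels:
    # ArgoCD picks up Secrets carrying this label as repository config.
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://git.example.com/org/pattern.git
  username: git-user
  password: git-token
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-tls-certs-cm
  namespace: example-gitops-ns
data:
  # Key is the repository hostname; value is its PEM CA chain.
  git.example.com: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----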
Proposal:
Currently the framework supports provisioning clusters using ManagedClusterSet, ClusterPool, and ClusterClaim.
The functionality is supported via the managedClusterGroups section described in the values-hub.yaml files. Below is an example of how to describe a ClusterPool associated with a label.
# This section is used by ACM
managedClusterGroups:
- name: resilient
  helmOverrides:
  - name: clusterGroup.isHubCluster
    value: "false"
  clusterSelector:
    matchLabels:
      clusterGroup: resilient
    matchExpressions:
    - key: vendor
      operator: In
      values:
      - OpenShift
  clusterPools:
    # example of pool for primary spokes
    aws-ca-central-1:
      name: aws-region-ca
      openshiftVersion: 4.12.49
      baseDomain: blueprints.rhecoeng.com
      platform:
        aws:
          region: us-west-2
      clusters:
      - customspoke1
      controlPlane:
        platform:
          aws:
            type: m5.2xlarge
            zones:
            - us-west-2a
      workers:
        replicas: 3
        platform:
          aws:
            type: m5.xlarge
            zones:
            - us-west-2a
    # example of pool for secondary spokes
We would like to extend the provisioning functionality in the VP framework to support provisioning clusters using ManagedClusterSet, ClusterDeployment, and the Submariner add-on. Similar to how we describe ClusterPools, we would like to add a way to describe ClusterDeployments. Below is an initial example of what it could look like.
# This section is used by ACM
managedClusterGroups:
- name: resilient
  helmOverrides:
  - name: clusterGroup.isHubCluster
    value: "false"
  clusterSelector:
    matchLabels:
      clusterGroup: resilient
    matchExpressions:
    - key: vendor
      operator: In
      values:
      - OpenShift
  clusterDeployments:
    # example of pool for primary spokes
    ocp-primary-1:
      name: ocp-primary
      version: 4.14.15
      install_config:
        apiVersion: v1
        metadata:
          name: ocp-primary
        baseDomain: blueprints.rhecoeng.com
        controlPlane:
          name: master
          replicas: 3
          platform:
            aws:
              type: m5.2xlarge
              zones:
              - us-east-1a
        compute:
        - name: worker
          replicas: 5
          platform:
            aws:
              type: m5.2xlarge
              zones:
              - us-east-1a
        networking:
          clusterNetwork:
          - cidr: 10.128.0.0/14
            hostPrefix: 23
          machineNetwork:
          - cidr: 10.0.0.0/16
          networkType: OpenShiftSDN
          serviceNetwork:
          - 172.30.0.0/16
        platform:
          aws:
            region: us-east-1
            userTags:
              project: ValidatedPatterns
        publish: External
        sshKey: ""
        pullSecret: ""
    # example of pool for secondary spokes
With ClusterDeployment, the user can control the names of the clusters, which is a requirement in certain environments. ClusterPool cluster names are auto-generated by ACM, so there is no control over the cluster name.
Scenario:
Our company has a base set of namespaces/projects/subscriptions and applications that we want deployed in all clusters (hub and spokes).
We attempted to implement this with a values-common.yaml in the overrides folder.
When we include this common file in values-hub.yaml or values-spoke.yaml as an entry in sharedValueFiles:, it does not work. It appears that Helm simply overwrites the hub/spoke namespaces/projects/subscriptions/applications with the contents of common.yaml: Helm merges maps, but replaces lists wholesale rather than concatenating them.
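A minimal illustration of the Helm behavior at play, with hypothetical file contents: when two value files both define a list under the same key, the list from the file applied later replaces the earlier one entirely:

# values-hub.yaml
clusterGroup:
  namespaces:
  - hub-ns
# values-common.yaml, included via sharedValueFiles (applied later)
clusterGroup:
  namespaces:
  - common-ns
# effective merged values: the later list wins wholesale
clusterGroup:
  namespaces:
  - common-ns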
As a user of validated patterns I would like to be able to use non-'route-53'-based challenges.
The letsencrypt Helm chart provides value in terms of dealing with the guts of integration with OCP and ensuring route certificates are updated. However, if I am not using Route 53 but rather one of the other supported DNS01 challenge providers for cert-manager (https://cert-manager.io/docs/configuration/acme/dns01/), I will effectively have to fork the chart.
Therefore I would suggest a "BYO issuer" option where the end user can configure their own DNS01 challenge provider, as sketched below.
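For illustration, a user-supplied issuer could look like the following, here using cert-manager's Cloudflare DNS01 solver (the issuer name, email, and secret references are hypothetical):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-byo
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-byo-account-key
    solvers:
    # Any supported DNS01 provider could be plugged in here
    # instead of route53.
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token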
The current clustergroup schema does not allow the definition of extraParameters under the main section of a values file.
The user-defined variables in the extraParameters section would only be applied if the user deploys the pattern via the command line, using ./pattern.sh make install or ./pattern.sh make operator-deploy, and not via the OpenShift Validated Patterns Operator UI.
Add the defined extraParameters to the Hub and Spoke cluster ArgoCD Applications.
For more information please refer to #510
Under the current module, when the files: key is defined in values_secret.yaml but has no entries, it throws the error that NoneType is not iterable when validating the file paths.
This is potentially surprising to the user; maybe there should be a validation that files: points to a dict type?
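This comes from YAML itself: a key with no value parses as null rather than as an empty mapping, so iterating over it fails. A minimal illustration (the exact values_secret.yaml schema may differ):

# Parses as files: null -> "NoneType is not iterable"
files:

# Parses as an empty mapping, which iterates cleanly
files: {}

# Normal usage: a mapping of name -> path
files:
  ca-cert: ~/certs/ca.crt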
We have been adding the External Secrets Operator (Community Maintained) to our values file as a required subscription.
In reviewing the Validated Patterns common folder today, I found a folder named golang-external-secrets that upon closer inspection contains the same operator we need (but in a localized format).
Because our company does not allow direct access to public repos like ghcr.io, where that operator is hosted, I would like to make cluster-wide use of your localized instance.
My questions:
Thanks,
Wade