openshift / cluster-kube-apiserver-operator
The kube-apiserver operator installs and maintains the kube-apiserver on a cluster
License: Apache License 2.0
Current error message: x509: certificate signed by unknown authority
It does not include which host or IP was involved. If these details were included, it would be much easier to narrow the issue down to a particular webhook.
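Without the host or IP in the error, narrowing it down today means enumerating the admission webhook configurations and the services behind them by hand; a rough sketch (the column expressions are just one way to pull that out):
# List admission webhooks and the services backing them.
oc get validatingwebhookconfigurations,mutatingwebhookconfigurations \
  -o custom-columns='NAME:.metadata.name,SERVICE:.webhooks[*].clientConfig.service.name,NAMESPACE:.webhooks[*].clientConfig.service.namespace'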
The following branches are being fast-forwarded from the current development branch (master) as placeholders for future releases. No merging is allowed into these release branches until they are unfrozen for production release.
release-4.17
release-4.18
For more information, see the branching documentation.
Running cluster-kube-apiserver-operator regenerate-certificates
on a failed master causes the /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-5/secrets/csr-signer/tls.crt
file to contain two certificates. The controller-manager CSR signer then fails to sign certificates with this error:
E0509 20:04:43.804526 1 controllermanager.go:541] Error starting "csrsigning"
F0509 20:04:43.804566 1 controllermanager.go:240] error starting controllers: failed to start certificate controller: error parsing CA cert file "/etc/kubernetes/static-pod-resources/secrets/csr-signer/tls.crt": {"code":1003,"message":"the PEM file should contain only one object"}
Manually removing the offending certificates from tls.crt and rebooting the node fixes the issue, allowing the controller-manager to sign certificates again.
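A minimal sketch of that manual cleanup, assuming the first certificate in the bundle is the one to keep (verify the subject and dates before overwriting):
# Back up the bundle, keep only the first PEM certificate, and sanity-check the result.
CRT=/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-5/secrets/csr-signer/tls.crt
cp "$CRT" "$CRT.bak"
awk '/BEGIN CERTIFICATE/{n++} n==1' "$CRT.bak" > "$CRT"
openssl x509 -in "$CRT" -noout -subject -dates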
/cc @tnozicka
Hi,
May I ask if there is a way to enable Kubernetes FeatureGates in a managed manner?
Thanks.
Alessandro
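For what it's worth, the managed knob in OCP 4 is the cluster-scoped FeatureGate config resource; a sketch (note that predefined feature sets such as TechPreviewNoUpgrade cannot be undone and are not intended for production clusters):
# Inspect the current feature set, then opt the cluster into a predefined set.
oc get featuregate cluster -o yaml
oc patch featuregate cluster --type merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'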
kube-apiserver seems to be restarting on the bootstrap node, causing the installer to fail.
one example is openshift/installer#964
another https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/251/pull-ci-openshift-machine-config-operator-master-e2e-aws/770/
This causes the master nodes to never come up.
/cc @deads2k
^^ not sure who else to cc
It looks like it's coming from the pod network, but it's hard to say for sure.
I1116 23:07:53.364992 1 logs.go:49] http: TLS handshake error from 10.0.11.206:22630: EOF
I1116 23:07:53.468272 1 logs.go:49] http: TLS handshake error from 10.0.22.71:5446: EOF
I1116 23:07:53.485773 1 logs.go:49] http: TLS handshake error from 10.0.81.142:42664: EOF
I1116 23:07:53.571327 1 logs.go:49] http: TLS handshake error from 10.0.56.194:57970: EOF
I1116 23:07:53.601100 1 logs.go:49] http: TLS handshake error from 10.0.11.206:7721: EOF
I1116 23:07:53.947873 1 logs.go:49] http: TLS handshake error from 10.0.67.121:4099: EOF
I1116 23:07:54.203697 1 logs.go:49] http: TLS handshake error from 10.0.74.56:45909: EOF
I1116 23:07:54.309702 1 logs.go:49] http: TLS handshake error from 10.0.88.7:11357: EOF
Possibly a misconfigured infrastructure component
[deads@deads-02 cluster-openshift-apiserver-operator]$ make build-images
hack/build-images.sh
[openshift/origin-cluster-openshift-apiserver-operator] --> FROM openshift/origin-release:golang-1.10 as 0
[openshift/origin-cluster-openshift-apiserver-operator] --> COPY . /go/src/github.com/openshift/cluster-openshift-apiserver-operator
[openshift/origin-cluster-openshift-apiserver-operator] --> RUN cd /go/src/github.com/openshift/cluster-openshift-apiserver-operator && go build ./cmd/cluster-openshift-apiserver-operator
[openshift/origin-cluster-openshift-apiserver-operator] --> FROM centos:7 as 1
[openshift/origin-cluster-openshift-apiserver-operator] --> COPY --from=0 /go/src/github.com/openshift/cluster-openshift-apiserver-operator/cluster-openshift-apiserver-operator /usr/bin/cluster-openshift-apiserver-operator
[openshift/origin-cluster-openshift-apiserver-operator] --> Committing changes to openshift/origin-cluster-openshift-apiserver-operator:d78ed24 ...
[openshift/origin-cluster-openshift-apiserver-operator] --> Tagged as openshift/origin-cluster-openshift-apiserver-operator:latest
[openshift/origin-cluster-openshift-apiserver-operator] --> Done
[openshift/origin-cluster-openshift-apiserver-operator] Removing .idea/
[openshift/origin-cluster-openshift-apiserver-operator] Removing _output/
I have installed the OCP 4.12 release, and found that the CKA-o, CKCM-o, and KKS-o also hit the "revision too large" problem; the cluster has been stuck for a long time without completing.
The CKA-o logs show the following:
I0530 11:20:47.319151 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 129 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:21:30.867123 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 130 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:22:00.823064 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 131 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:22:40.096664 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 132 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:23:10.109621 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 132 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:23:16.670830 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 133 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:23:49.060741 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 134 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:24:27.827461 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 135 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:25:01.859358 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 136 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:25:36.061161 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 136 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:25:44.753436 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 137 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:26:30.119970 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 138 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:27:01.026656 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 138 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:28:33.450015 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 139 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:29:00.979907 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 139 triggered by "secret/localhost-recovery-client-token has changed"
I0530 11:29:15.181776 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"ad61e270-b42f-4eca-ae12-b723dae6be1c", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 140 triggered by "secret/localhost-recovery-client-token has changed"
openshift-kube-apiserver localhost-recovery-client-token kubernetes.io/service-account-token 4 145m
openshift-kube-apiserver localhost-recovery-client-token-123 Opaque 4 65m
openshift-kube-apiserver localhost-recovery-client-token-124 Opaque 4 64m
openshift-kube-apiserver localhost-recovery-client-token-125 Opaque 4 64m
openshift-kube-apiserver localhost-recovery-client-token-126 Opaque 4 63m
openshift-kube-apiserver localhost-recovery-client-token-127 Opaque 4 62m
openshift-kube-apiserver localhost-recovery-client-token-136 Opaque 4 51m
openshift-kube-apiserver localhost-recovery-client-token-137 Opaque 4 50m
openshift-kube-apiserver localhost-recovery-client-token-138 Opaque 4 50m
openshift-kube-apiserver localhost-recovery-client-token-139 Opaque 4 47m
openshift-kube-apiserver localhost-recovery-client-token-140 Opaque 4 47m
11:59:26.753590 1 revision_controller.go:178] Secret "localhost-recovery-client-token" changes for revision 244: {"data":{"ca.crt":"TU9ESUZJRUQ="},"metadata":{"annotations":{"kubernetes.io/service-account.name":"localhost-recovery-client","kubernetes.io/service-account.uid":"267c2116-4463-4566-b824-14dde1ea79f6"},"creationTimestamp":"2023-06-07T09:28:15Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/service-account.name":{}}},"f:type":{}},"manager":"cluster-kube-apiserver-operator","operation":"Update","time":"2023-06-07T09:28:15Z"},{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:ca.crt":{},"f:namespace":{},"f:service-ca.crt":{},"f:token":{}},"f:metadata":{"f:annotations":{"f:kubernetes.io/service-account.uid":{}}}},"manager":"kube-controller-manager","operation":"Update","time":"2023-06-07T11:59:26Z"}],"name":"localhost-recovery-client-token","ownerReferences":null,"resourceVersion":"4280369","uid":"12bebcd8-36cc-4e58-a131-9d8b4c5d11dd"},"type":"kubernetes.io/service-account-token"}
11:59:26.754532 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-apiserver-operator", Name:"kube-apiserver-operator", UID:"27b80946-c3ed-4ab9-b2f4-154a9a74e9ae", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RevisionTriggered' new revision 245 triggered by "secret/localhost-recovery-client-token has changed"
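A hedged way to see which field keeps flapping is to diff the source secret against its most recent revisioned copy (revision 140 is taken from the listing above):
# Compare the ca.crt payload of the live secret with the latest revisioned copy.
diff \
  <(oc -n openshift-kube-apiserver get secret localhost-recovery-client-token -o jsonpath='{.data.ca\.crt}') \
  <(oc -n openshift-kube-apiserver get secret localhost-recovery-client-token-140 -o jsonpath='{.data.ca\.crt}')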
Currently, all default SCCs (except privileged) block users from setting seccomp to runtime/default. The current behaviour seems to be a disservice, as it blocks workloads from using more restrictive security controls, which may lead to folks simply setting a workload's SCC to privileged in order to "get it to work".
This is becoming a larger problem as folks around the OSS community and the private sector start shipping workloads with seccomp set to runtime/default, which has been the setting recommended by the CIS Benchmark for a few years now. They are now facing a few options, one of which is falling back to privileged.
The suggested change is to allow all default SCCs to support:
unconfined (the current Kubernetes default, kept for backwards compatibility)
runtime/default (the future Kubernetes default and a safer position)
I am not entirely sure of the longevity and future plans of SCC; however, I think making this change would be worthwhile.
Looking forward to hearing some thoughts and understanding how receptive the maintainers would be to the above.
cc: @JAORMX @jhrozek @saschagrunert
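For reference, this is the shape of workload spec the issue is about: one that opts into the runtime's default seccomp profile. A minimal sketch (pod name and image are placeholders):
# A pod requesting the RuntimeDefault seccomp profile (placeholder name/image).
oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "3600"]
EOF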
Upstream context:
The profile was originally named docker/default and was later renamed to runtime/default.
1.19: seccomp went GA, with profile unconfined by default.
1.22: the SeccompDefault feature gate was created, enabling users to switch from unconfined to runtime/default across the entire cluster.
1.25 (planned): the SeccompDefault feature gate is enabled by default, meaning that all workloads will have the seccomp profile runtime/default unless otherwise set on a per-workload (pod or container) basis.
Remove manifests that have been added to the installer:
etcd-serving-ca.kube-system -- openshift/installer#551
etcd-client.kube-system -- openshift/installer#581
Currently, we either log warnings or simply provide no feedback when there is an issue observing the configuration. Change this to report status instead.
1. The kube-apiserver pod sets 'privileged: true' (cluster-kube-apiserver-operator/bindata/assets/kube-apiserver/pod.yaml, lines 148 to 149 in 31380bd).
2. But even without 'privileged: true', the kube-apiserver can still write audit logs to /var/log/kube-apiserver.
3. When using standard container runtimes (for example containerd or CRI-O), access to a privileged container allows for easy breakout to the underlying host, which in turn allows access to all other workloads on that host and to the credentials of the node agent (kubelet).
Maybe we should remove the "privileged: true".
We saw this on a cluster built with CI images, which disappear after some hours. openshift-apiserver is dysfunctional with E0126 15:03:51.543533 1 authentication.go:62] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid.
$ kubectl get -n openshift-kube-apiserver pods
NAME READY STATUS RESTARTS AGE
installer-1-ip-10-0-28-37.ec2.internal 0/1 Completed 0 25h
installer-1-ip-10-0-34-137.ec2.internal 0/1 Completed 0 25h
installer-2-ip-10-0-28-37.ec2.internal 0/1 Completed 0 25h
installer-2-ip-10-0-34-137.ec2.internal 0/1 Completed 0 25h
installer-2-ip-10-0-7-66.ec2.internal 0/1 Completed 0 25h
installer-3-ip-10-0-28-37.ec2.internal 0/1 Completed 0 25h
installer-3-ip-10-0-34-137.ec2.internal 0/1 Completed 0 25h
installer-4-ip-10-0-28-37.ec2.internal 0/1 Completed 0 25h
installer-4-ip-10-0-34-137.ec2.internal 0/1 Completed 0 25h
installer-4-ip-10-0-7-66.ec2.internal 0/1 Completed 0 25h
installer-5-ip-10-0-28-37.ec2.internal 0/1 Completed 0 23h
installer-5-ip-10-0-34-137.ec2.internal 0/1 Completed 0 23h
installer-5-ip-10-0-7-66.ec2.internal 0/1 Completed 0 23h
installer-6-ip-10-0-34-137.ec2.internal 0/1 ImagePullBackOff 0 21h
openshift-kube-apiserver-ip-10-0-28-37.ec2.internal 1/1 Running 0 23h
openshift-kube-apiserver-ip-10-0-34-137.ec2.internal 1/1 Running 0 23h
openshift-kube-apiserver-ip-10-0-7-66.ec2.internal 1/1 Running 0 23h
revision-pruner-0-ip-10-0-28-37.ec2.internal 0/1 Completed 0 25h
revision-pruner-0-ip-10-0-34-137.ec2.internal 0/1 Completed 0 25h
revision-pruner-0-ip-10-0-7-66.ec2.internal 0/1 Completed 0 25h
revision-pruner-3-ip-10-0-28-37.ec2.internal 0/1 Completed 0 25h
revision-pruner-3-ip-10-0-34-137.ec2.internal 0/1 Completed 0 25h
revision-pruner-4-ip-10-0-28-37.ec2.internal 0/1 Completed 0 25h
revision-pruner-4-ip-10-0-34-137.ec2.internal 0/1 Completed 0 25h
revision-pruner-4-ip-10-0-7-66.ec2.internal 0/1 Completed 0 25h
revision-pruner-5-ip-10-0-28-37.ec2.internal 0/1 Completed 0 23h
revision-pruner-5-ip-10-0-34-137.ec2.internal 0/1 Completed 0 23h
revision-pruner-5-ip-10-0-7-66.ec2.internal 0/1 Completed 0 23h
revision-pruner-6-ip-10-0-34-137.ec2.internal 0/1 ImagePullBackOff 0 21h
The installer now creates a Cluster.cluster.k8s.io/v1alpha1 object, which is the correct way for operators to determine information about IP space. Instead of parsing the old Tectonic installer configuration, the operator should determine its list of restricted CIDRs from this object.
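For what it's worth, on current clusters the same IP-space information is exposed on the cluster-scoped Network config resource rather than the v1alpha1 Cluster object; a read-only sketch, offered as the modern equivalent:
# Read the cluster and service CIDRs from the Network config object.
oc get network.config.openshift.io cluster -o jsonpath='{.spec.clusterNetwork}{"\n"}{.spec.serviceNetwork}{"\n"}'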
servicesSubnet is set to 10.3.0.0 in the master's kube-apiserver config, while serviceCIDR is set to 172.30.0.0/16, so most namespaces get a "Cluster IP 172.30.220.6 is not within the service CIDR 10.3.0.0/16; please recreate service" event.
I am receiving the following error on the master node (libvirt install, installer master:87ede7c78af) with the openshift-apiserver:
2019-02-20T16:46:10.271098459Z AUDIT: id="ac456318-4539-4261-90b3-4bc5d2fa938d" stage="ResponseComplete" ip="10.128.0.39" method="get" user="system:serviceaccount:openshift-cluster-samples-operator:cluster-samples-operator" groups="\"system:serviceaccounts\",\"system:serviceaccounts:openshift-cluster-samples-operator\",\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="openshift" uri="/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/fis-karaf-openshift" response="200"
E0220 16:46:10.495607 1 metrics.go:86] Error in audit plugin 'log' affecting 1 audit events: can't open new logfile: open /var/log/openshift-apiserver/audit.log: permission denied
Impacted events:
2019-02-20T16:46:10.474500316Z AUDIT: id="83d3addd-edea-419e-a1d2-c4e2ee8ee5e1" stage="ResponseComplete" ip="10.128.0.39" method="get" user="system:serviceaccount:openshift-cluster-samples-operator:cluster-samples-operator" groups="\"system:serviceaccounts\",\"system:serviceaccounts:openshift-cluster-samples-operator\",\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="openshift" uri="/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/rhdm72-decisioncentral-indexing-openshift" response="200"
E0220 16:46:10.672637 1 metrics.go:86] Error in audit plugin 'log' affecting 1 audit events: can't open new logfile: open /var/log/openshift-apiserver/audit.log: permission denied
Additionally, journalctl contains numerous SELinux errors:
Feb 20 16:52:49 test1-master-0 kernel: type=1400 audit(1550681568.989:17345): avc: denied { write } for pid=30102 comm="hypershift" name="openshift-apiserver" dev="vda2" ino=31506996 scontext=system_u:system_r:container_t:s0:c898,c993 tcontext=system_u:object_r:container_log_t:s0 tclass=dir permissive=0
Feb 20 16:52:50 test1-master-0 kernel: type=1400 audit(1550681570.030:17346): avc: denied { write } for pid=30102 comm="hypershift" name="openshift-apiserver" dev="vda2" ino=31506996 scontext=system_u:system_r:container_t:s0:c898,c993 tcontext=system_u:object_r:container_log_t:s0 tclass=dir permissive=0
Feb 20 16:52:50 test1-master-0 kernel: type=1400 audit(1550681570.049:17347): avc: denied { write } for pid=30102 comm="hypershift" name="openshift-apiserver" dev="vda2" ino=31506996 scontext=system_u:system_r:container_t:s0:c898,c993 tcontext=system_u:object_r:container_log_t:s0 tclass=dir permissive=0
Hi,
I'm running 4.5.0-0.okd-2020-09-04-180756. During September, the cluster was turned off. After turning it back on, some certificates had expired; they were good until Sep 29th. For example, the kube-apiserver containers on the masters are full of:
I1020 11:59:06.252656 1 controller.go:127] OpenAPI AggregationController: action for item v1.quota.openshift.io: Rate Limited Requeue.
E1020 11:59:06.338161 1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:07.540815 1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:14.837577 1 reflector.go:178] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to list *v1.Group: the server is currently unable to handle the request (get groups.user.openshift.io)
E1020 11:59:18.497351 1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:22.366040 1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:28.014356 1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:30.063245 1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:30.611518 1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E1020 11:59:32.988534 1 authentication.go:53] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
Together with @vrutkovs we did some debugging, and cert-regeneration-controller
seems to be stuck in some loop. I'm attaching logs from its container.
At the moment, oc get pods --all-namespaces shows only 12 pods in the Running state: a set of etcd, kube-apiserver, kube-controller-manager, and openshift-kube-scheduler for each of the 3 masters. All other pods are in the Pending state.
kube-apiserver-cert-regeneration-controller-logs.txt
@tnozicka , could you take a look?
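For anyone hitting the same thing, a rough way to see which of the managed TLS secrets have actually expired (assuming oc still authenticates, as it did here):
# Print the expiry date of every TLS secret in the kube-apiserver namespace.
for s in $(oc -n openshift-kube-apiserver get secrets --field-selector type=kubernetes.io/tls -o name); do
  echo -n "$s: "
  oc -n openshift-kube-apiserver get "$s" -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate
done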
Find out whether we have access to the cluster config during the bootstrapping phase executed at https://github.com/openshift/installer/blob/master/pkg/asset/ignition/bootstrap/content/bootkube.go#L31 .
It's a text/template, so maybe we have the cluster config around in one form or another to be plugged in, or even as a file on the bootstrap node.
Originally posted by @sttts in #74 (comment)
Hi,
This is more of a query and not an issue. I am not sure if this is the right forum.
Here is the query :
We are building some security policies in our application that validate whether the OpenShift configuration in use adheres to the security practices listed in the "CIS Red Hat OpenShift Container Platform Benchmark".
One of the policies validates that the --token-auth-file parameter is not set. As per the CIS benchmark document, here are the steps to validate that --token-auth-file is not set:
Step 1: oc get configmap config -n openshift-kube-apiserver -ojson | jq -r '.data["config.yaml"]' | jq '.apiServerArguments'
Step 2: oc get configmap config -n openshift-apiserver -ojson | jq -r '.data["config.yaml"]' | jq '.apiServerArguments'
Step 3: oc get kubeapiservers.operator.openshift.io cluster -o json | jq '.spec.observedConfig.apiServerArguments'
Steps 1 and 2 seem pretty straightforward. I have a query regarding how to validate step 3 from our policy. Note that, for our application, we only intend to validate the static configuration/deployment files of OpenShift/Kubernetes; the policies do not directly query the running OpenShift cluster.
Query 1
What does step 3 verify? Does it do a lookup of '.spec.observedConfig.apiServerArguments' in the configuration/deployment file of a running cluster, or are we doing a lookup on the response of an API call against the running cluster?
Given our use case, can we just do a lookup on the /spec/observedConfig/apiServerArguments field in the kubeapiserver CRD?
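For an offline/static check, a lookup on that path in an exported copy of the resource would look roughly like this (the file name is just an example):
# Exported earlier with: oc get kubeapiservers.operator.openshift.io cluster -o json > kubeapiserver.json
jq -e '.spec.observedConfig.apiServerArguments["token-auth-file"]' kubeapiserver.json >/dev/null \
  && echo "token-auth-file is set (benchmark finding)" \
  || echo "token-auth-file is not set"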
Query 2
How do we check whether the kubeapiserver operator is a cluster operator from the configuration/deployment file of the kubeapiserver operator? Or is it safe to assume that the kubeapiserver resource is always considered a cluster-level operator?
The outage calculation during upgrade looks incorrect. See:
status:
conditions:
- lastTransitionTime: "2020-07-08T15:09:11Z"
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnectSuccess
status: "True"
type: Reachable
failures:
- latency: 10.005827597s
message: 'openshift-apiserver-service-172-30-151-149-443: failed to establish
a TCP connection to 172.30.151.149:443: dial tcp 172.30.151.149:443: i/o timeout'
reason: TCPConnectError
success: false
time: "2020-07-08T15:41:50Z"
- latency: 10.001308826s
message: 'openshift-apiserver-service-172-30-151-149-443: failed to establish
a TCP connection to 172.30.151.149:443: dial tcp 172.30.151.149:443: i/o timeout'
reason: TCPConnectError
success: false
time: "2020-07-08T15:41:36Z"
- latency: 10.002833732s
message: 'openshift-apiserver-service-172-30-151-149-443: failed to establish
a TCP connection to 172.30.151.149:443: dial tcp 172.30.151.149:443: i/o timeout'
reason: TCPConnectError
success: false
time: "2020-07-08T15:40:20Z"
- latency: 10.000673641s
message: 'openshift-apiserver-service-172-30-151-149-443: failed to establish
a TCP connection to 172.30.151.149:443: dial tcp 172.30.151.149:443: i/o timeout'
reason: TCPConnectError
success: false
time: "2020-07-08T15:40:17Z"
successes:
- latency: 1.281624ms
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:16Z"
- latency: 1.549382ms
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:15Z"
- latency: 1.819134ms
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:14Z"
- latency: 192.237µs
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:13Z"
- latency: 676.948µs
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:12Z"
- latency: 256.643µs
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:11Z"
- latency: 1.815262ms
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:10Z"
- latency: 435.609µs
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:09Z"
- latency: 1.409842ms
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:08Z"
- latency: 2.111243ms
message: 'openshift-apiserver-service-172-30-151-149-443: tcp connection to 172.30.151.149:443
succeeded'
reason: TCPConnect
success: true
time: "2020-07-08T15:54:07Z"
Originally posted by @deads2k in #893 (comment)
In OCP 4.2 the kube-apiserver pods are running with
hostNetwork: true
dnsPolicy: ClusterFirst
As a result, (I assume) the kube-apiserver pods are unable to resolve internal cluster DNS names, i.e. K8s services (e.g. web-app.my-project.svc.cluster.local).
I think the valid dnsPolicy when hostNetwork: true is used should be dnsPolicy: ClusterFirstWithHostNet, which should allow resolving internal cluster names first, and external names afterwards.
It's probably not an OpenShift bug, but more likely a cluster-kube-apiserver-operator bug, since that operator (I assume :) ) is the one deploying the kube-apiserver pods.
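A quick read-only way to confirm what the operator currently renders (the static pod manifest is operator-managed, so editing it by hand is not a fix):
# Show hostNetwork and dnsPolicy for each pod in the kube-apiserver namespace.
oc -n openshift-kube-apiserver get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.hostNetwork}{"\t"}{.spec.dnsPolicy}{"\n"}{end}'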
Since openshift/installer@57127ec (coreos/tectonic-installer#3270), the installer sets a kube-apiserver secret in the kube-system namespace with both service-account.key and service-account.pub. We also do that for the openshift-apiserver secret. But the public key can be computed from the private key, so it would be nice to not have to set both in the same secret. It looks like the kube-controller-manager operator uses the private key (although I haven't found code where it pulls it from the installer-generated secret); can we update this operator to use the private key too? Or should the installer be setting separate secrets for each operator? Or something else (again, the connections are not very clear to me ;)?
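For illustration, the "public key can be computed from the private key" part is a one-liner, assuming an RSA key as generated here:
# Derive service-account.pub from service-account.key.
openssl rsa -in service-account.key -pubout -out service-account.pub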
While working on events, I noticed that this operator stomps on the cluster role binding several times per second. It seems like the cluster version does not have the APIGroup set?
I1123 14:17:18.786035 1 rbac.go:67] cluster role binding changed: %!(EXTRA string={"metadata":{"name":"system:openshift:operator:openshift-kube-apiserver-installer","selfLink":"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system%3Aopenshift%3Aoperator%3Aopenshift-kube-apiserver-installer","uid":"7b95c8e9-ef2a-11e8-96b5-126065f7d37a","resourceVersion":"1690","creationTimestamp":"2018-11-23T14:17:17Z"},"subjects":[{"kind":"ServiceAccount","name":"installer-sa","namespace":"openshift-kube-apiserver"}],"roleRef":{"apiGroup":"
A: rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"}}
B: ","kind":"ClusterRole","name":"cluster-admin"}}
)
/cc @deads2k
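One way to narrow this down is to check what the persisted binding actually carries in roleRef.apiGroup, since the two versions in the diff above disagree on exactly that field; a read-only sketch:
# Inspect the roleRef stored on the live cluster role binding.
oc get clusterrolebinding system:openshift:operator:openshift-kube-apiserver-installer -o jsonpath='{.roleRef}{"\n"}'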
openshift-kube-apiserver installer-24-retry-1-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-10-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-100-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-101-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-102-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-103-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-104-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-105-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-106-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-107-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-108-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-109-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-11-master2 0/1 Error 0 2d10h
openshift-kube-apiserver installer-24-retry-110-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-111-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-112-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-113-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-114-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-115-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-116-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-117-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-118-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-119-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-12-master2 0/1 Error 0 2d10h
openshift-kube-apiserver installer-24-retry-120-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-121-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-122-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-123-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-124-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-125-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-126-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-127-master2 0/1 Error 0 2d
openshift-kube-apiserver installer-24-retry-128-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-129-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-13-master2 0/1 Error 0 2d10h
openshift-kube-apiserver installer-24-retry-130-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-131-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-132-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-133-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-134-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-135-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-136-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-137-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-138-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-139-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-14-master2 0/1 Error 0 2d10h
openshift-kube-apiserver installer-24-retry-140-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-141-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-142-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-143-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-144-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-145-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-146-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-147-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-148-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-149-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-15-master2 0/1 Error 0 2d10h
openshift-kube-apiserver installer-24-retry-150-master2 0/1 Error 0 47h
openshift-kube-apiserver installer-24-retry-151-master2 0/1 Completed 0 47h
openshift-kube-apiserver installer-24-retry-16-master2 0/1 Error 0 2d9h
openshift-kube-apiserver installer-24-retry-17-master2 0/1 Error 0 2d9h
openshift-kube-apiserver installer-24-retry-18-master2 0/1 Error 0 2d9h
openshift-kube-apiserver installer-24-retry-19-master2 0/1 Error 0 2d9h
openshift-kube-apiserver installer-24-retry-2-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-20-master2 0/1 Error 0 2d9h
openshift-kube-apiserver installer-24-retry-21-master2 0/1 Error 0 2d8h
openshift-kube-apiserver installer-24-retry-22-master2 0/1 Error 0 2d8h
openshift-kube-apiserver installer-24-retry-23-master2 0/1 Error 0 2d8h
openshift-kube-apiserver installer-24-retry-24-master2 0/1 Error 0 2d8h
openshift-kube-apiserver installer-24-retry-25-master2 0/1 Error 0 2d8h
openshift-kube-apiserver installer-24-retry-26-master2 0/1 Error 0 2d7h
openshift-kube-apiserver installer-24-retry-27-master2 0/1 Error 0 2d7h
openshift-kube-apiserver installer-24-retry-28-master2 0/1 Error 0 2d7h
openshift-kube-apiserver installer-24-retry-29-master2 0/1 Error 0 2d7h
openshift-kube-apiserver installer-24-retry-3-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-30-master2 0/1 Error 0 2d7h
openshift-kube-apiserver installer-24-retry-31-master2 0/1 Error 0 2d6h
openshift-kube-apiserver installer-24-retry-32-master2 0/1 Error 0 2d6h
openshift-kube-apiserver installer-24-retry-33-master2 0/1 Error 0 2d6h
openshift-kube-apiserver installer-24-retry-34-master2 0/1 Error 0 2d6h
openshift-kube-apiserver installer-24-retry-35-master2 0/1 Error 0 2d6h
openshift-kube-apiserver installer-24-retry-36-master2 0/1 Error 0 2d5h
openshift-kube-apiserver installer-24-retry-37-master2 0/1 Error 0 2d5h
openshift-kube-apiserver installer-24-retry-38-master2 0/1 Error 0 2d5h
openshift-kube-apiserver installer-24-retry-39-master2 0/1 Error 0 2d5h
openshift-kube-apiserver installer-24-retry-4-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-40-master2 0/1 Error 0 2d5h
openshift-kube-apiserver installer-24-retry-41-master2 0/1 Error 0 2d4h
openshift-kube-apiserver installer-24-retry-42-master2 0/1 Error 0 2d4h
openshift-kube-apiserver installer-24-retry-43-master2 0/1 Error 0 2d4h
openshift-kube-apiserver installer-24-retry-44-master2 0/1 Error 0 2d4h
openshift-kube-apiserver installer-24-retry-45-master2 0/1 Error 0 2d3h
openshift-kube-apiserver installer-24-retry-46-master2 0/1 Error 0 2d3h
openshift-kube-apiserver installer-24-retry-47-master2 0/1 Error 0 2d3h
openshift-kube-apiserver installer-24-retry-48-master2 0/1 Error 0 2d3h
openshift-kube-apiserver installer-24-retry-49-master2 0/1 Error 0 2d3h
openshift-kube-apiserver installer-24-retry-5-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-50-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-51-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-52-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-53-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-54-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-55-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-56-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-57-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-58-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-59-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-6-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-60-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-61-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-62-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-63-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-64-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-65-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-66-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-67-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-68-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-69-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-7-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-70-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-71-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-72-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-73-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-74-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-75-master2 0/1 Error 0 2d2h
openshift-kube-apiserver installer-24-retry-76-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-77-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-78-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-79-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-8-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-80-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-81-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-82-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-83-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-84-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-85-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-86-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-87-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-88-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-89-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-9-master2 0/1 Error 0 2d11h
openshift-kube-apiserver installer-24-retry-90-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-91-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-92-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-93-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-94-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-95-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-96-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-97-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-98-master2 0/1 Error 0 2d1h
openshift-kube-apiserver installer-24-retry-99-master2 0/1 Error 0 2d1h
Hello,
Audit produces a lot of logs.
[root@control-plane-0 kube-apiserver]# ls -ltrh
total 664M
-rw-r--r--. 1 root root 100M 3 janv. 16:56 audit-2021-01-03T16-56-54.361.log
-rw-r--r--. 1 root root 100M 3 janv. 17:16 audit-2021-01-03T17-16-50.803.log
-rw-r--r--. 1 root root 100M 3 janv. 17:28 audit-2021-01-03T17-28-49.458.log
-rw-r--r--. 1 root root 100M 3 janv. 17:44 audit-2021-01-03T17-44-29.228.log
-rw-r--r--. 1 root root 100M 3 janv. 18:04 audit-2021-01-03T18-04-00.401.log
-rw-r--r--. 1 root root 100M 3 janv. 18:23 audit-2021-01-03T18-23-17.021.log
-rw-r--r--. 1 root root 43M 5 janv. 21:27 audit.log
More than 600M for only 1h30... it's quite huge.
Would it be possible to add a profile or another mechanism to disable the auditing feature?
I am running OpenShift version 4.5.0-0.okd-2020-10-15-235428
Keep looking for the profile feature available in 4.6.
Damien
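For later readers: from 4.6 on, the audit policy is selectable on the cluster-scoped APIServer config (Default, WriteRequestBodies, AllRequestBodies; a None profile that disables audit logging only arrived in later releases, so treat that value as version-dependent):
# Switch the cluster-wide audit profile; check which profile names your release supports.
oc patch apiserver cluster --type merge -p '{"spec":{"audit":{"profile":"Default"}}}'
oc get apiserver cluster -o jsonpath='{.spec.audit.profile}{"\n"}'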
These are configurable.
In the following commit, the minimum cert validity period was changed.
However, the aggregator-client-signer got the 30 / 15 days one. This is a signer cert, so I think it should have 60 / 30 days.
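A rough way to eyeball the validity windows the operator actually handed out (the annotation names come from how managed certs are labelled elsewhere on this page; the operator namespace is an assumption):
# List secrets in the operator namespace with their certificate validity annotations.
oc -n openshift-kube-apiserver-operator get secrets \
  -o custom-columns='NAME:.metadata.name,NOT-BEFORE:.metadata.annotations.auth\.openshift\.io/certificate-not-before,NOT-AFTER:.metadata.annotations.auth\.openshift\.io/certificate-not-after'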
Sep 20 15:35:41 master1 hyperkube[8624]: E0920 15:35:41.603133 8624 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_karmada-aggregated-apiserver-7f6c7d8dd5-gm7gb_karmada-system_ff9e40b2-245a-4b7c-95d4-917bf94cf3a6_0(b1b5310d3645d82e7c4f468cf48bd512a1f226c6de7d57ee312a8bc9fbec6c01): error adding pod karmada-system_karmada-aggregated-apiserver-7f6c7d8dd5-gm7gb to CNI network \"multus-cni-network\": plugin type=\"multus\" name=\"multus-cni-network\" failed (add): [karmada-system/karmada-aggregated-apiserver-7f6c7d8dd5-gm7gb/ff9e40b2-245a-4b7c-95d4-917bf94cf3a6:k8s-pod-network]: error adding container to network \"k8s-pod-network\": error getting ClusterInformation: Get \"https://21.101.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": x509: cannot validate certificate for 21.101.0.1 because it doesn't contain any IP SANs"
oc get network cluster -oyaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
creationTimestamp: "2023-09-20T07:10:47Z"
generation: 1
name: cluster
resourceVersion: "715"
uid: bb3082a0-cebb-4a64-b662-db7ce2e76837
spec:
apiAddress: 10.255.71.88
clusterNetwork:
- cidr: 21.100.0.0/16
hostPrefix: 24
externalIP:
policy: {}
serviceNetwork:
- 21.101.0.0/16
status: {}
oc get secret service-network-serving-certkey -n openshift-kube-apiserver -oyaml
apiVersion: v1
data:
tls.crt:
tls.key:
kind: Secret
metadata:
annotations:
auth.openshift.io/certificate-hostnames: openshift,openshift.default,openshift.default.svc,openshift.default.svc.cluster.local,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local
auth.openshift.io/certificate-issuer: kube-apiserver-service-network-signer
auth.openshift.io/certificate-not-after: "2023-10-20T04:31:27Z"
auth.openshift.io/certificate-not-before: "2023-09-20T04:31:26Z"
creationTimestamp: "2023-09-20T04:31:31Z"
labels:
auth.openshift.io/managed-certificate-type: target
name: service-network-serving-certkey
namespace: openshift-kube-apiserver
resourceVersion: "13561"
uid: 5d551145-1de8-4ff6-a94c-7b511f3ac2bf
I was trying to update the kubeapiserver CR to enable the KMS plugin described at https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#encrypting-your-data-with-the-kms-provider by using the command oc edit kubeapiservers.operator.openshift.io, but the problem is that the new parameter --encryption-provider-config needs to point to a file, so how can I make the parameter point to a new file? I am using OCP 4.1. Thanks.
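As far as I know, a KMS provider is not something this operator wires up, and --encryption-provider-config is not meant to be set by hand; what later 4.x releases expose instead is managed etcd encryption with a local provider on the APIServer config (a sketch of that supported path, not applicable to 4.1):
# Enable managed etcd encryption with the aescbc provider (later releases only).
oc patch apiserver cluster --type merge -p '{"spec":{"encryption":{"type":"aescbc"}}}'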
On GCP (4.0 on CentOS), the apiserver runs on 3 masters, but only one pod is responding. This breaks openshift-kube-apiserver, which uses the apiserver to communicate; most of the requests are failing.
$ oc get pods -n openshift-apiserver -o wide
NAME READY STATUS RESTARTS AGE IP NODE
apiserver-dz57l 1/1 Running 0 21m 10.131.0.44 vrutkovs-ig-m-4m83
apiserver-nrc9c 1/1 Running 1 3m 10.129.0.23 vrutkovs-ig-m-23z7
apiserver-zsl6n 1/1 Running 1 3m 10.130.0.23 vrutkovs-ig-m-42bb
# curl -kLvs https://10.129.0.23:8443/apis/apps.openshift.io/v1
<hangs>
# curl -kLvs https://10.130.0.23:8443/apis/apps.openshift.io/v1
<hangs>
# curl -kLvs https://10.131.0.44:8443/apis/apps.openshift.io/v1
<works immediately>
This works fine if I adjust ds/apiserver
to match only one known-to-work master node. All masters are created from the same instance group and get the same firewall rules applied.
This doesn't seem to affect AWS, but should happen on libvirt with 3 masters
As the flowcontrol.apiserver.k8s.io/v1beta1 FlowSchema is deprecated in v1.23+ and unavailable in v1.26+, and the flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+ and unavailable in v1.26+, we should remove:
https://github.com/openshift/cluster-kube-apiserver-operator/blob/master/bindata/assets/kube-apiserver/storage-version-migration-flowschema.yaml
and
https://github.com/openshift/cluster-kube-apiserver-operator/blob/master/bindata/assets/kube-apiserver/storage-version-migration-prioritylevelconfiguration.yaml
If the installer fails with ImagePullBackOff
status, no further redeployments are done.
Compare #223.
Hi,
I want to edit the event-ttl of the API server.
I found the official OCP 3.11 documentation:
https://docs.openshift.com/container-platform/3.11/install_config/master_node_configuration.html
apiServerArguments:
event-ttl:
- "15m"
In OCP 4.3, the same as in OCP 3.11, I think that I can change event-ttl by editing the "kubeapiservers.operator.openshift.io" CR.
# oc get kubeapiservers.operator.openshift.io/cluster -o yaml
apiVersion: operator.openshift.io/v1
kind: KubeAPIServer
metadata:
annotations:
release.openshift.io/create-only: "true"
creationTimestamp: "2020-03-18T23:46:56Z"
generation: 3
name: cluster
resourceVersion: "428767"
selfLink: /apis/operator.openshift.io/v1/kubeapiservers/cluster
uid: 927aefc8-04e6-437b-9965-5404dd5da7e8
spec:
logLevel: ""
managementState: Managed
observedConfig:
apiServerArguments:
feature-gates:
- RotateKubeletServerCertificate=true
- SupportPodPidsLimit=true
- NodeDisruptionExclusion=true
- ServiceNodeExclusion=true
- SCTPSupport=true
- LegacyNodeRoleBehavior=false
Is my understanding right?
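As far as I know, observedConfig is overwritten by the operator, so the only way to push an arbitrary argument such as event-ttl through this operator is the unsupportedConfigOverrides stanza on that same resource, which is explicitly unsupported; a sketch:
# Override event-ttl via the operator CR (unsupported; the operator will roll out a new revision).
oc patch kubeapiserver cluster --type merge \
  -p '{"spec":{"unsupportedConfigOverrides":{"apiServerArguments":{"event-ttl":["15m"]}}}}'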
I have a deployment of OpenShift 4.3 and I want to use Service Account Token Volume Projection; however, to do this I have to pass these flags to the API server:
--service-account-issuer
--service-account-signing-key-file
--service-account-api-audiences
How can this be accomplished? Is it by editing the kubeapiservers.operator.openshift.io/cluster resource? If so, how? Looking at the resource and reading what documentation was available, I was not able to figure it out.
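I don't believe 4.3 exposes a supported knob for these flags; newer releases surface the issuer on the cluster-scoped Authentication config (a hedged sketch below; the value is an example), and otherwise you are left with the unsupportedConfigOverrides approach mentioned in the event-ttl question above:
# Set the service account token issuer on newer releases (example value).
oc patch authentication cluster --type merge -p '{"spec":{"serviceAccountIssuer":"https://issuer.example.com"}}'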