
Comments (7)

kairen commented on August 26, 2024

The hosts part refers to this command:

$ cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.96.0.1,172.16.35.12,127.0.0.1,kubernetes.default \
  -profile=kubernetes \
  apiserver-csr.json | cfssljson -bare apiserver

The -hostname argument is the equivalent of hosts in the JSON.
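For reference, the same SANs expressed as the hosts field of apiserver-csr.json would look roughly like this (a sketch; the surrounding fields follow the usual cfssl CSR layout and may differ from the guide's actual file):

```json
{
  "CN": "kube-apiserver",
  "hosts": [
    "10.96.0.1",
    "172.16.35.12",
    "127.0.0.1",
    "kubernetes.default"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
```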

from kairen.github.io.

idevz commented on August 26, 2024

Thanks a lot. Also, @kairen, in your guide (https://kairen.github.io/2017/10/27/kubernetes/deploy/manual-v1.8/) I could not find where the kube-apiserver binary gets deployed; I only see the certificate generation. Why is that?

kairen commented on August 26, 2024

I use static pods. You will see that the guide downloads an apiserver.yml file and places it under /etc/kubernetes/manifests; once kubelet starts, it creates a Pod that runs kube-apiserver.
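The flow described above can be sketched in plain shell. This is only an illustration: the real directory is /etc/kubernetes/manifests (what kubelet's --pod-manifest-path points at), parameterized here so the sketch runs unprivileged, and the placeholder manifest stands in for the guide's downloaded apiserver.yml.

```shell
#!/bin/sh
# Static-pod flow: drop a Pod manifest into the directory kubelet watches;
# kubelet then creates and supervises the kube-apiserver Pod itself.
# The guide uses /etc/kubernetes/manifests; parameterized here for illustration.
MANIFEST_DIR="${MANIFEST_DIR:-./manifests}"
mkdir -p "$MANIFEST_DIR"
# Stand-in for the apiserver.yml the guide downloads.
[ -f apiserver.yml ] || printf 'apiVersion: v1\nkind: Pod\n' > apiserver.yml
cp apiserver.yml "$MANIFEST_DIR/apiserver.yml"
echo "manifest installed: $MANIFEST_DIR/apiserver.yml"
```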

idevz commented on August 26, 2024

Do you mean this part of the configuration in /etc/systemd/system/kubelet.service.d/10-kubelet.conf?

Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --anonymous-auth=false"

idevz commented on August 26, 2024

Hi @kairen, I retried and it still does not work. Could you help me figure out what the problem is?

Dec 14 13:58:15 z systemd: Started kubelet: The Kubernetes Node Agent.
Dec 14 13:58:15 z systemd: Starting kubelet: The Kubernetes Node Agent...
Dec 14 13:58:15 z kubelet: I1214 13:58:15.795363   22115 feature_gate.go:156] feature gates: map[]
Dec 14 13:58:15 z kubelet: I1214 13:58:15.795429   22115 controller.go:114] kubelet config controller: starting controller
Dec 14 13:58:15 z kubelet: I1214 13:58:15.795432   22115 controller.go:118] kubelet config controller: validating combination of defaults and flags
Dec 14 13:58:15 z systemd: Started Kubernetes systemd probe.
Dec 14 13:58:15 z systemd: Starting Kubernetes systemd probe.
Dec 14 13:58:15 z kubelet: I1214 13:58:15.798482   22115 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Dec 14 13:58:15 z kubelet: I1214 13:58:15.798515   22115 client.go:95] Start docker client with request timeout=2m0s
Dec 14 13:58:15 z kubelet: I1214 13:58:15.802559   22115 feature_gate.go:156] feature gates: map[]
Dec 14 13:58:15 z kubelet: W1214 13:58:15.802661   22115 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Dec 14 13:58:15 z kubelet: I1214 13:58:15.813580   22115 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Dec 14 13:58:15 z kubelet: W1214 13:58:15.823768   22115 manager.go:157] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
Dec 14 13:58:15 z kubelet: W1214 13:58:15.823849   22115 manager.go:166] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: no such file or directory
Dec 14 13:58:15 z kubelet: I1214 13:58:15.832615   22115 fs.go:139] Filesystem UUIDs: map[e34bb25e-b67f-4141-ac30-06d0e0730ddd:/dev/dm-2 e877f2a9-7020-4aa3-a6e5-dba861b04d30:/dev/sda1 258c25fe-fdef-4717-bb2d-2bed1afece2c:/dev/dm-0 b64cddad-5f6d-4f38-af82-345cc3ed36c5:/dev/dm-1]
Dec 14 13:58:15 z kubelet: I1214 13:58:15.832639   22115 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/mapper/VolGroup-lv_root:{mountpoint:/var/lib/docker/overlay major:253 minor:0 fsType:ext4 blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:ext4 blockSize:0} /dev/mapper/VolGroup-lv_home:{mountpoint:/home major:253 minor:2 fsType:ext4 blockSize:0}]
Dec 14 13:58:15 z kubelet: I1214 13:58:15.833717   22115 manager.go:216] Machine: {NumCores:4 CpuFrequency:2904000 MemoryCapacity:1925455872 HugePages:[{PageSize:2048 NumPages:0}] MachineID:c667b86a272242cfa90809f75d5bb700 SystemUUID:C667B86A-2722-42CF-A908-09F75D5BB700 BootID:ce571e96-6100-4f78-b514-5c0c7413b1b7 Filesystems:[{Device:/dev/mapper/VolGroup-lv_home DeviceMajor:253 DeviceMinor:2 Capacity:11999027200 Type:vfs Inodes:753664 HasInodes:true} {Device:tmpfs DeviceMajor:0 DeviceMinor:17 Capacity:962727936 Type:vfs Inodes:235041 HasInodes:true} {Device:/dev/mapper/VolGroup-lv_root DeviceMajor:253 DeviceMinor:0 Capacity:52710309888 Type:vfs Inodes:3276800 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:499337216 Type:vfs Inodes:128016 HasInodes:true}] DiskMap:map[253:0:{Name:dm-0 Major:253 Minor:0 Size:53687091200 Scheduler:none} 253:1:{Name:dm-1 Major:253 Minor:1 Size:2113929216 Scheduler:none} 253:2:{Name:dm-2 Major:253 Minor:2 Size:12327059456 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:68719476736 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:00:1c:42:cd:ab:56 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:2147000320 Cores:[{Id:0 Threads:[0] Caches:[]} {Id:1 Threads:[1] Caches:[]} {Id:2 Threads:[2] Caches:[]} {Id:3 Threads:[3] Caches:[]}] Caches:[{Size:8388608 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 14 13:58:15 z kubelet: I1214 13:58:15.834521   22115 manager.go:222] Version: {KernelVersion:3.10.0-327.22.2.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:17.11.0-ce DockerAPIVersion:1.34 CadvisorVersion: CadvisorRevision:}
Dec 14 13:58:15 z kubelet: I1214 13:58:15.835090   22115 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 14 13:58:15 z kubelet: I1214 13:58:15.836398   22115 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: /
Dec 14 13:58:15 z kubelet: I1214 13:58:15.836428   22115 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
Dec 14 13:58:15 z kubelet: I1214 13:58:15.836522   22115 container_manager_linux.go:288] Creating device plugin handler: false
Dec 14 13:58:15 z kubelet: I1214 13:58:15.836604   22115 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
Dec 14 13:58:15 z kubelet: I1214 13:58:15.836660   22115 kubelet.go:283] Watching apiserver
Dec 14 13:58:15 z kubelet: E1214 13:58:15.838967   22115 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.211.55.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dz.shared&resourceVersion=0: dial tcp 10.211.55.3:6443: getsockopt: connection refused

The guide doesn't configure or deploy Rkt or CRI-O anywhere before starting the master, yet the log complains:
unable to connect to Rkt api service / unable to connect to CRI-O api service
Also, after Adding manifest file: /etc/kubernetes/manifests, the API server never starts; port 6443 isn't listening, so every request is refused. Here is my /etc/kubernetes/manifests/apiserver.yml:

[/home/z]> cat /etc/kubernetes/manifests/apiserver.yml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.8.2
    command:
      - kube-apiserver
      - --v=0
      - --logtostderr=true
      - --allow-privileged=true
      - --bind-address=0.0.0.0
      - --secure-port=6443
      - --insecure-port=0
      - --advertise-address=10.211.55.3
      - --service-cluster-ip-range=10.96.0.0/12
      - --service-node-port-range=30000-32767
      - --etcd-servers=https://10.211.55.3:2379
      - --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem
      - --etcd-certfile=/etc/etcd/ssl/etcd.pem
      - --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem
      - --client-ca-file=/etc/kubernetes/pki/ca.pem
      - --tls-cert-file=/etc/kubernetes/pki/apiserver.pem
      - --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
      - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem
      - --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem
      - --service-account-key-file=/etc/kubernetes/pki/sa.pub
      - --token-auth-file=/etc/kubernetes/token.csv
      - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
      - --authorization-mode=Node,RBAC
      - --enable-bootstrap-token-auth=true
      - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
      - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem
      - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem
      - --requestheader-allowed-names=aggregator
      - --requestheader-group-headers=X-Remote-Group
      - --requestheader-extra-headers-prefix=X-Remote-Extra-
      - --requestheader-username-headers=X-Remote-User
      - --audit-log-maxage=30
      - --audit-log-maxbackup=3
      - --audit-log-maxsize=100
      - --audit-log-path=/var/log/kubernetes/audit.log
      - --audit-policy-file=/etc/kubernetes/audit-policy.yml
      - --experimental-encryption-provider-config=/etc/kubernetes/encryption.yml
      - --event-ttl=1h
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /var/log/kubernetes
      name: k8s-audit-log
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/encryption.yml
      name: encryption-config
      readOnly: true
    - mountPath: /etc/kubernetes/audit-policy.yml
      name: audit-config
      readOnly: true
    - mountPath: /etc/kubernetes/token.csv
      name: token-csv
      readOnly: true
    - mountPath: /etc/etcd/ssl
      name: etcd-ca-certs
      readOnly: true
  volumes:
  - hostPath:
      path: /var/log/kubernetes
      type: DirectoryOrCreate
    name: k8s-audit-log
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/encryption.yml
      type: FileOrCreate
    name: encryption-config
  - hostPath:
      path: /etc/kubernetes/audit-policy.yml
      type: FileOrCreate
    name: audit-config
  - hostPath:
      path: /etc/kubernetes/token.csv
      type: FileOrCreate
    name: token-csv
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/etcd/ssl
      type: DirectoryOrCreate
    name: etcd-ca-certs

One last error remains: error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
I can see this setting is already in your /etc/systemd/system/kubelet.service.d/10-kubelet.conf, so why doesn't it take effect? Thanks a lot.
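On the swap error: kubelet v1.8 refuses to start while swap is active unless --fail-swap-on=false is passed, so the usual fix is to disable swap on the node. A sketch, demonstrated on a sample fstab so the sed expression can be seen working (on a real node the target is /etc/fstab and swapoff -a needs root):

```shell
#!/bin/sh
# On a real node the fix is:
#   swapoff -a                               # turn swap off now (needs root)
#   sed -i.bak '/ swap /s/^/#/' /etc/fstab   # keep it off after a reboot
# Demonstrated below on a sample fstab; FSTAB is parameterized for safety.
FSTAB="${FSTAB:-./fstab.demo}"
printf '%s\n' \
  '/dev/mapper/VolGroup-lv_root /     ext4 defaults 0 1' \
  '/dev/mapper/VolGroup-lv_swap swap  swap defaults 0 0' > "$FSTAB"
# Comment out every line that mounts a swap device.
sed -i.bak '/[[:space:]]swap[[:space:]]/s/^/#/' "$FSTAB"
cat "$FSTAB"
```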

idevz commented on August 26, 2024

With static pods, does kubelet have to pull the images directly? If the network is bad, will the pull simply fail? Where is that timeout configured? Do I add --runtime-request-timeout to kubelet?

Dec 14 15:25:48 z dockerd: time="2017-12-14T15:25:48.939028761+08:00" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 14 15:25:48 z dockerd: time="2017-12-14T15:25:48.939144583+08:00" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 14 15:25:48 z dockerd: time="2017-12-14T15:25:48.939333166+08:00" level=error msg="Handler for POST /v1.31/images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 14 15:25:48 z dockerd: time="2017-12-14T15:25:48.941181792+08:00" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 14 15:25:48 z dockerd: time="2017-12-14T15:25:48.941231406+08:00" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 14 15:25:48 z dockerd: time="2017-12-14T15:25:48.941282481+08:00" level=error msg="Handler for POST /v1.31/images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 14 15:25:48 z dockerd: time="2017-12-14T15:25:48.944545613+08:00" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 14 15:25:48 z dockerd: time="2017-12-14T15:25:48.944599666+08:00" level=info msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 14 15:25:48 z dockerd: time="2017-12-14T15:25:48.944627341+08:00" level=error msg="Handler for POST /v1.31/images/create returned error: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
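For what it's worth, the pull itself is performed by dockerd on kubelet's behalf, so --runtime-request-timeout (which bounds docker/CRI API calls) is not the knob for slow downloads; the kubelet flag aimed at stalled pulls in this era is --image-pull-progress-deadline. A sketch of a drop-in in the style of 10-kubelet.conf (the variable name and the 10m value are illustrative, not from the guide):

```ini
[Service]
Environment="KUBELET_EXTRA_ARGS=--image-pull-progress-deadline=10m"
```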

idevz commented on August 26, 2024

Hi @kairen, after a few days of tinkering it finally runs. The results:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State	PID/Program name
tcp        0	  0 127.0.0.1:10248         0.0.0.0:*               LISTEN	2560/kubelet
tcp        0	  0 127.0.0.1:10251         0.0.0.0:*               LISTEN	2877/kube-scheduler
tcp        0	  0 127.0.0.1:10252         0.0.0.0:*               LISTEN	2828/kube-controlle
tcp        0	  0 0.0.0.0:22              0.0.0.0:*               LISTEN	1260/sshd
tcp        0	  0 127.0.0.1:8118          0.0.0.0:*               LISTEN	1258/privoxy
tcp        0	  0 127.0.0.1:25            0.0.0.0:*               LISTEN	1821/master
tcp6	   0	  0 :::10250                :::*                    LISTEN	2560/kubelet
tcp6	   0	  0 :::6443                 :::*                    LISTEN	2872/kube-apiserver
tcp6	   0	  0 :::2379                 :::*                    LISTEN	1261/etcd
tcp6	   0	  0 :::2380                 :::*                    LISTEN	1261/etcd
tcp6	   0	  0 :::10255                :::*                    LISTEN	2560/kubelet
tcp6	   0	  0 :::22                   :::*                    LISTEN	1260/sshd
tcp6	   0	  0 ::1:25                  :::*                    LISTEN	1821/master
tcp6	   0	  0 :::4194                 :::*                    LISTEN	2560/kubelet

However, when I run commands to verify the cluster it still fails; everything returns Unable to connect to the server: Forwarding failure

┌[root@master1] [/dev/pts/2]
└[/home/z]> kubectl get csr
Unable to connect to the server: Forwarding failure
┌[root@master1] [/dev/pts/2] [1]
└[/home/z]> hostname
master1
┌[root@master1] [/dev/pts/2]
└[/home/z]> kubectl get node
Unable to connect to the server: Forwarding failure
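"Unable to connect to the server: Forwarding failure" reads like a proxy error rather than an apiserver one; note privoxy listening on 127.0.0.1:8118 in the netstat output above. If http_proxy/https_proxy are exported in the shell, kubectl routes its apiserver traffic through that proxy. A sketch of the usual exemption (addresses taken from this thread; adjust as needed):

```shell
# Exempt local and cluster addresses from the proxy so kubectl reaches
# 10.211.55.3:6443 directly instead of via privoxy on 127.0.0.1:8118.
NO_PROXY="127.0.0.1,localhost,10.211.55.3,10.96.0.1"
no_proxy="$NO_PROXY"
export NO_PROXY no_proxy
# then retry, e.g.: kubectl get node
echo "$NO_PROXY"
```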
