
aliyuncontainerservice / minikube


This project is a fork of kubernetes/minikube.


Great news: the official minikube now fully supports users in mainland China, including full support for Addon components. See https://yq.aliyun.com/articles/221687 or https://github.com/AliyunContainerService/minikube/wiki for details. The latest supported release is minikube v1.24.0.
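As a minimal sketch of what this looks like in practice, the start command below combines the mainland-China mirror flags that appear in the issue reports later on this page; the registry-mirror URL is a placeholder assumption, so check the wiki linked above for the current instructions.

# Sketch: start minikube using the mainland-China image mirrors.
# The registry-mirror URL below is an illustrative placeholder, not a real endpoint.
minikube start \
  --image-mirror-country=cn \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --registry-mirror=https://<your-id>.mirror.aliyuncs.com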

License: Apache License 2.0


minikube's Introduction

minikube



minikube implements a local Kubernetes cluster on macOS, Linux, and Windows. minikube's primary goals are to be the best tool for local Kubernetes application development and to support all Kubernetes features that fit.


Features

minikube runs the latest stable release of Kubernetes, with support for standard Kubernetes features as well as a range of developer-friendly features.

For more information, see the official minikube website.

Installation

See the Getting Started Guide
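For users in mainland China, a download sketch that mirrors the curl command reported in one of the issues below; the exact file name for your platform and the version number are assumptions, so confirm them on the wiki or the releases page.

# Sketch: fetch a minikube binary from the Aliyun OSS mirror (pattern taken from an
# issue below; the linux-amd64 file name and version are assumptions -- adjust as needed).
curl -Lo minikube https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.18.1/minikube-linux-amd64 \
  && chmod +x minikube \
  && sudo mv minikube /usr/local/bin/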

📣 Please fill out our fast 5-question survey so that we can learn how & why you use minikube, and what improvements we should make. Thank you! 👯

Documentation

See https://minikube.sigs.k8s.io/docs/

More Examples

See minikube in action here

Community

minikube is a Kubernetes #sig-cluster-lifecycle project.

Join our meetings:


minikube's Issues

failed to get driver ip: getting IP: IPs output should only be one line, got 2 lines

The exact command to reproduce the issue: minikube start

The full output of the failed command:


minikube v1.11.0 on Ubuntu 18.04
✨ Using the docker driver based on user configuration
❗ None of the known repositories in your location are accessible. Using registry.cn-hangzhou.aliyuncs.com/google_containers as fallback.
✅ Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
✋ Stopping "minikube" in docker ...
🛑 Powering off "minikube" via SSH ...
🔥 Deleting "minikube" in docker ...
🤦 StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: IPs output should only be one line, got 2 lines
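A diagnostic sketch, not part of the original report: the error above says the docker driver found two IP lines, and listing every network attached to the minikube container would show where the second address comes from (this assumes the container still exists at the time you run it).

# Diagnostic sketch: print one IP per network attached to the "minikube" container.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{"\n"}}{{end}}' minikube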

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Mon 2020-06-15 04:22:48 UTC, end at Mon 2020-06-15 04:23:49 UTC. --
Jun 15 04:22:52 minikube systemd[1]: Starting Docker Application Container Engine...
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.479610157Z" level=info msg="Starting up"
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.481400254Z" level=info msg="parsed scheme: "unix"" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.481423637Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.481442160Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.481449800Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.481524881Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006a5340, CONNECTING" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.481527821Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.502535281Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006a5340, READY" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.503544145Z" level=info msg="parsed scheme: "unix"" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.503562072Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.503578074Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.503589986Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.503630332Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0001f6c60, CONNECTING" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.503648271Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.503803884Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0001f6c60, READY" module=grpc
Jun 15 04:22:52 minikube dockerd[119]: time="2020-06-15T04:22:52.507080154Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jun 15 04:22:54 minikube dockerd[119]: time="2020-06-15T04:22:54.923696821Z" level=warning msg="Your kernel does not support swap memory limit"
Jun 15 04:22:54 minikube dockerd[119]: time="2020-06-15T04:22:54.923835251Z" level=warning msg="Your kernel does not support cgroup rt period"
Jun 15 04:22:54 minikube dockerd[119]: time="2020-06-15T04:22:54.923911359Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Jun 15 04:22:54 minikube dockerd[119]: time="2020-06-15T04:22:54.923940905Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jun 15 04:22:54 minikube dockerd[119]: time="2020-06-15T04:22:54.923969297Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jun 15 04:22:54 minikube dockerd[119]: time="2020-06-15T04:22:54.925252855Z" level=info msg="Loading containers: start."
Jun 15 04:23:01 minikube dockerd[119]: time="2020-06-15T04:23:01.067117896Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jun 15 04:23:03 minikube dockerd[119]: time="2020-06-15T04:23:03.716856997Z" level=info msg="Loading containers: done."
Jun 15 04:23:16 minikube dockerd[119]: time="2020-06-15T04:23:16.203194565Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Jun 15 04:23:16 minikube dockerd[119]: time="2020-06-15T04:23:16.203471130Z" level=info msg="Daemon has completed initialization"
Jun 15 04:23:17 minikube dockerd[119]: time="2020-06-15T04:23:17.202335023Z" level=info msg="API listen on /run/docker.sock"
Jun 15 04:23:17 minikube systemd[1]: Started Docker Application Container Engine.

==> container status <==
time="2020-06-15T04:23:52Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

==> describe nodes <==
E0615 12:23:52.449283 13718 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
error: stat /var/lib/minikube/kubeconfig: no such file or directory
output: "\n** stderr ** \nerror: stat /var/lib/minikube/kubeconfig: no such file or directory\n\n** /stderr **"

==> dmesg <==
[Jun15 04:08] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ +0.000000] #5 #6 #7
[ +0.003959] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[ +0.000000] mtrr: your CPUs had inconsistent variable MTRR settings
[ +0.718671] i8042: Warning: Keylock active
[ +0.004538] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 1
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 2
[ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 3
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 4
[ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
[ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 7
[ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8
[ +0.176849] acpi PNP0C14:03: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:02)
[ +0.000071] wmi_bus wmi_bus-PNP0C14:04: WQBC data block query control method not found
[ +0.000002] acpi PNP0C14:04: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:02)
[ +0.006855] acpi PNP0C14:05: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:02)
[ +0.005267] acpi PNP0C14:06: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:02)
[ +0.004346] ahci 0000:00:17.0: Found 1 remapped NVMe devices.
[ +0.000001] ahci 0000:00:17.0: Switch your BIOS from RAID to AHCI mode to use them.
[ +0.002037] r8169 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
[Jun15 04:09] thermal thermal_zone6: failed to read out thermal zone (-61)
[ +0.573116] nvidia: loading out-of-tree module taints kernel.
[ +0.000006] nvidia: module license 'NVIDIA' taints kernel.
[ +0.000000] Disabling lock debugging due to kernel taint
[ +0.005655] i2c_hid i2c-DELL089D:00: i2c-DELL089D:00 supply vdd not found, using dummy regulator
[ +0.000024] i2c_hid i2c-DELL089D:00: i2c-DELL089D:00 supply vddl not found, using dummy regulator
[ +0.110000] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 435.21 Sun Aug 25 08:17:57 CDT 2019
[ +0.272972] uvcvideo 1-6:1.0: Entity type for entity Extension 4 was not initialized!
[ +0.000001] uvcvideo 1-6:1.0: Entity type for entity Extension 3 was not initialized!
[ +0.000001] uvcvideo 1-6:1.0: Entity type for entity Processing 2 was not initialized!
[ +0.000001] uvcvideo 1-6:1.0: Entity type for entity Camera 1 was not initialized!
[ +1.102435] ACPI Warning: _SB.PCI0.RP05.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20190703/nsarguments-66)
[ +33.431015] Started bpfilter
[ +0.766608] iwlwifi 0000:00:14.3: FW already configured (0) - re-configuring
[ +3.460577] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.009419] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.009629] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.010095] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.010279] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.010206] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.009767] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.010013] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.010221] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.009902] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +0.009951] iwlwifi 0000:00:14.3: Unhandled alg: 0x707
[ +8.892562] kauditd_printk_skb: 53 callbacks suppressed

==> kernel <==
04:23:52 up 14 min, 0 users, load average: 1.02, 1.03, 0.83
Linux minikube 5.3.0-59-generic #53~18.04.1-Ubuntu SMP Thu Jun 4 14:58:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kubelet <==
-- Logs begin at Mon 2020-06-15 04:22:48 UTC, end at Mon 2020-06-15 04:23:52 UTC. --
-- No entries --

❗ unable to fetch logs for: describe nodes

The operating system version being used: Ubuntu 18.04

minikube returns a 404

The exact command to reproduce the issue:
Download: curl -Lo minikube https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.18.1/minikube-linux-arm64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Install and start:
sudo minikube start --image-mirror-country=cn --registry-mirror=https://kaakiyao.mirror.aliyuncs.com --kubernetes-version=1.18.1 --vm-driver=none

The full output of the failed command:


The checksum URL returns a 404 directly:
Exiting due to K8S_INSTALL_FAILED: updating control plane: downloading binaries: downloading kubelet: download failed: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubelet?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubelet?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubelet.sha256 Dst:/root/.minikube/cache/linux/v1.18.1/kubelet.download Pwd: Mode:2 Umask:---------- Detectors:[0x26bc460 0x26bc460 0x26bc460 0x26bc460 0x26bc460 0x26bc460 0x26bc460] Decompressors:map[bz2:0x26bc460 gz:0x26bc460 tar.bz2:0x26bc460 tar.gz:0x26bc460 tar.xz:0x26bc460 tbz2:0x26bc460 tgz:0x26bc460 txz:0x26bc460 xz:0x26bc460 zip:0x26bc460] Getters:map[file:0x4001074060 http:0x4001038000 https:0x4001038020] Dir:false ProgressListener:0x2681b10 Options:[0xa3cba0]}: invalid checksum: Error downloading checksum file: bad response code: 404
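To confirm the 404 independently of minikube, the checksum URL taken verbatim from the error above can be requested directly; a "404 Not Found" response would indicate the mirror does not host this arm64 build (a diagnostic sketch, not from the original report).

# Diagnostic sketch: fetch only the response headers of the checksum file named in the error.
curl -I https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.18.1/bin/linux/arm64/kubelet.sha256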

The output of the minikube logs command:


The operating system version being used:

Exiting due to GUEST_START: wait: /bin/bash

The exact command to reproduce the issue: minikube start --image-mirror-country cn

The full output of the failed command:


😄 minikube v1.18.1 on Darwin 11.4
✨ Automatically selected the docker driver. Other choices: virtualbox, ssh
✅ Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=1988MB) ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 139 from signal SEGV
stdout:

stderr:
qemu: uncaught target signal 11 (Segmentation fault) - core dumped

💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 139 from signal SEGV
stdout:

stderr:
SIGSEGV: segmentation violation
PC=0x0 m=0 sigcode=0

goroutine 1 [running]:
qemu: uncaught target signal 11 (Segmentation fault) - core dumped

😿 minikube is exiting due to an error. If the above information is not helpful, please file an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose

❌ Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 139 from signal SEGV
stdout:

stderr:
SIGSEGV: segmentation violation
PC=0x0 m=0 sigcode=0

goroutine 1 [running]:
qemu: uncaught target signal 11 (Segmentation fault) - core dumped

😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose
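The qemu segmentation fault above usually points to an architecture mismatch, i.e. an amd64 kubeadm binary being emulated on the arm64 (M1) node. A hedged sanity check, not part of the original report, is to confirm what architecture the node and the Docker server actually report:

# Hedged diagnostic: an x86_64/amd64 result here would explain why kubeadm
# only runs under qemu emulation and then segfaults on this Apple M1 machine.
minikube ssh "uname -m"
docker version --format '{{.Server.Arch}}'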

The output of the minikube logs command:

+ configure_containerd

++ stat -f -c %T /kind

  • [[ overlayfs == \z\f\s ]]

  • configure_proxy

  • mkdir -p /etc/systemd/system.conf.d/

  • [[ ! -z '' ]]

  • cat

  • fix_kmsg

  • [[ ! -e /dev/kmsg ]]

  • fix_mount

  • echo 'INFO: ensuring we can execute mount/umount even with userns-remap'

INFO: ensuring we can execute mount/umount even with userns-remap

++ which mount

++ which umount

  • chown root:root /usr/bin/mount /usr/bin/umount

++ which mount

++ which umount

  • chmod -s /usr/bin/mount /usr/bin/umount

+++ which mount

++ stat -f -c %T /usr/bin/mount

  • [[ overlayfs == \a\u\f\s ]]

  • echo 'INFO: remounting /sys read-only'

INFO: remounting /sys read-only

  • mount -o remount,ro /sys

  • echo 'INFO: making mounts shared'

INFO: making mounts shared

  • mount --make-rshared /

  • retryable_fix_cgroup

++ seq 0 10

  • for i in $(seq 0 10)

  • fix_cgroup

  • [[ -f /sys/fs/cgroup/cgroup.controllers ]]

  • echo 'INFO: detected cgroup v1'

INFO: detected cgroup v1

  • echo 'INFO: fix cgroup mounts for all subsystems'

INFO: fix cgroup mounts for all subsystems

  • local current_cgroup

++ grep systemd /proc/self/cgroup

++ cut -d: -f3

  • current_cgroup=/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • local cgroup_subsystems

++ findmnt -lun -o source,target -t cgroup

++ grep /docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

++ awk '{print $2}'

  • cgroup_subsystems='/sys/fs/cgroup/cpuset

/sys/fs/cgroup/cpu

/sys/fs/cgroup/cpuacct

/sys/fs/cgroup/blkio

/sys/fs/cgroup/memory

/sys/fs/cgroup/devices

/sys/fs/cgroup/freezer

/sys/fs/cgroup/net_cls

/sys/fs/cgroup/perf_event

/sys/fs/cgroup/net_prio

/sys/fs/cgroup/hugetlb

/sys/fs/cgroup/pids

/sys/fs/cgroup/systemd'

  • local cgroup_mounts

++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo

  • cgroup_mounts='/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:167 master:19 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:168 master:20 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:169 master:21 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:170 master:22 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:171 master:23 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:172 master:24 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:173 master:25 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:174 master:26 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:175 master:27 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:176 master:28 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:177 master:29 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:178 master:30 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:180 master:32 - cgroup cgroup'

  • [[ -n /docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:167 master:19 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:168 master:20 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:169 master:21 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:170 master:22 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:171 master:23 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:172 master:24 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:173 master:25 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:174 master:26 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:175 master:27 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:176 master:28 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:177 master:29 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:178 master:30 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:180 master:32 - cgroup cgroup ]]

  • local mount_root

++ echo '/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:167 master:19 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:168 master:20 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:169 master:21 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:170 master:22 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:171 master:23 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:172 master:24 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:173 master:25 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:174 master:26 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:175 master:27 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:176 master:28 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:177 master:29 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:178 master:30 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:180 master:32 - cgroup cgroup'

++ cut '-d ' -f1

++ head -n 1

  • mount_root=/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

++ echo '/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:167 master:19 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:168 master:20 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:169 master:21 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:170 master:22 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:171 master:23 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:172 master:24 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:173 master:25 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:174 master:26 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:175 master:27 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:176 master:28 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:177 master:29 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:178 master:30 - cgroup

/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:180 master:32 - cgroup cgroup'

++ cut '-d ' -f 2

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/cpuset/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/cpuset/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/cpuset/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/cpu/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/cpu/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/cpu/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/cpu /sys/fs/cgroup/cpu/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/cpuacct/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/cpuacct/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/cpuacct/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/cpuacct /sys/fs/cgroup/cpuacct/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/blkio/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/blkio/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/blkio/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/memory/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/memory/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/memory/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/devices/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/devices/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/devices/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/freezer/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/freezer/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/freezer/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/net_cls/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/net_cls/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/net_cls/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/net_cls /sys/fs/cgroup/net_cls/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/perf_event/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/perf_event/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/perf_event/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/net_prio/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/net_prio/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/net_prio/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/net_prio /sys/fs/cgroup/net_prio/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/hugetlb/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/hugetlb/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/hugetlb/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/pids/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/pids/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/pids/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)

  • local target=/sys/fs/cgroup/systemd/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • findmnt /sys/fs/cgroup/systemd/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mkdir -p /sys/fs/cgroup/systemd/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/a1db410eeecf3fd790ef7460f2facd744e2477b1eece50402e7bc9c1e289c40a

  • mount --make-rprivate /sys/fs/cgroup

  • echo '/sys/fs/cgroup/cpuset

/sys/fs/cgroup/cpu

/sys/fs/cgroup/cpuacct

/sys/fs/cgroup/blkio

/sys/fs/cgroup/memory

/sys/fs/cgroup/devices

/sys/fs/cgroup/freezer

/sys/fs/cgroup/net_cls

/sys/fs/cgroup/perf_event

/sys/fs/cgroup/net_prio

/sys/fs/cgroup/hugetlb

/sys/fs/cgroup/pids

/sys/fs/cgroup/systemd'

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/cpuset

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/cpuset//kubelet

  • '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'

  • cat /sys/fs/cgroup/cpuset/cpuset.cpus

  • cat /sys/fs/cgroup/cpuset/cpuset.mems

  • mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/cpu

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/cpu//kubelet

  • '[' /sys/fs/cgroup/cpu == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/cpu//kubelet /sys/fs/cgroup/cpu//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuacct

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/cpuacct

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/cpuacct//kubelet

  • '[' /sys/fs/cgroup/cpuacct == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/cpuacct//kubelet /sys/fs/cgroup/cpuacct//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/blkio

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/blkio//kubelet

  • '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/memory

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/memory//kubelet

  • '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/devices

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/devices//kubelet

  • '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/freezer

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/freezer//kubelet

  • '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/net_cls

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/net_cls//kubelet

  • '[' /sys/fs/cgroup/net_cls == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/net_cls//kubelet /sys/fs/cgroup/net_cls//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/perf_event

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/perf_event//kubelet

  • '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_prio

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/net_prio

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/net_prio//kubelet

  • '[' /sys/fs/cgroup/net_prio == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/net_prio//kubelet /sys/fs/cgroup/net_prio//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/hugetlb

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/hugetlb//kubelet

  • '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/pids

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/pids//kubelet

  • '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet

  • IFS=

  • read -r subsystem

  • mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd

  • local cgroup_root=/kubelet

  • local subsystem=/sys/fs/cgroup/systemd

  • '[' -z /kubelet ']'

  • mkdir -p /sys/fs/cgroup/systemd//kubelet

  • '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'

  • mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet

  • IFS=

  • read -r subsystem

  • return

  • fix_machine_id

  • echo 'INFO: clearing and regenerating /etc/machine-id'

INFO: clearing and regenerating /etc/machine-id

  • rm -f /etc/machine-id

  • systemd-machine-id-setup

Initializing machine ID from D-Bus machine ID.

  • fix_product_name

  • [[ -f /sys/class/dmi/id/product_name ]]

  • fix_product_uuid

  • [[ ! -f /kind/product_uuid ]]

  • cat /proc/sys/kernel/random/uuid

  • [[ -f /sys/class/dmi/id/product_uuid ]]

  • [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]

  • select_iptables

  • local mode=nft

++ wc -l

++ grep '^-'

  • num_legacy_lines=6

  • '[' 6 -ge 10 ']'

++ grep '^-'

++ wc -l

++ true

  • num_nft_lines=0

  • '[' 6 -ge 0 ']'

  • mode=legacy

  • echo 'INFO: setting iptables to detected mode: legacy'

INFO: setting iptables to detected mode: legacy

  • update-alternatives --set iptables /usr/sbin/iptables-legacy

  • echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'

  • local 'args=--set iptables /usr/sbin/iptables-legacy'

++ seq 0 15

  • for i in $(seq 0 15)

  • /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy

  • return

  • update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

  • echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'

  • local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'

++ seq 0 15

  • for i in $(seq 0 15)

  • /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

  • return

  • enable_network_magic

  • local docker_embedded_dns_ip=127.0.0.11

  • local docker_host_ip

++ getent ahostsv4 host.docker.internal

++ cut '-d ' -f1

++ head -n1

  • docker_host_ip=192.168.65.2

  • [[ -z 192.168.65.2 ]]

  • iptables-restore

  • iptables-save

  • sed -e 's/-d 127.0.0.11/-d 192.168.65.2/g' -e 's/-A OUTPUT (.*) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.65.2:53/g'

  • cp /etc/resolv.conf /etc/resolv.conf.original

  • sed -e s/127.0.0.11/192.168.65.2/g /etc/resolv.conf.original

++ head -n1

++ cut '-d ' -f1

+++ hostname

++ getent ahostsv4 minikube

  • curr_ipv4=192.168.49.2

  • echo 'INFO: Detected IPv4 address: 192.168.49.2'

INFO: Detected IPv4 address: 192.168.49.2

  • '[' -f /kind/old-ipv4 ']'

  • [[ -n 192.168.49.2 ]]

  • echo -n 192.168.49.2

++ head -n1

++ cut '-d ' -f1

+++ hostname

++ getent ahostsv6 minikube

++ true

  • curr_ipv6=

  • echo 'INFO: Detected IPv6 address: '

INFO: Detected IPv6 address:

  • '[' -f /kind/old-ipv6 ']'

  • [[ -n '' ]]

++ uname -a

  • echo 'entrypoint completed: Linux minikube 5.10.47-linuxkit #1 SMP PREEMPT Sat Jul 3 21:50:16 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux'

entrypoint completed: Linux minikube 5.10.47-linuxkit #1 SMP PREEMPT Sat Jul 3 21:50:16 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

  • exec /sbin/init

Failed to find module 'autofs4'

systemd 245.4-4ubuntu3.4 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)

Detected virtualization docker.

Detected architecture arm64.

Failed to create symlink /sys/fs/cgroup/cpuacct: File exists

Failed to create symlink /sys/fs/cgroup/cpu: File exists

Failed to create symlink /sys/fs/cgroup/net_prio: File exists

Failed to create symlink /sys/fs/cgroup/net_cls: File exists

Welcome to Ubuntu 20.04.1 LTS!

Set hostname to .

/lib/systemd/system/docker.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.

/lib/systemd/system/dbus.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/dbus/system_bus_socket → /run/dbus/system_bus_socket; please update the unit file accordingly.

[ OK ] Started Dispatch Password …ts to Console Directory Watch.

[UNSUPP] Starting of Arbitrary Exec…Automount Point not supported.

[ OK ] Reached target Local Encrypted Volumes.

[ OK ] Reached target Network is Online.

[ OK ] Reached target Paths.

[ OK ] Reached target Slices.

[ OK ] Reached target Swap.

[ OK ] Listening on Journal Audit Socket.

[ OK ] Listening on Journal Socket (/dev/log).

[ OK ] Listening on Journal Socket.

     Mounting Huge Pages File System...

     Mounting Kernel Debug File System...

     Mounting Kernel Trace File System...

     Starting Journal Service...

     Starting Create list of st…odes for the current kernel...

     Mounting FUSE Control File System...

     Starting Remount Root and Kernel File Systems...

     Starting Apply Kernel Variables...

[ OK ] Mounted Huge Pages File System.

[ OK ] Mounted Kernel Debug File System.

[ OK ] Mounted Kernel Trace File System.

[ OK ] Finished Create list of st… nodes for the current kernel.

[ OK ] Finished Remount Root and Kernel File Systems.

     Starting Create System Users...

     Starting Update UTMP about System Boot/Shutdown...

[ OK ] Mounted FUSE Control File System.

[ OK ] Finished Update UTMP about System Boot/Shutdown.

[ OK ] Started Journal Service.

     Starting Flush Journal to Persistent Storage...

[ OK ] Finished Apply Kernel Variables.

[ OK ] Finished Flush Journal to Persistent Storage.

[ OK ] Finished Create System Users.

     Starting Create Static Device Nodes in /dev...

[ OK ] Finished Create Static Device Nodes in /dev.

[ OK ] Reached target Local File Systems (Pre).

[ OK ] Reached target Local File Systems.

[ OK ] Reached target System Initialization.

[ OK ] Started Daily Cleanup of Temporary Directories.

[ OK ] Reached target Timers.

[ OK ] Listening on D-Bus System Message Bus Socket.

     Starting Docker Socket for the API.

     Starting Podman API Socket.

[ OK ] Listening on Docker Socket for the API.

[ OK ] Listening on Podman API Socket.

[ OK ] Reached target Sockets.

[ OK ] Reached target Basic System.

     Starting containerd container runtime...

[ OK ] Started D-Bus System Message Bus.

     Starting minikube automount...

     Starting OpenBSD Secure Shell server...

[ OK ] Finished minikube automount.

[ OK ] Started OpenBSD Secure Shell server.

[ OK ] Started containerd container runtime.

     Starting Docker Application Container Engine...

[ OK ] Started Docker Application Container Engine.

[ OK ] Reached target Multi-User System.

[ OK ] Reached target Graphical Interface.

     Starting Update UTMP about System Runlevel Changes...

[ OK ] Finished Update UTMP about System Runlevel Changes.

The operating system version being used: macOS Big Sur 11.4, Apple M1 chip

Version 1.11.5 cannot be downloaded

To stay consistent with our production environment, I tried to run a 1.11.5 cluster locally, but it fails because kubeadm v1.11.5 cannot be found.

[Install]
💾  Downloading kubelet v1.11.5
💾  Downloading kubeadm v1.11.5
W0323 23:50:36.197415   28965 exit.go:87] Failed to update cluster: downloading binaries: downloading kubeadm: Error downloading kubeadm v1.11.5: failed to download: failed to download to temp file: download failed: 1 error(s) occurred:

* received invalid status code: 404 (expected 200)
💣  Failed to update cluster: downloading binaries: downloading kubeadm: Error downloading kubeadm v1.11.5: failed to download: failed to download to temp file: download failed: 1 error(s) occurred:

* received invalid status code: 404 (expected 200)

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new

minikube version:

> minikube version
minikube version: v0.35.0

This URL returns a 404

The exact command to reproduce the issue:
./out/minikube start --image-mirror-country cn --iso-url=https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.5.1.iso --registry-mirror=https://xxxx.mirror.aliyuncs.com --kubernetes-version=v1.16.2 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --vm-driver=none --log_dir='/var/log/minikube' --alsologtostderr >/tmp/1.txt 2>&1
The full output of the failed command:

The output of the minikube logs command:

W0701 16:53:10.062060 83159 preload.go:112] http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v3-v1.16.2-docker-overlay2-amd64.tar.lz4 status code: 404

The operating system version being used:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic

minikube config cannot retrieve the cpus and memory settings

The command used to start the k8s cluster with minikube is as follows:
minikube start --driver=docker --cpus=2 --memory='4g' --kubernetes-version=v1.19.8 --docker-env http_proxy=http://127.0.0.1:1081 --docker-env https_proxy=http://127.0.0.1:1081 --docker-env no_proxy=localhost,127.0.0.1,::1,192.168.31.0/24,192.168.99.0/24 --alsologtostderr --registry-mirror="https://jfnjj5zp.mirror.aliyuncs.com"
The exact command to reproduce the issue:
minikube config get cpus && minikube config get memory

The full output of the failed command:


Error: specified key could not be found in config
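If I read the usual minikube behavior correctly (this note is an assumption, not part of the original report), "minikube config get" only returns keys that were previously stored with "minikube config set"; the --cpus and --memory flags passed to "minikube start" are applied to the cluster but are not written into that config file, hence the "specified key could not be found" error. A sketch of the workflow that does populate the config store:

# Sketch, assuming standard minikube config behavior: persist the values first,
# then "config get" can read them back.
minikube config set cpus 2
minikube config set memory 4096
minikube config get cpus     # should now print 2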

The output of the minikube logs command:


minikube-logs.txt

The operating system version being used:
go version go1.15.10 darwin/amd64
macOS mojave 10.14.1
minikube version: v1.17.1

Sorry that minikube crashed. If this was unexpected, we would love to hear from you:

Mac with minikube version: v1.1.0
After running minikube start, the following information is shown:

😄 minikube v1.1.0 on darwin (amd64)
✅ using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
E0612 19:40:14.748221 9489 start.go:529] StartHost: create: precreate: exit status 126

💣 Unable to start VM: create: precreate: exit status 126

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

So what is wrong? Is there any setting that needs to be done on the Mac?

aliyun-v0.35.0: built with docker, runs abnormally

C:\Users\Administrator>minikube.exe start --registry-mirror=https://XXxxx.mirror.aliyuncs.com --vm-driver=hyperv --memory=2048 --hyperv-virtual-switch=net
o minikube v0.35.0 on windows (amd64)

Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@ Downloading Minikube ISO ...
184.42 MB / 184.42 MB [============================================] 100.00% 0s

  • "minikube" IP address is fe80::215:5dff:fe01:6818
  • Configuring Docker as the container runtime ...
  • Preparing Kubernetes environment ...
  • Pulling images required by Kubernetes v1.13.4 ...
    X Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: command failed: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
    stdout:
    stderr: sudo: kubeadm: command not found
    : Process exited with status 1
  • Launching Kubernetes v1.13.4 using kubeadm ...
    ! Error starting cluster: kubeadm init:
    sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI

sudo: /usr/bin/kubeadm: command not found

: Process exited with status 1

  • Sorry that minikube crashed. If this was unexpected, we would love to hear from you:

Windows 10 environment
Steps:
1. git clone
2. Switch to the aliyun-v0.35.0 branch and pull
3. Run docker run --rm -v "D:\minikube":/go/src/k8s.io/minikube -w /go/src/k8s.io/minikube golang:stretch make cross
4. Copy minikube-windows-amd64.exe to a directory on the PATH and rename it to minikube.exe
5. Run minikube.exe start --registry-mirror=https://XXxxx.mirror.aliyuncs.com --vm-driver=hyperv --memory=2048 --hyperv-virtual-switch=net

I am going to try recreating the virtual switch.
Is building with docker simply not supported yet?
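One hedged check that narrows down the "kubeadm: command not found" error above (this suggestion is not part of the original report) is to look inside the VM for the binary that minikube was supposed to place at the path shown in the kubeadm command:

# Hedged diagnostic: verify whether provisioning ever copied kubeadm into the VM;
# the path comes from the "sudo /usr/bin/kubeadm init" line in the output above.
minikube ssh "ls -l /usr/bin/kubeadm"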

minikube start does not start up properly

Steps to reproduce the issue:

1. Installed minikube from the Alibaba Cloud mirror following https://github.com/AliyunContainerService/minikube/wiki
2. minikube start keeps failing with the error below; see the details

Full output of minikube logs command:

initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'

stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

▪ Generating certificates and keys ...
▪ Booting up control plane ...

💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'

stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

╭────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose
│ │
│ Please attach the following file to the GitHub issue: │
│ - /home/hadoop/.minikube/logs/lastStart.txt │
│ │
╰────────────────────────────────────────────────────────────────────╯

❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.

Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'

stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿 Related issue: kubernetes#4172
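
Following that suggestion, a minimal sketch of how the two cgroup drivers might be brought into agreement, assuming Docker is the container runtime on this host and that /etc/docker/daemon.json can be edited (merge the key into any existing settings instead of overwriting them):

    # Point Docker at the systemd cgroup driver
    echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker

    # Recreate the cluster with a matching kubelet setting
    minikube delete
    minikube start --extra-config=kubelet.cgroup-driver=systemd

This is only a sketch of the mismatch fix hinted at by the cgroupfs warning above, not a confirmed resolution for this report.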

Full output of failed command:

aliyun-v0.35.0 on centos7: minikube start fails

I ran into a problem while testing the minikube installation. After building aliyun-v0.35.0, minikube start fails; when it reaches SSH: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml, image pulls still go through https://k8s.gcr.io. The full run is as follows:
[root@bi_data_back_server ~]# rm -rf /root/.minikube
[root@bi_data_back_server ~]# minikube start -p minikube_server_003 --registry-mirror=https://registry.docker-cn.com --vm-driver=virtualbox --memory=2096 --v=3 --alsologtostderr
W0319 14:10:50.512052 15668 root.go:145] Error reading config file at /root/.minikube/config/config.json: open /root/.minikube/config/config.json: no such file or directory
I0319 14:10:50.512980 15668 notify.go:121] Checking for updates...
o minikube v0.35.0 on linux (amd64)
I0319 14:10:51.245030 15668 start.go:582] Saving config:
{
"MachineConfig": {
"MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso",
"Memory": 2096,
"CPUs": 2,
"DiskSize": 20000,
"VMDriver": "virtualbox",
"ContainerRuntime": "docker",
"HyperkitVpnKitSock": "",
"HyperkitVSockPorts": [],
"XhyveDiskDriver": "ahci-hd",
"DockerEnv": null,
"InsecureRegistry": null,
"RegistryMirror": [
"https://registry.docker-cn.com"
],
"HostOnlyCIDR": "192.168.99.1/24",
"HypervVirtualSwitch": "",
"KvmNetwork": "default",
"DockerOpt": null,
"DisableDriverMounts": false,
"NFSShare": [],
"NFSSharesRoot": "/nfsshares",
"UUID": "",
"GPU": false,
"NoVTXCheck": false
},
"KubernetesConfig": {
"KubernetesVersion": "v1.13.4",
"NodeIP": "",
"NodePort": 8443,
"NodeName": "minikube",
"APIServerName": "minikubeCA",
"APIServerNames": null,
"APIServerIPs": null,
"DNSDomain": "cluster.local",
"ContainerRuntime": "docker",
"CRISocket": "",
"NetworkPlugin": "",
"FeatureGates": "",
"ServiceCIDR": "10.96.0.0/12",
"ExtraOptions": null,
"ShouldLoadCachedImages": false,
"EnableDefaultCNI": false
}
}
I0319 14:10:51.246068 15668 cluster.go:70] Machine does not exist... provisioning new machine
I0319 14:10:51.246110 15668 cluster.go:71] Provisioning machine with config: {MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso Memory:2096 CPUs:2 DiskSize:20000 VMDriver:virtualbox ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] XhyveDiskDriver:ahci-hd DockerEnv:[] InsecureRegistry:[] RegistryMirror:[https://registry.docker-cn.com] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KvmNetwork:default Downloader:{} DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: GPU:false NoVTXCheck:false}

Creating virtualbox VM (CPUs=2, Memory=2096MB, Disk=20000MB) ...
@ Downloading Minikube ISO ...
184.42 MB / 184.42 MB [============================================] 100.00% 0s
Creating CA: /root/.minikube/certs/ca.pem
Creating client certificate: /root/.minikube/certs/cert.pem
Downloading /root/.minikube/cache/boot2docker.iso from file:///root/.minikube/cache/iso/minikube-v0.35.0.iso...
Creating VirtualBox VM...
Creating SSH key...
Starting the VM...
Check network to re-create if needed...
Waiting for an IP...
I0319 14:12:11.440965 15668 ssh_runner.go:101] SSH: sudo rm -f /etc/docker/server-key.pem
I0319 14:12:11.446728 15668 ssh_runner.go:101] SSH: sudo mkdir -p /etc/docker
I0319 14:12:11.458184 15668 ssh_runner.go:101] SSH: sudo rm -f /etc/docker/ca.pem
I0319 14:12:11.463265 15668 ssh_runner.go:101] SSH: sudo mkdir -p /etc/docker
I0319 14:12:11.474916 15668 ssh_runner.go:101] SSH: sudo rm -f /etc/docker/server.pem
I0319 14:12:11.479571 15668 ssh_runner.go:101] SSH: sudo mkdir -p /etc/docker
Setting Docker configuration on the remote daemon...

  • "minikube_server_003" IP address is 192.168.99.102
    I0319 14:12:12.010546 15668 start.go:582] Saving config:
    {
    "MachineConfig": {
    "MinikubeISO": "https://storage.googleapis.com/minikube/iso/minikube-v0.35.0.iso",
    "Memory": 2096,
    "CPUs": 2,
    "DiskSize": 20000,
    "VMDriver": "virtualbox",
    "ContainerRuntime": "docker",
    "HyperkitVpnKitSock": "",
    "HyperkitVSockPorts": [],
    "XhyveDiskDriver": "ahci-hd",
    "DockerEnv": null,
    "InsecureRegistry": null,
    "RegistryMirror": [
    "https://registry.docker-cn.com"
    ],
    "HostOnlyCIDR": "192.168.99.1/24",
    "HypervVirtualSwitch": "",
    "KvmNetwork": "default",
    "DockerOpt": null,
    "DisableDriverMounts": false,
    "NFSShare": [],
    "NFSSharesRoot": "/nfsshares",
    "UUID": "",
    "GPU": false,
    "NoVTXCheck": false
    },
    "KubernetesConfig": {
    "KubernetesVersion": "v1.13.4",
    "NodeIP": "192.168.99.102",
    "NodePort": 8443,
    "NodeName": "minikube",
    "APIServerName": "minikubeCA",
    "APIServerNames": null,
    "APIServerIPs": null,
    "DNSDomain": "cluster.local",
    "ContainerRuntime": "docker",
    "CRISocket": "",
    "NetworkPlugin": "",
    "FeatureGates": "",
    "ServiceCIDR": "10.96.0.0/12",
    "ExtraOptions": null,
    "ShouldLoadCachedImages": false,
    "EnableDefaultCNI": false
    }
    }
  • Configuring Docker as the container runtime ...
    I0319 14:12:12.025357 15668 ssh_runner.go:101] SSH: systemctl is-active --quiet service containerd
    I0319 14:12:12.031278 15668 ssh_runner.go:101] SSH: sudo systemctl stop containerd
    I0319 14:12:12.041675 15668 ssh_runner.go:101] SSH: systemctl is-active --quiet service containerd
    I0319 14:12:12.049553 15668 ssh_runner.go:101] SSH: systemctl is-active --quiet service crio
    I0319 14:12:12.054823 15668 ssh_runner.go:101] SSH: sudo systemctl stop crio
    I0319 14:12:12.071620 15668 ssh_runner.go:101] SSH: systemctl is-active --quiet service crio
    I0319 14:12:12.077367 15668 ssh_runner.go:101] SSH: systemctl is-active --quiet service rkt-api
    I0319 14:12:12.083119 15668 ssh_runner.go:101] SSH: sudo systemctl stop rkt-api
    I0319 14:12:12.094244 15668 ssh_runner.go:101] SSH: sudo systemctl stop rkt-metadata
    I0319 14:12:12.105508 15668 ssh_runner.go:101] SSH: systemctl is-active --quiet service rkt-api
    I0319 14:12:12.112980 15668 ssh_runner.go:101] SSH: sudo systemctl restart docker
  • Preparing Kubernetes environment ...
    I0319 14:12:12.465137 15668 kubeadm.go:394] kubelet v1.13.4 config:

[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --allow-privileged=true --cluster-dns=10.96.0.10 --authorization-mode=Webhook --client-ca-file=/var/lib/minikube/certs/ca.crt --container-runtime=docker --fail-swap-on=false --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=minikube --cluster-domain=cluster.local --cgroup-driver=cgroupfs --kubeconfig=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

[Install]
@ Downloading kubelet v1.13.4
@ Downloading kubeadm v1.13.4
I0319 14:12:17.211215 15668 ssh_runner.go:101] SSH: sudo rm -f /usr/bin/kubeadm
I0319 14:12:17.219408 15668 ssh_runner.go:101] SSH: sudo mkdir -p /usr/bin
I0319 14:12:22.611228 15668 ssh_runner.go:101] SSH: sudo rm -f /usr/bin/kubelet
I0319 14:12:22.619981 15668 ssh_runner.go:101] SSH: sudo mkdir -p /usr/bin
I0319 14:12:24.619605 15668 ssh_runner.go:101] SSH: sudo rm -f /lib/systemd/system/kubelet.service
I0319 14:12:24.624250 15668 ssh_runner.go:101] SSH: sudo mkdir -p /lib/systemd/system
I0319 14:12:24.633962 15668 ssh_runner.go:101] SSH: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0319 14:12:24.638796 15668 ssh_runner.go:101] SSH: sudo mkdir -p /etc/systemd/system/kubelet.service.d
I0319 14:12:24.648379 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/kubeadm.yaml
I0319 14:12:24.653603 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib
I0319 14:12:24.665162 15668 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
I0319 14:12:24.669881 15668 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/addons
I0319 14:12:24.679714 15668 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/manifests/addon-manager.yaml
I0319 14:12:24.684295 15668 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/manifests/
I0319 14:12:24.698077 15668 ssh_runner.go:101] SSH: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
I0319 14:12:24.702692 15668 ssh_runner.go:101] SSH: sudo mkdir -p /etc/kubernetes/addons
I0319 14:12:24.712282 15668 ssh_runner.go:101] SSH:
sudo systemctl daemon-reload &&
sudo systemctl enable kubelet &&
sudo systemctl start kubelet
I0319 14:12:24.780915 15668 utils.go:224] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
I0319 14:12:24.854858 15668 certs.go:47] Setting up certificates for IP: 192.168.99.102
I0319 14:12:26.212371 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/ca.crt
I0319 14:12:26.220249 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0319 14:12:26.230756 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/ca.key
I0319 14:12:26.235811 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0319 14:12:26.246103 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/apiserver.crt
I0319 14:12:26.251145 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0319 14:12:26.262103 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/apiserver.key
I0319 14:12:26.266796 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0319 14:12:26.277463 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.crt
I0319 14:12:26.282585 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0319 14:12:26.293006 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client-ca.key
I0319 14:12:26.297587 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0319 14:12:26.309248 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client.crt
I0319 14:12:26.314110 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0319 14:12:26.324054 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/certs/proxy-client.key
I0319 14:12:26.328820 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube/certs/
I0319 14:12:26.339172 15668 ssh_runner.go:101] SSH: sudo rm -f /var/lib/minikube/kubeconfig
I0319 14:12:26.344304 15668 ssh_runner.go:101] SSH: sudo mkdir -p /var/lib/minikube
I0319 14:12:26.704772 15668 config.go:125] Using kubeconfig: /root/.kube/config

  • Pulling images required by Kubernetes v1.13.4 ...
    I0319 14:12:26.706655 15668 ssh_runner.go:101] SSH: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
    I0319 14:12:41.876841 15668 utils.go:224] ! failed to pull image "k8s.gcr.io/kube-apiserver:v1.13.4": output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    I0319 14:12:41.876908 15668 utils.go:224] ! , error: exit status 1
    X Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: command failed: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
    stdout:
    stderr: failed to pull image "k8s.gcr.io/kube-apiserver:v1.13.4": output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    , error: exit status 1
    : Process exited with status 1
  • Launching Kubernetes v1.13.4 using kubeadm ...
    I0319 14:12:41.878862 15668 ssh_runner.go:137] Run with output:
    sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI

Unable to start minikube

Command required to reproduce the issue
minikube start --registry-mirror=https://fgi18ddn.mirror.aliyuncs.com --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --vm-driver=none --base-image registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.10

Full output of the failed command


[root@localhost ~]# minikube start --registry-mirror=https://fgi18ddn.mirror.aliyuncs.com --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --vm-driver=none --base-image registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.10

  • minikube v1.11.0 on Centos 7.6.1810
  • Using the none driver based on the existing configuration file
  • Starting control plane node minikube in cluster minikube
  • Restarting existing none bare metal machine for "minikube" ...
  • OS release is CentOS Linux 7 (Core)
  • Preparing Kubernetes v1.18.3 on Docker 19.03.12 …

    kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    kubeadm: 37.97 MiB / 37.97 MiB [-----------] 100.00% 297.79 KiB p/s 2m11s
    kubelet: 108.04 MiB / 108.04 MiB [---------] 100.00% 762.23 KiB p/s 2m25s

X Failed to update cluster: updating node: downloading binaries: downloading kubectl: download failed: https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl.sha256 Dst:/root/.minikube/cache/linux/v1.18.3/kubectl.download Pwd: Mode:2 Detectors:[0x28a9e40 0x28a9e40 0x28a9e40 0x28a9e40 0x28a9e40 0x28a9e40] Decompressors:map[bz2:0x28a9e40 gz:0x28a9e40 tar.bz2:0x28a9e40 tar.gz:0x28a9e40 tar.xz:0x28a9e40 tbz2:0x28a9e40 tgz:0x28a9e40 txz:0x28a9e40 xz:0x28a9e40 zip:0x28a9e40] Getters:map[file:0xc0005d32f0 http:0xc000583da0 https:0xc000583dc0] Dir:false ProgressListener:0x2882f30 Options:[0xc160f0]}: Checksums did not match for /root/.minikube/cache/linux/v1.18.3/kubectl.download.
Expected: 6fcf70aae5bc64870c358fac153cdfdc93f55d8bae010741ecce06bb14c083ea
Got: 94105a5eca0ccd4de3b27221f806694fd56f439c60f7aa8e715a8f29304686f8
*sha256.digest
*

Output of the minikube logs command

  • The control plane node must be running for this command
    • To fix this, run: "minikube start"

Operating system version used
centos 7.6.1810
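
The "Checksums did not match" failure above means the bytes written to /root/.minikube/cache/linux/v1.18.3/kubectl.download are not the published kubectl binary (an intercepting proxy or a truncated download are common causes). A sketch of checking and working around it by hand, assuming the Aliyun OSS mirror referenced elsewhere in this repo carries the same release and that minikube reuses a file named kubectl in that cache directory (both are assumptions):

    # What did we actually download?
    sha256sum /root/.minikube/cache/linux/v1.18.3/kubectl.download

    # Fetch kubectl from the Aliyun mirror and verify it against the "Expected" value from the error
    curl -Lo kubectl https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.18.3/bin/linux/amd64/kubectl
    sha256sum kubectl    # should print 6fcf70aae5bc64870c358fac153cdfdc93f55d8bae010741ecce06bb14c083ea

    # Place it where minikube caches binaries so the next start can skip the download
    chmod +x kubectl
    mv kubectl /root/.minikube/cache/linux/v1.18.3/kubectl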

How to get kubeadm init phase certs all --config /var/lib/kubeadm.yaml to run correctly during minikube start

How to get kubeadm init to run correctly during minikube start

Running sudo minikube start --vm-driver=none --registry-mirror=https://registry.docker-cn.com

mecm@ubuntu:/home/ub$ sudo minikube start --vm-driver=none --registry-mirror=https://registry.docker-cn.com

😄 minikube v0.35.0 on linux (amd64)

🤹 Configuring local host environment ...

⚠️ The 'none' driver provides limited isolation and may reduce system security and reliability.
⚠️ For more information, see:
👉 https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md

⚠️ kubectl and minikube configuration will be stored in /home/mecm
⚠️ To use kubectl or minikube commands as your own user, you may
⚠️ need to relocate them. For example, to overwrite your own settings:

▪ sudo mv /home/mecm/.kube /home/mecm/.minikube $HOME
▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡 This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
💡 Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Re-using the currently running none VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is 192.168.230.128
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.4 ...
🔄 Relaunching Kubernetes v1.13.4 using kubeadm ...

============ the error occurs here
💣 Error restarting cluster: running cmd: sudo kubeadm init phase certs all --config /var/lib/kubeadm.yaml: running command: sudo kubeadm init phase certs all --config /var/lib/kubeadm.yaml: exit status 1

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

The error comes from running sudo kubeadm init phase certs all --config /var/lib/kubeadm.yaml

mecm@ubuntu:/home/ub$ sudo kubeadm init phase certs all --config /var/lib/kubeadm.yaml
W0321 12:15:28.120910 38639 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1alpha3", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "imageRepository"
[certs] Using certificateDir folder "/var/lib/minikube/certs/"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: crypto/rsa: verification error

After removing --config /var/lib/kubeadm.yaml, it runs normally.

mecm@ubuntu:/home/ub$ sudo kubeadm init phase certs all
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using the existing "sa" key

Looking forward to your reply. Thank you.
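
For the crypto/rsa verification error shown above, the usual interpretation is that the certificates under /var/lib/minikube/certs were issued by a different CA than the one minikube is now pushing. A sketch of clearing the stale state before retrying, assuming nothing else on this host depends on those directories (with --vm-driver=none everything lives on the host itself, so back up /etc/kubernetes first if anything else uses it):

    minikube delete
    sudo rm -rf /var/lib/minikube/certs /etc/kubernetes
    rm -rf ~/.minikube
    sudo minikube start --vm-driver=none --registry-mirror=https://registry.docker-cn.com

This is a workaround sketch, not a fix confirmed in this thread.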

Exiting due to GUEST_START

Command required to reproduce the issue
minikube start
Full output of the failed command
😄 minikube v1.17.1 on Darwin 11.2.3
✨ Automatically selected the docker driver. Other choices: hyperkit, vmware, virtualbox, ssh
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=1988MB) ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
❗ Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Unauthorized]
🌟 Enabled addons: storage-provisioner

❌ Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.2

😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose

Docker version:
v20.10.5

Operating system version used
MacOS 11.2.3

error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1alpha3", Kind:"InitConfiguration"}

Starting the cluster gives the errors below

minikube start --vm-driver=hyperkit --registry-mirror=https://registry.docker-cn.com
😄  minikube v0.35.0 on darwin (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃  Re-using the currently running hyperkit VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.64.2
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
❌  Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: command failed: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
stdout:
stderr: W0313 11:35:37.149909    9133 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1alpha3", Kind:"InitConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "imageRepository"
failed to pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.4": output: Error response from daemon: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: dial tcp: lookup registry.cn-hangzhou.aliyuncs.com on 192.168.64.1:53: read udp 192.168.64.2:36381->192.168.64.1:53: read: connection refused
, error: exit status 1
: Process exited with status 1
🔄  Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛  Waiting for pods: apiserver^C

minikube v0.28.1: kube-dns fails to pull its image

Failed to pull image "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
It appears the Aliyun mirror source was not used.

Ubuntu 18.04 desktop edition, steps taken:

  1. curl -Lo minikube http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v0.28.1/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

  2. minikube start --registry-mirror=https://registry.docker-cn.com --logtostderr --loglevel 0

  3. minikube dashboard

Found that kube-dns was not deployed successfully.
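
A likely explanation: --registry-mirror only configures the Docker daemon's mirror for Docker Hub pulls, so images addressed explicitly as k8s.gcr.io/... (such as k8s-dns-kube-dns-amd64) are not redirected by it. On minikube builds that accept it, --image-repository is the flag that points the Kubernetes system images at another registry; a sketch, assuming the release in use supports that flag:

    minikube delete
    minikube start \
      --registry-mirror=https://registry.docker-cn.com \
      --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers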

dashboard does not work after 'minikube start' with v0.28.1

Environment:

  • Minikube version : v0.28.1
  • OS : Darwin 17.7.0 Darwin Kernel Version 17.7.0 RELEASE_X86_64 x86_64
  • VM Driver : virtualbox
  • ISO version : v0.28.1

Issue
dashboard does not work

host$ minikube dashboard
Waiting, endpoint for service is not ready yet...
Waiting, endpoint for service is not ready yet...

What you expected to happen:
get it to work.

How to reproduce it (as minimally and precisely as possible):

  1. git clone https://github.com/AliyunContainerService/minikube
  2. mv ./minikube $GOPATH/src/k8s.io/minikube
  3. git checkout aliyun-v0.28.1
  4. make
  5. minikube start

Output of minikube logs (if applicable):

Sep 17 08:11:41 minikube kubelet[2830]: E0917 08:11:41.106207    2830 pod_workers.go:186] 
Error syncing pod 0eee2be7-ba1f-11e8-8cad-080027e3272d ("kubernetes-dashboard-685cfbd9f6-px2t4_kube-system(0eee2be7-ba1f-11e8-8cad-080027e3272d)"), 
skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed 
container=kubernetes-dashboard pod=kubernetes-dashboard-685cfbd9f6-px2t4_kube-system(0eee2be7-ba1f-11e8-8cad-080027e3272d)"

Anything else do we need to know:

host$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
default       nginx-6778d667d5-4mxpj                  1/1       Running            0          32m
default       nginx-6778d667d5-bmhxl                  1/1       Running            0          14m
default       nginx-6778d667d5-pc5hz                  1/1       Running            0          14m
kube-system   etcd-minikube                           1/1       Running            0          5h
kube-system   kube-addon-manager-minikube             1/1       Running            0          5h
kube-system   kube-apiserver-minikube                 1/1       Running            0          5h
kube-system   kube-controller-manager-minikube        1/1       Running            0          6h
kube-system   kube-dns-b4bd9576-rn772                 3/3       Running            0          6h
kube-system   kube-proxy-n2vsz                        1/1       Running            0          6h
kube-system   kube-scheduler-minikube                 1/1       Running            0          5h
kube-system   kubernetes-dashboard-685cfbd9f6-px2t4   0/1       CrashLoopBackOff   26         6h
kube-system   storage-provisioner                     1/1       Running            0          6h


host$ kubectl get deployments --all-namespaces
NAMESPACE     NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default       nginx                  3         3         3            3           41m
kube-system   kube-dns               1         1         1            1           6h
kube-system   kubernetes-dashboard   1         1         1            0           6h


host$ kubectl describe deployments/kubernetes-dashboard -n kube-system
Name:                   kubernetes-dashboard
Namespace:              kube-system
CreationTimestamp:      Mon, 17 Sep 2018 10:11:59 +0800
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        kubernetes.io/minikube-addons=dashboard
                        version=v1.8.1
Annotations:            deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","kubernetes.io/minikub...
Selector:               addonmanager.kubernetes.io/mode=Reconcile,app=kubernetes-dashboard,version=v1.8.1
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  addonmanager.kubernetes.io/mode=Reconcile
           app=kubernetes-dashboard
           version=v1.8.1
  Containers:
   kubernetes-dashboard:
    Image:        registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.1
    Port:         9090/TCP
    Host Port:    0/TCP
    Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      False   MinimumReplicasUnavailable
OldReplicaSets:  <none>
NewReplicaSet:   kubernetes-dashboard-685cfbd9f6 (1/1 replicas created)
Events:          <none>

Unable to run alternate versions using '--kubernetes-version'

Currently, the Aliyun mirror does not mirror the release directory structure, or is missing the files, required to run alternate versions of Kubernetes.

minikube start --kubernetes-version v1.13.1
Starting local Kubernetes v1.13.1 cluster...
Starting VM...
Getting VM IP address...
E0110 13:31:01.728673   10549 start.go:198] Error parsing version semver:  No Major.Minor.Patch elements found
Moving files into cluster...
Downloading kubeadm v1.13.1
Downloading kubelet v1.13.1
E0110 13:31:02.018690   10549 start.go:254] Error updating cluster:  downloading binaries: downloading kubelet: Error downloading kubelet v1.13.1: failed to download: failed to download to temp file: download failed: 1 error(s) occurred:

* received invalid status code: 404 (expected 200)

path, err := maybeDownloadAndCache(bin, cfg.KubernetesVersion)

return fmt.Sprintf("https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/%s/bin/linux/%s/%s", version, runtime.GOARCH, binaryName)
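
Given the URL format in the snippet above, one quick way to see whether the Aliyun mirror actually hosts a particular version before pointing minikube at it is to probe the same path directly (a sketch; the 404 in the log means this check would fail for v1.13.1 at the time of the report):

    VER=v1.13.1
    curl -sI "https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/${VER}/bin/linux/amd64/kubelet" | head -n 1
    # HTTP/1.1 200 OK  -> the mirror has binaries for this version
    # HTTP/1.1 404 ... -> it does not; stick to the default version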

Adding a node fails

registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd does not exist

Wrong image digest in the minikube registry addon makes the image impossible to pull

Command required to reproduce the issue
minikube addons enable registry

Full output of the failed command

Failed to pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/registry:2.7.1@sha256:d5459fcb27aecc752520df4b492b08358a1912fcdfa454f7d2101d4b09991daa": rpc error: code = Unknown desc = Error response from daemon: manifest for registry.cn-hangzhou.aliyuncs.com/google_containers/registry@sha256:d5459fcb27aecc752520df4b492b08358a1912fcdfa454f7d2101d4b09991daa not found: manifest unknown: manifest unknown
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/registry:2.7.1
2.7.1: Pulling from google_containers/registry
6a428f9f83b0: Pull complete
90cad49de35d: Pull complete
b215d0b40846: Pull complete
429305b6c15c: Pull complete
6f7e10a4e907: Pull complete
Digest: sha256:265d4a5ed8bf0df27d1107edb00b70e658ee9aa5acb3f37336c5a17db634481e
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/registry:2.7.1
registry.cn-hangzhou.aliyuncs.com/google_containers/registry:2.7.1
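
The addon pins the image by digest, but the digest the Aliyun mirror serves for the 2.7.1 tag is a different one, so the digest-pinned pull can never succeed even though the tag pull works. A sketch of confirming the mismatch locally, assuming Docker is the runtime:

    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/registry:2.7.1
    docker inspect --format '{{index .RepoDigests 0}}' \
      registry.cn-hangzhou.aliyuncs.com/google_containers/registry:2.7.1
    # prints ...@sha256:265d4a5e... while the addon asks for ...@sha256:d5459fcb...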

Output of the minikube logs command

Sep 23 15:02:51 minikube kubelet[3081]: E0923 15:02:51.937514 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/registry:2.7.1@sha256:d5459fcb27aecc752520df4b492b08358a1912fcdfa454f7d2101d4b09991daa\\\"\"" pod="kube-system/registry-8fxq4" podUID=612adaf0-19c8-488f-a240-9a69c4b14c08

Operating system version used
MacOS 11.6

minikube start fails

When running the minikube start command, the following appears:

  • Creating docker container (CPUs=2, Memory=7900MB) ...
    ! StartHost failed, but will try again: recreate: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset730871237 into minikube:/home/docker/.ssh/authorized_keys, output: unknown shorthand flag: 'a' in -a
    See 'docker cp --help'.
    : exit status 125

There is no /home/docker directory on my host.
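
The "unknown shorthand flag: 'a' in -a" message comes from the host's docker CLI itself (note the docker cp --help hint in the output above), so the /home/docker path inside the container is not the problem: minikube's docker driver copies the SSH key with docker cp -a, and old Docker clients do not have that flag. A sketch of checking for it (the relevant flag is -a/--archive; upgrading the Docker client is the usual fix):

    docker version --format '{{.Client.Version}}'
    docker cp --help | grep -- '--archive'   # no output means the client is too old for this minikube driver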

Was aliyun-v0.26.1 not built successfully?

What happened:
Same as the title.
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Output of minikube logs (if applicable):

Anything else do we need to know:

During `minikube start`, `save image` fails with `write: unexpected EOF`

Command required to reproduce the issue

minikube start --memory='4g' --kubernetes-version=v1.16.5 --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' 

After looking at #26, I tried docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.5, and it downloads successfully.

2020-08-24 14:06:08
Tried docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.5 again;
it only fails intermittently (screenshot omitted).

2020-08-24 14:24:07
Switched to the Aliyun build ( https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.12.3/minikube-darwin-amd64 ), and the error is the same.

Full output of the failed command

😄  minikube v1.12.3 on Darwin 10.13.6
🆕  Kubernetes 1.18.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.18.3
✨  Using the docker driver based on existing profile
❗  You cannot change the memory size for an exiting minikube cluster. Please first delete the cluster.
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0824 11:33:45.325384   85399 cache.go:63] save image to file "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.5" -> "/Users/mac/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.16.5" failed: write: unexpected EOF
E0824 11:34:39.549663   85399 cache.go:63] save image to file "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.5" -> "/Users/mac/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.16.5" failed: write: unexpected EOF

Output of the minikube logs command

Operating system version used
macOS 10.13.6

minikube version: v1.12.3
(downloaded from the minikube repo releases)
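
Since the unexpected EOF happens in the save-image step (minikube pulling the images into its local cache), one workaround that is sometimes used, and is only an assumption here rather than a confirmed fix, is to skip the local image cache and let the cluster node pull the images itself:

    minikube start --memory='4g' --kubernetes-version=v1.16.5 \
      --image-mirror-country='cn' \
      --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' \
      --cache-images=false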

start keeps getting stuck on creating the VM

Command required to reproduce the issue
Start minikube with the minikube start --registry-mirror=https://registry.docker-cn.com command
Full output of the failed command

* minikube v1.2.0 on windows (amd64)
* using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
* Downloading Minikube ISO ...
 129.33 MB / 129.33 MB [============================================] 100.00% 0s
* Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...

Output of the minikube logs command

X command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: dial tcp 127.0.0.1:22: connectex: No connection could be made because the target machine actively refused it.

Operating system version used
win10

The Linux download URL in the wiki is wrong

The command in the wiki:

curl -Lo minikube https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/download/v1.11.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

The correct URL is:

https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.11.0/minikube-linux-amd64
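
For convenience, the wiki command with the corrected URL substituted in:

    curl -Lo minikube https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.11.0/minikube-linux-amd64 \
      && chmod +x minikube && sudo mv minikube /usr/local/bin/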

minikube start fails to pull images

Command required to reproduce the issue
minikube start --image-mirror-country='cn' --driver=podman

Full output of the failed command

Trying to pull registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase:v0.0.28...
Getting image source signatures
Copying blob sha256:78327423e17ada490d201b803a590dfa5b0f4a9ea730b508e28c1cfc8196af19
Copying blob sha256:54cd0eafb8eb9b16a65f6c8e5d3696eb42b23713b8b2b47296dce7016a486c0e
Copying blob sha256:4d69c9b228bc3c15b8ca7fbba18ea7f1304b737c69659c520b12e8b5592d6272
Copying blob sha256:c4394a92d1f8760cf7d17fee0bcee732c94c5b858dd8d19c7ff02beecf3b4e83
Copying blob sha256:a70d879fa5984474288d52009479054b8bb2993de2a1859f43b5480600cecb24
Copying blob sha256:10e6159c56c084c858f5de2416454ac0a49ddda47b764e4379c5d5a147c9bf5f
Copying blob sha256:bb94cc19c43e6ff41c532b98462b93b6faf88c8db335cc75f16b1f4eee07314c
Copying blob sha256:da72846de1786721bfcf7920d0b8871a9e51f152ea10e6be0616b7ceda8d7352
Copying blob sha256:a70a31eeb24bc586b2a5c0af89c33a903c10546b7345477115a470bdf4ccdc66
Copying blob sha256:c2b2c3cf93ceefc03b38003fe52fa32d6bcb5da2a0efce39daa38473b0950f16
Copying blob sha256:3b5bd227582d9cf02e269c89d3009c79d51a38149dcb0a6ebec9b467b67cfbe5
Copying blob sha256:84457157061ada400f198e9bf66f56f388eb3c700b6e71b63252c187bc74ead3
Copying blob sha256:493515fdd83a4c71ccb08129be7ade6b8f0185b8544861f92ec11b3dcd3a0ca0
Copying blob sha256:6d480f34e2eaa59018c8143b4fe250da8332976ed3054a24996981e97c9c73c9
Copying blob sha256:cea744d39bec70062323eb7c4e6b6b2b12814c324a59f46e152f85c91d19a3bc
Copying blob sha256:479f8a181f701080f02aeeae0e407761f6ca9aa0f39746e8f27535d4f2c8c398
Copying blob sha256:e044092236e0d9b9200a4b9659d3bac35a3f9621d6293c3ed036f22b4c7d80b1
Copying blob sha256:9281d569d74d68a2baebaae82bce2bbdce9f6da836f6abd1226a18a2fd781576
Copying blob sha256:47cd91998bdcf456a634ac4f504cc40dd987e82009593e5a39cb9f60fdc10902
Copying blob sha256:d8b3475e38ea0f3f002a8ed2068730a5cfbecec515d59859cbdce926c14ca3a7
Copying blob sha256:3eb2cf2b1b500c66c283f39c79a36b59648ec5109967d478be6af080bac353e6
Copying blob sha256:574120e8eb7e6e9811497b310d767efcdbc14efa7e403806add530ddc9820cc2
Copying blob sha256:2245101666e7bf65e7e96992a02bc1e15addd32c39f75334a1439c042c73bc1e
Copying blob sha256:1d95937ba9480946f6ef602954cd09c3ba2dfee8569ab4e32575ce099dd70874
Copying blob sha256:3d24f5e8febe91de28b644fc4cadb5f75d24b6a223addbd6b554762794b47d6f
Copying blob sha256:29b6a4e477fdd03e5c2f2bedcc3a62a98589bab48ad352af9c43327775ee2572
Copying blob sha256:594c9927e9615ac8d558f307555b393dc6bc9f995ad285d38aba1c04e85fa4f4
Copying blob sha256:728ba94c5dcdfe535e73e9c934186bece1213306a86fa0ebbb0ffe4f83fb72e3
Copying blob sha256:06b7872c50f0421776c0ee2367deb12f66c82b8e24878398c64967c864d45fdb
Copying blob sha256:f24b73fa9b26a8fa3da7cac63169c16b6f08273cbc2a1b473779762f3eace107
Copying blob sha256:61df2ca17b0b7d1184930de6ff50434806d267f88f227cbfb70e243dba35e064
Copying blob sha256:745a2226880d56bed1527a39278274514ca1043b0afcc1ba0b22315b302d3583
Copying blob sha256:bcfb06a95d5d08bc83e21a0f55e8a373ee12a05e1080ea432ba8bfd314f262c0
Copying blob sha256:b6b0e088166b238d22eab3241b5b538505082e91644c3829c313ae5dd05932a5
Copying blob sha256:d0cbdcb4fd47c9f6a8264bfa1b151c7767e64282b12829e93bcf896c4c8855d5
Copying blob sha256:23514efdde86cb524d56a6e69ed1c7369f1965316d3faeef429dd62b5bd1fe15
Copying blob sha256:d33daeab2d597cd2f16919e37837c40ff4f453291774f7e39e52399b96d47c60
Copying blob sha256:cb5c9d48926ce59be517d1cbf8a27fc179981bbfa2c4fedf2365bea2aceda152
Copying blob sha256:d38db1d98a663db4406ded96b5618fcbee002790a7f923de305b63e1bcfdbf34
time="2022-01-10T15:25:37+08:00" level=warning msg="failed, retrying in 1s ... (1/3). Error: Error writing blob: error storing blob to file "/var/tmp/storage483482029/7": error happened during read: read tcp 10.10.22.152:54168->183.131.227.249:80: read: connection reset by peer"
Getting image source signatures
Copying blob sha256:54cd0eafb8eb9b16a65f6c8e5d3696eb42b23713b8b2b47296dce7016a486c0e
Copying blob sha256:10e6159c56c084c858f5de2416454ac0a49ddda47b764e4379c5d5a147c9bf5f
Copying blob sha256:4d69c9b228bc3c15b8ca7fbba18ea7f1304b737c69659c520b12e8b5592d6272
Copying blob sha256:78327423e17ada490d201b803a590dfa5b0f4a9ea730b508e28c1cfc8196af19
Copying blob sha256:c4394a92d1f8760cf7d17fee0bcee732c94c5b858dd8d19c7ff02beecf3b4e83
Copying blob sha256:a70d879fa5984474288d52009479054b8bb2993de2a1859f43b5480600cecb24
Copying blob sha256:da72846de1786721bfcf7920d0b8871a9e51f152ea10e6be0616b7ceda8d7352
Copying blob sha256:a70a31eeb24bc586b2a5c0af89c33a903c10546b7345477115a470bdf4ccdc66
Copying blob sha256:3b5bd227582d9cf02e269c89d3009c79d51a38149dcb0a6ebec9b467b67cfbe5
Copying blob sha256:c2b2c3cf93ceefc03b38003fe52fa32d6bcb5da2a0efce39daa38473b0950f16
Copying blob sha256:bb94cc19c43e6ff41c532b98462b93b6faf88c8db335cc75f16b1f4eee07314c
Copying blob sha256:84457157061ada400f198e9bf66f56f388eb3c700b6e71b63252c187bc74ead3
Copying blob sha256:6d480f34e2eaa59018c8143b4fe250da8332976ed3054a24996981e97c9c73c9
Copying blob sha256:493515fdd83a4c71ccb08129be7ade6b8f0185b8544861f92ec11b3dcd3a0ca0
Copying blob sha256:cea744d39bec70062323eb7c4e6b6b2b12814c324a59f46e152f85c91d19a3bc
Copying blob sha256:479f8a181f701080f02aeeae0e407761f6ca9aa0f39746e8f27535d4f2c8c398
Copying blob sha256:e044092236e0d9b9200a4b9659d3bac35a3f9621d6293c3ed036f22b4c7d80b1
Copying blob sha256:9281d569d74d68a2baebaae82bce2bbdce9f6da836f6abd1226a18a2fd781576
Copying blob sha256:47cd91998bdcf456a634ac4f504cc40dd987e82009593e5a39cb9f60fdc10902
Copying blob sha256:3eb2cf2b1b500c66c283f39c79a36b59648ec5109967d478be6af080bac353e6
Copying blob sha256:d8b3475e38ea0f3f002a8ed2068730a5cfbecec515d59859cbdce926c14ca3a7
Copying blob sha256:1d95937ba9480946f6ef602954cd09c3ba2dfee8569ab4e32575ce099dd70874
Copying blob sha256:2245101666e7bf65e7e96992a02bc1e15addd32c39f75334a1439c042c73bc1e
Copying blob sha256:574120e8eb7e6e9811497b310d767efcdbc14efa7e403806add530ddc9820cc2
Copying blob sha256:29b6a4e477fdd03e5c2f2bedcc3a62a98589bab48ad352af9c43327775ee2572
Copying blob sha256:594c9927e9615ac8d558f307555b393dc6bc9f995ad285d38aba1c04e85fa4f4
Copying blob sha256:3d24f5e8febe91de28b644fc4cadb5f75d24b6a223addbd6b554762794b47d6f
Copying blob sha256:06b7872c50f0421776c0ee2367deb12f66c82b8e24878398c64967c864d45fdb
Copying blob sha256:728ba94c5dcdfe535e73e9c934186bece1213306a86fa0ebbb0ffe4f83fb72e3
Copying blob sha256:f24b73fa9b26a8fa3da7cac63169c16b6f08273cbc2a1b473779762f3eace107
Copying blob sha256:745a2226880d56bed1527a39278274514ca1043b0afcc1ba0b22315b302d3583
Copying blob sha256:61df2ca17b0b7d1184930de6ff50434806d267f88f227cbfb70e243dba35e064
Copying blob sha256:b6b0e088166b238d22eab3241b5b538505082e91644c3829c313ae5dd05932a5
Copying blob sha256:bcfb06a95d5d08bc83e21a0f55e8a373ee12a05e1080ea432ba8bfd314f262c0
Copying blob sha256:d0cbdcb4fd47c9f6a8264bfa1b151c7767e64282b12829e93bcf896c4c8855d5
Copying blob sha256:d33daeab2d597cd2f16919e37837c40ff4f453291774f7e39e52399b96d47c60
Copying blob sha256:23514efdde86cb524d56a6e69ed1c7369f1965316d3faeef429dd62b5bd1fe15
Copying blob sha256:cb5c9d48926ce59be517d1cbf8a27fc179981bbfa2c4fedf2365bea2aceda152
Copying blob sha256:d38db1d98a663db4406ded96b5618fcbee002790a7f923de305b63e1bcfdbf34
time="2022-01-10T15:25:46+08:00" level=warning msg="failed, retrying in 1s ... (2/3). Error: Error writing blob: error storing blob to file "/var/tmp/storage736104744/11": error happened during read: read tcp 10.10.22.152:54350->183.131.227.249:80: read: connection reset by peer"
Getting image source signatures
Copying blob sha256:78327423e17ada490d201b803a590dfa5b0f4a9ea730b508e28c1cfc8196af19
Copying blob sha256:4d69c9b228bc3c15b8ca7fbba18ea7f1304b737c69659c520b12e8b5592d6272
Copying blob sha256:c4394a92d1f8760cf7d17fee0bcee732c94c5b858dd8d19c7ff02beecf3b4e83
Copying blob sha256:54cd0eafb8eb9b16a65f6c8e5d3696eb42b23713b8b2b47296dce7016a486c0e
Copying blob sha256:10e6159c56c084c858f5de2416454ac0a49ddda47b764e4379c5d5a147c9bf5f
Copying blob sha256:a70d879fa5984474288d52009479054b8bb2993de2a1859f43b5480600cecb24
Copying blob sha256:da72846de1786721bfcf7920d0b8871a9e51f152ea10e6be0616b7ceda8d7352
Copying blob sha256:3b5bd227582d9cf02e269c89d3009c79d51a38149dcb0a6ebec9b467b67cfbe5
Copying blob sha256:a70a31eeb24bc586b2a5c0af89c33a903c10546b7345477115a470bdf4ccdc66
Copying blob sha256:bb94cc19c43e6ff41c532b98462b93b6faf88c8db335cc75f16b1f4eee07314c
Copying blob sha256:c2b2c3cf93ceefc03b38003fe52fa32d6bcb5da2a0efce39daa38473b0950f16
Copying blob sha256:84457157061ada400f198e9bf66f56f388eb3c700b6e71b63252c187bc74ead3
Copying blob sha256:493515fdd83a4c71ccb08129be7ade6b8f0185b8544861f92ec11b3dcd3a0ca0
Copying blob sha256:6d480f34e2eaa59018c8143b4fe250da8332976ed3054a24996981e97c9c73c9
Copying blob sha256:cea744d39bec70062323eb7c4e6b6b2b12814c324a59f46e152f85c91d19a3bc
Copying blob sha256:479f8a181f701080f02aeeae0e407761f6ca9aa0f39746e8f27535d4f2c8c398
Copying blob sha256:e044092236e0d9b9200a4b9659d3bac35a3f9621d6293c3ed036f22b4c7d80b1
Copying blob sha256:9281d569d74d68a2baebaae82bce2bbdce9f6da836f6abd1226a18a2fd781576
Copying blob sha256:47cd91998bdcf456a634ac4f504cc40dd987e82009593e5a39cb9f60fdc10902
Copying blob sha256:d8b3475e38ea0f3f002a8ed2068730a5cfbecec515d59859cbdce926c14ca3a7
Copying blob sha256:574120e8eb7e6e9811497b310d767efcdbc14efa7e403806add530ddc9820cc2
Copying blob sha256:3eb2cf2b1b500c66c283f39c79a36b59648ec5109967d478be6af080bac353e6
Copying blob sha256:2245101666e7bf65e7e96992a02bc1e15addd32c39f75334a1439c042c73bc1e
Copying blob sha256:1d95937ba9480946f6ef602954cd09c3ba2dfee8569ab4e32575ce099dd70874
Copying blob sha256:3d24f5e8febe91de28b644fc4cadb5f75d24b6a223addbd6b554762794b47d6f
Copying blob sha256:594c9927e9615ac8d558f307555b393dc6bc9f995ad285d38aba1c04e85fa4f4
Copying blob sha256:29b6a4e477fdd03e5c2f2bedcc3a62a98589bab48ad352af9c43327775ee2572
Copying blob sha256:728ba94c5dcdfe535e73e9c934186bece1213306a86fa0ebbb0ffe4f83fb72e3
Copying blob sha256:06b7872c50f0421776c0ee2367deb12f66c82b8e24878398c64967c864d45fdb
Copying blob sha256:f24b73fa9b26a8fa3da7cac63169c16b6f08273cbc2a1b473779762f3eace107
Copying blob sha256:745a2226880d56bed1527a39278274514ca1043b0afcc1ba0b22315b302d3583
Copying blob sha256:61df2ca17b0b7d1184930de6ff50434806d267f88f227cbfb70e243dba35e064
Copying blob sha256:bcfb06a95d5d08bc83e21a0f55e8a373ee12a05e1080ea432ba8bfd314f262c0
Copying blob sha256:b6b0e088166b238d22eab3241b5b538505082e91644c3829c313ae5dd05932a5
Copying blob sha256:d0cbdcb4fd47c9f6a8264bfa1b151c7767e64282b12829e93bcf896c4c8855d5
Copying blob sha256:23514efdde86cb524d56a6e69ed1c7369f1965316d3faeef429dd62b5bd1fe15
Copying blob sha256:d33daeab2d597cd2f16919e37837c40ff4f453291774f7e39e52399b96d47c60
Copying blob sha256:cb5c9d48926ce59be517d1cbf8a27fc179981bbfa2c4fedf2365bea2aceda152
Copying blob sha256:d38db1d98a663db4406ded96b5618fcbee002790a7f923de305b63e1bcfdbf34
time="2022-01-10T15:25:57+08:00" level=warning msg="failed, retrying in 1s ... (3/3). Error: Error writing blob: error storing blob to file "/var/tmp/storage996209767/10": error happened during read: read tcp 10.10.22.152:54526->183.131.227.249:80: read: connection reset by peer"
Getting image source signatures
Copying blob sha256:10e6159c56c084c858f5de2416454ac0a49ddda47b764e4379c5d5a147c9bf5f
Copying blob sha256:78327423e17ada490d201b803a590dfa5b0f4a9ea730b508e28c1cfc8196af19
Copying blob sha256:4d69c9b228bc3c15b8ca7fbba18ea7f1304b737c69659c520b12e8b5592d6272
Copying blob sha256:54cd0eafb8eb9b16a65f6c8e5d3696eb42b23713b8b2b47296dce7016a486c0e
Copying blob sha256:a70d879fa5984474288d52009479054b8bb2993de2a1859f43b5480600cecb24
Copying blob sha256:c4394a92d1f8760cf7d17fee0bcee732c94c5b858dd8d19c7ff02beecf3b4e83
Copying blob sha256:3b5bd227582d9cf02e269c89d3009c79d51a38149dcb0a6ebec9b467b67cfbe5
Copying blob sha256:a70a31eeb24bc586b2a5c0af89c33a903c10546b7345477115a470bdf4ccdc66
Copying blob sha256:da72846de1786721bfcf7920d0b8871a9e51f152ea10e6be0616b7ceda8d7352
Copying blob sha256:bb94cc19c43e6ff41c532b98462b93b6faf88c8db335cc75f16b1f4eee07314c
Copying blob sha256:c2b2c3cf93ceefc03b38003fe52fa32d6bcb5da2a0efce39daa38473b0950f16
Copying blob sha256:493515fdd83a4c71ccb08129be7ade6b8f0185b8544861f92ec11b3dcd3a0ca0
Copying blob sha256:6d480f34e2eaa59018c8143b4fe250da8332976ed3054a24996981e97c9c73c9
Copying blob sha256:84457157061ada400f198e9bf66f56f388eb3c700b6e71b63252c187bc74ead3
Copying blob sha256:cea744d39bec70062323eb7c4e6b6b2b12814c324a59f46e152f85c91d19a3bc
Copying blob sha256:479f8a181f701080f02aeeae0e407761f6ca9aa0f39746e8f27535d4f2c8c398
Copying blob sha256:e044092236e0d9b9200a4b9659d3bac35a3f9621d6293c3ed036f22b4c7d80b1
Copying blob sha256:9281d569d74d68a2baebaae82bce2bbdce9f6da836f6abd1226a18a2fd781576
Copying blob sha256:47cd91998bdcf456a634ac4f504cc40dd987e82009593e5a39cb9f60fdc10902
Copying blob sha256:d8b3475e38ea0f3f002a8ed2068730a5cfbecec515d59859cbdce926c14ca3a7
Copying blob sha256:3eb2cf2b1b500c66c283f39c79a36b59648ec5109967d478be6af080bac353e6
Copying blob sha256:574120e8eb7e6e9811497b310d767efcdbc14efa7e403806add530ddc9820cc2
Copying blob sha256:2245101666e7bf65e7e96992a02bc1e15addd32c39f75334a1439c042c73bc1e
Copying blob sha256:3d24f5e8febe91de28b644fc4cadb5f75d24b6a223addbd6b554762794b47d6f
Copying blob sha256:1d95937ba9480946f6ef602954cd09c3ba2dfee8569ab4e32575ce099dd70874
Copying blob sha256:29b6a4e477fdd03e5c2f2bedcc3a62a98589bab48ad352af9c43327775ee2572
Copying blob sha256:728ba94c5dcdfe535e73e9c934186bece1213306a86fa0ebbb0ffe4f83fb72e3
Copying blob sha256:594c9927e9615ac8d558f307555b393dc6bc9f995ad285d38aba1c04e85fa4f4
Copying blob sha256:06b7872c50f0421776c0ee2367deb12f66c82b8e24878398c64967c864d45fdb
Copying blob sha256:f24b73fa9b26a8fa3da7cac63169c16b6f08273cbc2a1b473779762f3eace107
Copying blob sha256:745a2226880d56bed1527a39278274514ca1043b0afcc1ba0b22315b302d3583
Copying blob sha256:61df2ca17b0b7d1184930de6ff50434806d267f88f227cbfb70e243dba35e064
Copying blob sha256:bcfb06a95d5d08bc83e21a0f55e8a373ee12a05e1080ea432ba8bfd314f262c0
Copying blob sha256:b6b0e088166b238d22eab3241b5b538505082e91644c3829c313ae5dd05932a5
Copying blob sha256:d0cbdcb4fd47c9f6a8264bfa1b151c7767e64282b12829e93bcf896c4c8855d5
Copying blob sha256:23514efdde86cb524d56a6e69ed1c7369f1965316d3faeef429dd62b5bd1fe15
Copying blob sha256:d33daeab2d597cd2f16919e37837c40ff4f453291774f7e39e52399b96d47c60
Copying blob sha256:d38db1d98a663db4406ded96b5618fcbee002790a7f923de305b63e1bcfdbf34
Copying blob sha256:cb5c9d48926ce59be517d1cbf8a27fc179981bbfa2c4fedf2365bea2aceda152
Error: Error writing blob: error storing blob to file "/var/tmp/storage591812250/10": error happened during read: read tcp 10.10.22.152:54704->183.131.227.249:80: read: connection reset by peer

I0110 15:26:16.333102 733043 start.go:547] Will try again in 5 seconds ...
I0110 15:26:21.335783 733043 start.go:313] acquiring machines lock for minikube: {Name:mk9c71132c72764b8c584f48d5173de0da73e2a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0110 15:26:21.336622 733043 start.go:317] acquired machines lock for "minikube" in 775.068µs
I0110 15:26:21.336715 733043 start.go:93] Skipping create...Using existing machine configuration
I0110 15:26:21.336729 733043 fix.go:55] fixHost starting:
I0110 15:26:21.337278 733043 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
W0110 15:26:21.510845 733043 cli_runner.go:162] sudo -n podman container inspect minikube --format={{.State.Status}} returned with exit code 125

Output of the minikube logs command: (same as above)

Operating system version used

$ cat /etc/redhat-release 
CentOS Linux release 8.4.2105
$ uname -a
Linux localhost.localdomain 4.18.0-305.12.1.el8_4.x86_64 #1 SMP Wed Aug 11 01:59:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
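
In case it helps anyone hitting the same exit code 125 from podman container inspect: a minimal recovery sketch (the flags below follow the start command shown above and are an assumption, not a verified fix) is to drop the stale profile and any leftover container, then start again:

$ minikube delete --all --purge        # remove the broken profile and cached state
$ sudo podman rm -f minikube           # remove any leftover minikube container
$ minikube start --driver=podman --image-mirror-country=cn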

kube-system pods fail to run

Please see the screenshots:

[Screenshot: kube-system pod list]

[Screenshot: kube-proxy pod details]

[Screenshot: coredns pod details]

Newly created resources cannot be reached over the cluster-internal network.

Is there any way for me to fix this?

Thanks very much.
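
A rough debugging sketch for this situation (pod and image names below are placeholders): inspect the failing kube-system pods, then try a DNS lookup from a throwaway pod to confirm whether the internal network is reachable at all:

$ kubectl -n kube-system get pods -o wide
$ kubectl -n kube-system describe pod <pod-name>    # check the Events section for the failure reason
$ kubectl -n kube-system logs <pod-name>
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default

If the busybox image cannot be pulled behind the mirror, any locally cached image that ships nslookup works the same way.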

minikube start fails; multiple versions will not run

Command needed to reproduce the issue: minikube start --registry-mirror=https://zw6f10y7.mirror.aliyuncs.com --driver=none

Full output of the failed command


😄 minikube v1.13.0 on CentOS 7.4.1708
✨ Using the none driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🤹 Running on localhost (CPUs=4, Memory=3790MB, Disk=44698MB) ...
ℹ️ OS release is CentOS Linux 7 (Core)
🐳 Preparing Kubernetes v1.19.0 on Docker 19.03.12 ...
> kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
> kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
> kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
> kubectl: 10.85 MiB / 41.01 MiB [--->_________] 26.46% 3.65 MiB p/s ETA 8s
❗ This bare metal machine is having trouble accessing https://registry.cn-hangzhou.aliyuncs.com/google_containers
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
> kubectl: 41.01 MiB / 41.01 MiB [---------------] 100.00% 1.07 MiB p/s 39s
> kubeadm: 37.30 MiB / 37.30 MiB [-------------] 100.00% 848.16 KiB p/s 46s
> kubelet: 104.88 MiB / 104.88 MiB [-----------] 100.00% 1.43 MiB p/s 1m13s

❌ Exiting due to K8S_INSTALL_FAILED: updating control plane: downloading binaries: downloading kubelet: download failed: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubelet?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubelet.sha256: getter: &{Ctx:context.Background Src:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubelet?checksum=file:https://kubernetes.oss-cn-hangzhou.aliyuncs.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubelet.sha256 Dst:/root/.minikube/cache/linux/v1.19.0/kubelet.download Pwd: Mode:2 Detectors:[0x2ac4bc0 0x2ac4bc0 0x2ac4bc0 0x2ac4bc0 0x2ac4bc0 0x2ac4bc0] Decompressors:map[bz2:0x2ac4bc0 gz:0x2ac4bc0 tar.bz2:0x2ac4bc0 tar.gz:0x2ac4bc0 tar.xz:0x2ac4bc0 tbz2:0x2ac4bc0 tgz:0x2ac4bc0 txz:0x2ac4bc0 xz:0x2ac4bc0 zip:0x2ac4bc0] Getters:map[file:0xc00067e4b0 http:0xc000503060 https:0xc000503080] Dir:false ProgressListener:0x2a8f4e0 Options:[0xc7baf0]}: Checksums did not match for /root/.minikube/cache/linux/v1.19.0/kubelet.download.
Expected: 3f03e5c160a8b658d30b34824a1c00abadbac96e62c4d01bf5c9271a2debc3ab
Got: 856c24d83e8a374a03e61c6e006462943936c17dda9459d8caf4024e27ff4304
*sha256.digest

😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose

Output of the minikube logs command

🤷 The control plane node must be running for this command
👉 To fix this, run: "minikube start"

Operating system version used: CentOS 7
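
A checksum mismatch like the one above usually means the download was truncated or rewritten by a proxy. A minimal retry sketch (paths taken from the error message; the mirror flag is the same one used above) is to verify and then discard the cached binary before starting again:

$ sha256sum /root/.minikube/cache/linux/v1.19.0/kubelet.download   # compare with the Expected value printed above
$ rm -rf /root/.minikube/cache/linux/v1.19.0                       # drop the corrupted cache
$ minikube start --registry-mirror=https://zw6f10y7.mirror.aliyuncs.com --driver=none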

update-check reports the wrong latest version

Command needed to reproduce the issue
$ minikube update-check

Full output of the failed command

CurrentVersion: v1.14.2
LatestVersion: v1.2.0

The reported latest version is wrong.

Operating system version used
CentOS 8
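
For comparison, the latest upstream release can be read straight from the GitHub API; a quick sketch to see what update-check should be reporting:

$ minikube update-check
$ curl -s https://api.github.com/repos/kubernetes/minikube/releases/latest | grep '"tag_name"'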

Wrong image digest in the minikube registry-aliases addon makes the images impossible to pull

Command needed to reproduce the issue
minikube addons enable registry-aliases

Full output of the failed command
Note: although minikube reports "addon registry-aliases enabled", the images are never actually pulled, so the pods cannot start. Upstream minikube does not have this problem.
*** registry-aliases-patch-core-dns

Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format
*** registry-aliases-hosts-update
Failed to pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/alpine, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11 Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/alpine, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

Output of the minikube logs command

Sep 23 15:11:18 minikube kubelet[3081]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:REGISTRY_ALIASES,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:registry-aliases,},Key:registryAliases,Optional:nil,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etchosts,ReadOnly:false,MountPath:/host-etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lsvzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-hosts-update-8hhtn_kube-system(a69c9e2f-c598-45d7-a826-b4ef09605a86): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/alpine, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Sep 23 15:11:18 minikube kubelet[3081]: E0923 15:11:18.080417 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/alpine, repository does not exist or may require 'docker login': denied: requested access to the resource is denied\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:11:29 minikube kubelet[3081]: E0923 15:11:29.790123 3081 kuberuntime_manager.go:895] container &Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:11:29 minikube kubelet[3081]: E0923 
15:11:29.790272 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:11:29 minikube kubelet[3081]: E0923 15:11:29.792566 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:11:40 minikube kubelet[3081]: E0923 15:11:40.790221 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:11:43 minikube kubelet[3081]: E0923 15:11:43.788772 3081 kuberuntime_manager.go:895] container &Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:11:43 minikube kubelet[3081]: E0923 15:11:43.788956 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag 
\\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:11:48 minikube kubelet[3081]: W0923 15:11:48.026322 3081 sysinfo.go:203] Nodes topology is not available, providing CPU topology Sep 23 15:11:55 minikube kubelet[3081]: E0923 15:11:55.792045 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:11:56 minikube kubelet[3081]: E0923 15:11:56.788154 3081 kuberuntime_manager.go:895] container &Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:11:56 minikube kubelet[3081]: E0923 15:11:56.788328 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:12:09 minikube kubelet[3081]: E0923 15:12:09.773448 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:12:10 minikube kubelet[3081]: E0923 15:12:10.768485 3081 kuberuntime_manager.go:895] container &Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:12:10 minikube kubelet[3081]: E0923 15:12:10.768670 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:12:20 minikube kubelet[3081]: E0923 15:12:20.771658 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:12:24 minikube kubelet[3081]: E0923 15:12:24.768805 3081 kuberuntime_manager.go:895] container 
&Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:12:24 minikube kubelet[3081]: E0923 15:12:24.769291 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:12:31 minikube kubelet[3081]: E0923 15:12:31.750138 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:12:37 minikube kubelet[3081]: E0923 15:12:37.747352 3081 kuberuntime_manager.go:895] container 
&Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:12:37 minikube kubelet[3081]: E0923 15:12:37.747432 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:12:45 minikube kubelet[3081]: E0923 15:12:45.750551 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:12:50 minikube kubelet[3081]: E0923 15:12:50.748005 3081 kuberuntime_manager.go:895] container 
&Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:12:50 minikube kubelet[3081]: E0923 15:12:50.748174 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:12:59 minikube kubelet[3081]: E0923 15:12:59.728387 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:13:01 minikube kubelet[3081]: E0923 15:13:01.726308 3081 kuberuntime_manager.go:895] container 
&Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:13:01 minikube kubelet[3081]: E0923 15:13:01.726445 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:13:12 minikube kubelet[3081]: E0923 15:13:12.727243 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:13:16 minikube kubelet[3081]: E0923 15:13:16.727074 3081 kuberuntime_manager.go:895] container 
&Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:13:16 minikube kubelet[3081]: E0923 15:13:16.728475 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:13:24 minikube kubelet[3081]: E0923 15:13:24.731664 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:13:29 minikube kubelet[3081]: E0923 15:13:29.706344 3081 kuberuntime_manager.go:895] container 
&Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:13:29 minikube kubelet[3081]: E0923 15:13:29.707226 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:13:37 minikube kubelet[3081]: E0923 15:13:37.707959 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86 Sep 23 15:13:43 minikube kubelet[3081]: E0923 15:13:43.706395 3081 kuberuntime_manager.go:895] container 
&Container{Name:core-dns-patcher,Image:registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube,ReadOnly:true,MountPath:/var/lib/minikube/binaries,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-7jjx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod registry-aliases-patch-core-dns--1-vjd5f_kube-system(6b264922-b3e5-4997-be10-7d83489b95ff): InvalidImageName: Failed to apply default image tag "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": couldn't parse image reference "registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9": invalid reference format Sep 23 15:13:43 minikube kubelet[3081]: E0923 15:13:43.706521 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"core-dns-patcher\" with InvalidImageName: \"Failed to apply default image tag \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": couldn't parse image reference \\\"registry.cn-hangzhou.aliyuncs.com/google_containers//rhdevelopers/core-dns-patcher@sha256:9220ff32f690c3d889a52afb59ca6fcbbdbd99e5370550cc6fd249adea8ed0a9\\\": invalid reference format\"" pod="kube-system/registry-aliases-patch-core-dns--1-vjd5f" podUID=6b264922-b3e5-4997-be10-7d83489b95ff Sep 23 15:13:52 minikube kubelet[3081]: E0923 15:13:52.709052 3081 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"update\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11@sha256:0bd0e9e03a022c3b0226667621da84fc9bf562a9056130424b5bfbd8bcb0397f\\\"\"" pod="kube-system/registry-aliases-hosts-update-8hhtn" podUID=a69c9e2f-c598-45d7-a826-b4ef09605a86

Operating system version used
macOS 11.6
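
The core-dns-patcher reference fails because of the doubled slash after google_containers, and the alpine pull fails because the mirror does not publish that repository. A rough sketch of a manual workaround for the alpine part (the digest pin in the pod spec may still prevent a tag-only match, so treat this as an assumption rather than a guaranteed fix):

$ docker pull alpine:3.11
$ docker tag alpine:3.11 registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11
$ minikube image load registry.cn-hangzhou.aliyuncs.com/google_containers/alpine:3.11   # older minikube releases use "minikube cache add" instead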

coredns image cannot be pulled

minikube start --registry-mirror=https://registry.docker-cn.com

error log: Unable to load cached images: loading cached images: stat /Users/wanglimiao/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns_v1.8.4: no such file or directory
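
The stat error shows minikube looking for the nested coredns/coredns path that newer Kubernetes releases use, while the mirror publishes the image under a flat name. A minimal sketch of a manual workaround (the flat tag below is an assumption about the mirror's layout):

$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.4
$ minikube image load registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.4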

Specifying Docker's insecure-registry setting fails

The start command is as follows:
minikube start
--driver='virtualbox'
--image-mirror-country='cn'
--image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'
--registry-mirror=https://registry.docker-cn.com
--insecure-registry=['0.0.0.0/0']

I tried many values for the final insecure-registry flag, such as 0.0.0.0/0 and '0.0.0.0/0', but none of them worked. In the end I found that the insecure-registry setting in /usr/lib/systemd/system/docker.service stays at 10.96.0.0/12, so minikube cannot pull images from the private registry deployed on my host.
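
For reference, --insecure-registry is only honored when the machine is first created, so an existing VM keeps whatever the daemon was originally configured with; the value is also expected to be a plain CIDR or host:port rather than a bracketed list. A minimal sketch (the CIDR below is only an example for the default VirtualBox host-only network; adjust it to where the private registry actually lives):

$ minikube delete
$ minikube start --driver=virtualbox --image-mirror-country=cn \
    --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
    --insecure-registry="192.168.99.0/24"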

save image fails during startup

Command needed to reproduce the issue
minikube start --driver=docker
Full output of the failed command


save image to file "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.3" -> "/Users/matrix/.minikube/cache/images/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.18.3" failed: write: unexpected EOF

Output of the minikube logs command

Operating system version used
macOS
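
The unexpected EOF happens while minikube saves the freshly pulled image into its local cache. Two possible workarounds, sketched under the assumption that the image itself can be pulled: pre-pull it so the save has a complete local copy, or skip the on-disk image cache entirely:

$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.3
$ minikube start --driver=docker --cache-images=false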
