
Earlier we covered the basics of containerd and how to switch an existing Kubernetes cluster from the Docker runtime to containerd. Next we will use kubeadm to build, from scratch, a Kubernetes cluster that uses containerd as its container runtime, installing the latest release, v1.22.1.
Environment Preparation
We use 3 nodes, all running CentOS 7.6 with kernel 3.10.0-1062.4.1.el7.x86_64. Add the hosts entries on every node:
➜ ~ cat /etc/hosts
192.168.31.30 master
192.168.31.95 node1
192.168.31.215 node2

The hostname of each node must be a valid DNS name; in particular, never leave it as the default localhost, which causes all kinds of errors. In Kubernetes, machine names and all API objects stored in etcd must use standard DNS naming (RFC 1123). You can change a hostname with hostnamectl set-hostname node1.
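For example, run hostnamectl on each node with that node's own name (the names here match the hosts file above):

➜ ~ hostnamectl set-hostname master   # on 192.168.31.30
➜ ~ hostnamectl set-hostname node1    # on 192.168.31.95
➜ ~ hostnamectl set-hostname node2    # on 192.168.31.215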
Disable the firewall:
➜ ~ systemctl stop firewalld
➜ ~ systemctl disable firewalld
Disable SELinux:
➜ ~ setenforce 0
➜ ~ cat /etc/selinux/config
SELINUX=disabled
Since enabling IPv4 forwarding in the kernel requires the br_netfilter module, load that module first:
➜ ~ modprobe br_netfilter
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets crossing a Linux bridge. For example, once net.bridge.bridge-nf-call-iptables=1 is set, packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules. The common options are:
net.bridge.bridge-nf-call-arptables: whether bridged ARP packets are filtered by the arptables FORWARD chain
net.bridge.bridge-nf-call-ip6tables: whether bridged IPv6 packets are filtered by the ip6tables chains
net.bridge.bridge-nf-call-iptables: whether bridged IPv4 packets are filtered by the iptables chains
net.bridge.bridge-nf-filter-vlan-tagged: whether VLAN-tagged packets are filtered by iptables/arptables
Run the following command to make the changes take effect:
➜ ~ sysctl -p /etc/sysctl.d/k8s.conf
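To confirm the settings took effect, you can read them back; with the file above applied, the expected output is:

➜ ~ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1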
Install ipvs:
➜ ~ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
➜ ~ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

The script above creates the file /etc/sysconfig/modules/ipvs.modules, which guarantees the required modules are loaded automatically after a node reboot. Check that the kernel modules loaded correctly with lsmod | grep -e ip_vs -e nf_conntrack_ipv4.
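For reference, on a correctly prepared node the check looks roughly like this (module sizes and use counts below are illustrative, not captured from these machines):

➜ ~ lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      15053  0
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133095  2 ip_vs,nf_conntrack_ipv4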
Next, make sure the ipset package is installed on every node:
➜ ~ yum install ipset
To make it easy to inspect the ipvs proxy rules, it is also worth installing the management tool ipvsadm:
➜ ~ yum install ipvsadm
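Once kube-proxy is running in ipvs mode later in this guide, ipvsadm can list the virtual server table; on a freshly prepared node the table is simply still empty:

➜ ~ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn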
Synchronize Server Time
➜ ~ yum install chrony -y
➜ ~ systemctl enable chronyd
➜ ~ systemctl start chronyd
➜ ~ chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^+ sv1.ggsrv.de                  2   6    17    32   -823us[-1128us] +/-   98ms
^- montreal.ca.logiplex.net      2   6    17    32    -17ms[  -17ms] +/-  179ms
^- ntp6.flashdance.cx            2   6    17    32    -32ms[  -32ms] +/-  161ms
^* 119.28.183.184                2   6    33    32   +661us[ +357us] +/-   38ms
➜ ~ date
Tue Aug 31 14:36:14 CST 2021
Disable the swap partition:
➜ ~ swapoff -a
Edit /etc/fstab and comment out the automatic swap mount, then confirm swap is off with free -m. Also tune the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Run sysctl -p /etc/sysctl.d/k8s.conf to make the change take effect.
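After these steps, free -m should report an all-zero Swap row; the memory figures below are illustrative, only the Swap line matters:

➜ ~ free -m
              total        used        free      shared  buff/cache   available
Mem:           3770         289        3124          11         356        3220
Swap:             0           0           0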
Install Containerd
We have already walked through the basic usage of the containerd runtime; now install containerd on each node.
Since containerd needs to invoke runc, runc must be installed as well; fortunately containerd provides an archive that bundles the related dependencies, cri-containerd-cni-${VERSION}.${OS}-${ARCH}.tar.gz, which we can use to install everything directly. First download the latest release from the Releases page, currently version 1.5.5:
➜ ~ wget https://github.com/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz
# If the download is restricted, the mirror URL below can speed it up
# wget https://download.fastgit.org/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz
Unpack the archive straight into the system directories:
➜ ~ tar -C / -xzf cri-containerd-cni-1.5.5-linux-amd64.tar.gz
Then append /usr/local/bin and /usr/local/sbin to the PATH environment variable in ~/.bashrc:
export PATH=$PATH:/usr/local/bin:/usr/local/sbin
And run the following so it takes effect immediately:
➜ ~ source ~/.bashrc
containerd's default configuration file is /etc/containerd/config.toml; we can generate a default configuration with:
➜ ~ mkdir -p /etc/containerd
➜ ~ containerd config default > /etc/containerd/config.toml
On Linux distributions that use systemd as the init system, using systemd as the container cgroup driver keeps nodes more stable under resource pressure, so it is recommended to set containerd's cgroup driver to systemd. Edit the generated /etc/containerd/config.toml and set SystemdCgroup to true under the plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options block:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
  ...
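A quick sanity check that the edit landed (nothing assumed here beyond the file we just edited):

➜ ~ grep SystemdCgroup /etc/containerd/config.toml
            SystemdCgroup = true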
Then configure an accelerator for the image registries: mirrors are declared with registry.mirrors under the registry block of the cri plugin:
[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.5"
  sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.5"
  ...
  [plugins."io.containerd.grpc.v1.cri".registry]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://bqr1dr1n.mirror.aliyuncs.com"]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
        endpoint = ["https://registry.aliyuncs.com/k8sxio"]
The containerd archive we downloaded above also ships an etc/systemd/system/containerd.service file, so we can run containerd as a daemon managed by systemd. Start it now with:
➜ ~ systemctl daemon-reload
➜ ~ systemctl enable containerd --now
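With containerd up, an optional smoke test for the docker.io mirror configured above is pulling an image through the CRI with crictl (introduced just below); the image name here is only an example, and we assume the /etc/crictl.yaml endpoint config shipped in the cri-containerd-cni bundle is in place:

➜ ~ crictl pull docker.io/library/nginx:alpine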
Once started, containerd's local CLI tools ctr and crictl are available, for example to check the versions:
➜ ~ ctr version
Client:
  Version:  v1.5.5
  Revision: 72cec4be58a9eb6b2910f5d10f1c01ca47d231c0
  Go version: go1.16.6

Server:
  Version:  v1.5.5
  Revision: 72cec4be58a9eb6b2910f5d10f1c01ca47d231c0
  UUID: cd2894ad-fd71-4ef7-a09f-5795c7eb4c3b
➜ ~ crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.5.5
RuntimeApiVersion:  v1alpha2

Deploy Kubernetes with kubeadm
With the environment configured, we can now install kubeadm, here by pointing yum at a dedicated repository:
➜ ~ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
The yum repository above requires access to Google's servers; if that is not reachable from your network, use the Alibaba Cloud mirror instead:
➜ ~ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Then install kubeadm, kubelet, and kubectl:
# --disableexcludes disables every repo other than kubernetes
➜ ~ yum makecache fast
➜ ~ yum install -y kubelet-1.22.1 kubeadm-1.22.1 kubectl-1.22.1 --disableexcludes=kubernetes
➜ ~ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:44:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
We can see that v1.22.1 is installed. Now set kubelet to start on boot on the master node:
➜ ~ systemctl enable --now kubelet
Everything up to this point must be performed on all nodes.
Initialize the Cluster
Running kubelet --help shows that most of the original command-line flags are now DEPRECATED. This is because the official recommendation is to pass a configuration file via --config and set those parameters there; see the official document Set Kubelet parameters via a config file for details. This also lets Kubernetes support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.
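As an illustrative sketch of what this looks like on a typical kubeadm-provisioned node (the flag values below are assumptions, not captured from this cluster), the kubelet ends up being launched with only a handful of flags, everything else coming from the config file:

/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml \
    --kubeconfig=/etc/kubernetes/kubelet.conf \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
    --container-runtime=remote \
    --container-runtime-endpoint=/run/containerd/containerd.sock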
We can dump the default configuration used for cluster initialization on the master node with:
➜ ~ kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm.yaml
Then adjust the configuration to our needs: set imageRepository to the address used to pull the images Kubernetes needs during initialization, set the kube-proxy mode to ipvs, and, since we plan to install the flannel network plugin, set networking.podSubnet to 10.244.0.0/16:
# kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.31.30  # the master node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock  # use containerd's Unix socket
  imagePullPolicy: IfNotPresent
  name: master
  taints:  # taint the master so workloads are not scheduled onto it
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy mode
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/k8sxio
kind: ClusterConfiguration
kubernetesVersion: 1.22.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16  # specify the pod subnet
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
cgroupDriver: systemd  # configure the cgroup driver
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
The documentation for these resource manifests is scattered; to fully understand the fields of the objects above, consult the godoc reference at https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3.
Before initializing the cluster, we can pre-pull all the container images Kubernetes needs on each server node. With the configuration file ready, pull them as follows:
➜ ~ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-apiserver:v1.22.1
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.22.1
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-scheduler:v1.22.1
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-proxy:v1.22.1
[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.5
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.5.0-0
failed to pull image "registry.aliyuncs.com/k8sxio/coredns:v1.8.4": output: time="2021-08-31T15:09:13+08:00" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"registry.aliyuncs.com/k8sxio/coredns:v1.8.4\": failed to resolve reference \"registry.aliyuncs.com/k8sxio/coredns:v1.8.4\": registry.aliyuncs.com/k8sxio/coredns:v1.8.4: not found"
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
Pulling the coredns image failed above because the image does not exist at that address. We can pull it manually and then re-tag it with the expected address:
➜ ~ ctr -n k8s.io i pull docker.io/coredns/coredns:1.8.4
docker.io/coredns/coredns:1.8.4:                                                  resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4: done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:bc38a22c706b427217bcbd1a7ac7c8873e75efdd0e59d6b9f069b4b243db4b4b:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c6568d217a0023041ef9f729e8836b19f863bcdb612bb3a329ebc165539f5a80:    exists         |++++++++++++++++++++++++++++++++++++++|
elapsed: 12.4s                                                                    total:  12.0 M (991.3 KiB/s)
unpacking linux/amd64 sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890...
done: 410.185888ms
➜ ~ ctr -n k8s.io i tag docker.io/coredns/coredns:1.8.4 registry.aliyuncs.com/k8sxio/coredns:v1.8.4
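To confirm both references now exist in containerd's k8s.io namespace:

➜ ~ ctr -n k8s.io i ls -q | grep coredns
docker.io/coredns/coredns:1.8.4
registry.aliyuncs.com/k8sxio/coredns:v1.8.4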
Now we can initialize the master node with the configuration file above:
➜ ~ kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.31.30]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.31.30 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.31.30 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.501933 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.30:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8c1f43da860b0e7bd9f290fe057f08cf7650b89e650ff316ce4a9cad3834475c
Copy the kubeconfig file as the installation output instructs:
➜ ~ mkdir -p $HOME/.kube
➜ ~ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
➜ ~ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then kubectl confirms the master node initialized successfully:
➜ ~ kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   2m10s   v1.22.1

Add Nodes
Remember that the environment configuration from the beginning of this guide must already be done on the node. Copy the master's $HOME/.kube/config file to the corresponding location on the node, install kubeadm, kubelet, and kubectl (optional), and then run the join command printed at the end of the init output above:
➜ ~ kubeadm join 192.168.31.30:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:8c1f43da860b0e7bd9f290fe057f08cf7650b89e650ff316ce4a9cad3834475c
[preflight] Running pre-flight checks
[preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If you forget the join command above, you can retrieve it again with kubeadm token create --print-join-command.
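For example (the new token below is a placeholder; the CA cert hash matches the one from this cluster's init output):

➜ ~ kubeadm token create --print-join-command
kubeadm join 192.168.31.30:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:8c1f43da860b0e7bd9f290fe057f08cf7650b89e650ff316ce4a9cad3834475c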
After it succeeds, run the get nodes command:
➜ ~ kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   Ready      control-plane,master   47m   v1.22.1
node2    NotReady   <none>                 46s   v1.22.1
node2 shows NotReady because the network plugin has not been installed yet. Choose a network plugin from the list at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/; here we install flannel:
➜ ~ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If a node has multiple network interfaces, specify the internal NIC in the manifest:
# find the DaemonSet named kube-flannel-ds and edit the kube-flannel container
➜ ~ vi kube-flannel.yml
......
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.14.0
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0  # on multi-NIC nodes, specify the internal NIC name
......
➜ ~ kubectl apply -f kube-flannel.yml  # install the flannel network plugin
Check the Pod status after a short while:
➜ ~ kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7568f67dbd-5mg59         1/1     Running   0          8m32s
coredns-7568f67dbd-b685t         1/1     Running   0          8m31s
etcd-master                      1/1     Running   0          66m
kube-apiserver-master            1/1     Running   0          66m
kube-controller-manager-master   1/1     Running   0          66m
kube-flannel-ds-dsbt6            1/1     Running   0          11m
kube-flannel-ds-zwlm6            1/1     Running   0          11m
kube-proxy-jq84n                 1/1     Running   0          66m
kube-proxy-x4hbv                 1/1     Running   0          19m
kube-scheduler-master            1/1     Running   0          66m

After the network plugin is deployed, running ifconfig normally shows the two new virtual devices cni0 and flannel.1. If cni0 is missing, don't worry: check whether the /var/lib/cni directory exists. If it doesn't, that is not a deployment problem; it just means no application has run on that node yet. As soon as a Pod runs there, the directory is created and the cni0 device appears.
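A quick way to trigger that, if needed, is running a throwaway Pod on the node (the Pod name and image here are arbitrary examples):

➜ ~ kubectl run test-nginx --image=nginx:alpine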
With the network plugin running, the node status is now normal as well:
➜ ~ kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
master   Ready    control-plane,master   111m   v1.22.1
node2    Ready    <none>                 64m    v1.22.1
Add the other node the same way.
Dashboard
A v1.22.1 cluster needs the latest 2.0+ release of the Dashboard:
# The recommended approach
➜ ~ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
➜ ~ vi recommended.yaml
# Change the Service to NodePort type
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # add type=NodePort to make it a NodePort Service
......
Create it directly:
➜ ~ kubectl apply -f recommended.yaml
The new Dashboard is installed in the kubernetes-dashboard namespace by default:
➜ ~ kubectl get pods -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-856586f554-pllvt   1/1     Running   0          24m   10.88.0.7   master   <none>           <none>
kubernetes-dashboard-76597d7df5-82998        1/1     Running   0          21m   10.88.0.2   node2    <none>           <none>
Looking closely, we notice that the Pods above were assigned IPs in the 10.88.x.x range, and the same holds for the CoreDNS Pods installed automatically earlier. But didn't we configure podSubnet as 10.244.0.0/16? Let's first look at the CNI configuration files:
➜ ~ ls -la /etc/cni/net.d/
total 8
drwxr-xr-x  2 1001 docker  67 Aug 31 16:45 .
drwxr-xr-x. 3 1001 docker  19 Jul 30 01:13 ..
-rw-r--r--  1 1001 docker 604 Jul 30 01:13 10-containerd-net.conflist
-rw-r--r--  1 root root   292 Aug 31 16:45 10-flannel.conflist
There are two configurations here: 10-containerd-net.conflist, and the one generated by the Flannel network plugin we installed above. We obviously want the Flannel configuration to be used, so let's inspect containerd's bundled CNI plugin configuration:
➜ ~ cat /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "0.4.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{
            "subnet": "10.88.0.0/16"
          }],
          [{
            "subnet": "2001:4860:4860::/64"
          }]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
The IP range there is exactly 10.88.0.0/16, and this CNI plugin is of type bridge, with a bridge named cni0:
➜ ~ ip a
...
6: cni0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:e7:eb:40:e8:66 brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.1/16 brd 10.88.255.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 2001:4860:4860::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::98e7:ebff:fe40:e866/64 scope link
       valid_lft forever preferred_lft forever
...
Containers on a bridge network cannot communicate across hosts, though; cross-host communication needs another CNI plugin such as the Flannel we installed above, or Calico. Since there are two CNI configurations here, we must remove 10-containerd-net.conflist: when this directory holds multiple CNI configuration files, the kubelet uses the first one in lexicographic file-name order, which is why the containerd-net plugin was being picked by default.
➜ ~ mv /etc/cni/net.d/10-containerd-net.conflist /etc/cni/net.d/10-containerd-net.conflist.bak
➜ ~ ifconfig cni0 down && ip link delete cni0
➜ ~ systemctl daemon-reload
➜ ~ systemctl restart containerd kubelet
Then remember to recreate the coredns and dashboard Pods so they pick up the new network configuration; once rebuilt, their IP addresses come out correctly, as shown below.
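A minimal way to force the rebuild, assuming the CoreDNS label selector k8s-app=kube-dns (it appears in the listing below) and simply deleting everything in the dashboard namespace:

➜ ~ kubectl delete pod -n kube-system -l k8s-app=kube-dns
➜ ~ kubectl delete pod -n kubernetes-dashboard --all

The owning Deployments recreate the Pods immediately, and the new Pods land on the flannel network: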
➜ ~ kubectl get pods -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-856586f554-tp8m5   1/1     Running   0          42s   10.244.1.6   node2   <none>           <none>
kubernetes-dashboard-76597d7df5-9rmbx        1/1     Running   0          66s   10.244.1.5   node2   <none>           <none>
➜ ~ kubectl get pods -n kube-system -o wide -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
coredns-7568f67dbd-n7bfx   1/1     Running   0          5m40s   10.244.1.2   node2   <none>           <none>
coredns-7568f67dbd-plrv8   1/1     Running   0          3m47s   10.244.1.4   node2   <none>           <none>
Check the Dashboard's NodePort:
➜ ~ kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.99.37.172    <none>        8000/TCP        25m
kubernetes-dashboard        NodePort    10.103.102.27   <none>        443:31050/TCP   25m
Then access the Dashboard through port 31050 above. Remember to use https; if it does not work in Chrome, try Firefox, and if the page still won't open there, simply trust the certificate on the warning page:

Trust the certificate
Once trusted, the Dashboard login page appears:

Then create a user with full cluster-wide permissions to log into the Dashboard (admin.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
Create it directly:
➜ ~ kubectl apply -f admin.yaml
➜ ~ kubectl get secret -n kubernetes-dashboard | grep admin-token
admin-token-lwmmx   kubernetes.io/service-account-token   3     1d
➜ ~ kubectl get secret admin-token-lwmmx -o jsonpath={.data.token} -n kubernetes-dashboard | base64 -d
# prints a long base64-decoded string
Use that base64-decoded string as the token to log into the Dashboard. The new version also adds a dark theme:

With that, we have finished building a v1.22.1 Kubernetes cluster with kubeadm, using coredns, ipvs, flannel, and containerd.
➜ ~ kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master   Ready    control-plane,master   36m   v1.22.1   192.168.31.30    <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   containerd://1.5.5
node2    Ready    <none>                 27m   v1.22.1   192.168.31.215   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   containerd://1.5.5

Cleanup
If you ran into other problems during the cluster installation, you can reset and start over with:
➜ ~ kubeadm reset
➜ ~ ifconfig cni0 down && ip link delete cni0
➜ ~ ifconfig flannel.1 down && ip link delete flannel.1
➜ ~ rm -rf /var/lib/cni/