
Building a k8s Cluster Environment from Scratch (4): Adding a Node

This post is part of my series of notes on building a k8s cluster environment from scratch.

After a full day of tinkering, the Master is finally initialized. Now let's hatch the Node!

Environment:

  • Master: VBox VM / CentOS 7.5-1804 / 192.168.56.100
  • Images: VBox VM / CentOS 7.5-1804 / 192.168.56.101
  • Node01: VBox VM / CentOS 7.5-1804 / 192.168.56.102

Software versions:

  • docker-ce-17.03.2
  • kubernetes-v1.9.0
  • harbor-v1.4.0

This time the focus is, of course, the Node.

1. Environment Preparation

The Node environment is similar to the Master's, so we need to prepare a few things here as well:

Disable the swap partition

Swap hurts performance. At least, that's what k8s says...

1. Disable swap temporarily (restored after a reboot):

swapoff -a

2. Disable swap permanently:

vim /etc/fstab

# Just comment out the swap partition entry
# swap was on /dev/sda11 during installation
# UUID=0a55fdb5-a9d8-4215-80f7-f42f75644f69 none  swap    sw      0       0
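The manual edit above can also be scripted. A minimal sketch, demonstrated on a copy so the real /etc/fstab is untouched (the regex assumes the swap entry has a whitespace-delimited `swap` field):

```shell
# Demo file standing in for /etc/fstab
cat > /tmp/fstab.demo <<'EOF'
UUID=0a55fdb5-a9d8-4215-80f7-f42f75644f69 none  swap    sw      0       0
/dev/sda1 / ext4 defaults 0 1
EOF

# Comment out any uncommented line whose fields include "swap"
sed -E -i 's/^([^#].*[[:space:]]swap[[:space:]].*)/# \1/' /tmp/fstab.demo

cat /tmp/fstab.demo
```

Running the same `sed` against /etc/fstab (after backing it up) achieves the permanent disable in one step.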

Docker & Docker registry mirror

As in the previous post, we install docker-ce-17.03.2 and use the same registry mirror.

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm

# Install Docker and its dependencies
yum install -y docker-ce-*.rpm

# Enable at boot
[root@localhost ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

# Start the docker service
[root@localhost ~]# systemctl restart docker

The "registry mirror" here refers to the mirror site that docker pull fetches images from, not the k8s images themselves. We won't deploy the k8s images yet; we'll do that later when they're needed.
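For reference, a registry mirror is configured through the Docker daemon's config file. A sketch, with a placeholder mirror URL (substitute the accelerator address you actually use):

```shell
# Point the Docker daemon at a registry mirror; the URL below is a placeholder
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF

# Restart the daemon so the mirror takes effect
systemctl restart docker
```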

kubeadm, kubectl, kubelet, kubernetes-cni

Without these, there's no game to play~

Align the Cgroup Driver of Docker and Kubelet

Same as in the previous post; no further explanation needed.

Disable the firewall

Commands to disable the firewall:

service iptables stop
service firewalld stop

# Disable at boot
systemctl disable iptables
systemctl disable firewalld

I got badly burned by the firewall while setting up the Master, so this time I've learned my lesson and am disabling the Node's firewall early~
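If disabling the firewall outright feels too heavy-handed, an alternative sketch (not what this series does) is to open just the ports a worker node needs:

```shell
# Node side: kubelet API port and the NodePort service range
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload
```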

2. Loading the Images

When deploying each node, the following two images must be loaded; if they aren't present locally, they will be pulled from Google's servers. So we pull them in advance:

gcr.io/google_containers/pause-amd64:3.0
gcr.io/google_containers/kube-proxy-amd64:v1.9.8

I have already pushed these to the Aliyun image registry as well; see Part 3 for details. Fetch them with:

#!/usr/bin/env bash
images=(
pause-amd64:3.0
kube-proxy-amd64:v1.9.8
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-shenzhen.aliyuncs.com/lx0758/$imageName
    docker tag registry.cn-shenzhen.aliyuncs.com/lx0758/$imageName gcr.io/google_containers/$imageName
    docker rmi registry.cn-shenzhen.aliyuncs.com/lx0758/$imageName
done

Let's check the result after pulling:

[root@localhost ~]# docker images
REPOSITORY                                 TAG     IMAGE ID      CREATED       SIZE
gcr.io/google_containers/kube-proxy-amd64  v1.9.8  0e938a71ef0c  13 hours ago  109 MB
gcr.io/google_containers/pause-amd64       3.0     ec38a2020e09  18 hours ago  747 kB

OK, the Docker images are sorted. The next step is to join the Node to the cluster.

3. Joining the Cluster

Start kubelet

kubelet is the node agent service that runs on every machine, so make sure it is working before the join:

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

The actual join

In the previous post we successfully initialized the Master and obtained the join command and its token, so we can just run it directly:

[root@localhost ~]# kubeadm join --token 61dcb1.71255fe1915d0048 192.168.56.101:6443 --discovery-token-ca-cert-hash sha256:28116d25a952e58d4ad94c68a5190e17bebac4f7d066fe3a9c8dc78f93eb4108
[preflight] Running pre-flight checks.
	[WARNING Hostname]: hostname "localhost.node" could not be reached
	[WARNING Hostname]: hostname "localhost.node" lookup localhost.node on 192.168.7.2:53: no such host
	[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.56.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.56.101:6443"
[discovery] Requesting info from "https://192.168.56.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.56.101:6443"
[discovery] Successfully established connection with API Server "192.168.56.101:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

As you can see, the Node reports a successful join. Let's confirm on the Master:

[root@localhost ~]# kubectl get nodes
NAME               STATUS     ROLES     AGE       VERSION
localhost.master   NotReady   master    23m       v1.9.0
localhost.node     NotReady   <none>    12m       v1.9.0

4. Troubleshooting

Expired join token

By default, a token created by kubeadm init or kubeadm token create expires after 24 hours. Once expired, it can no longer be used with kubeadm join to add other nodes to the cluster, and listing tokens shows no record of it:

[root@localhost ~]# kubeadm token list
TOKEN     TTL       EXPIRES   USAGES    DESCRIPTION   EXTRA GROUPS

Creating a new token

On the Master, simply run kubeadm token create to create a new token, or kubeadm token create --ttl 0 to generate a token that never expires:

[root@localhost ~]# kubeadm token create
935477.bcd80c2e5088763b
[root@localhost ~]# kubeadm token create --ttl 0
6ed996.aea18ca3b54760fd
[root@localhost ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
6ed996.aea18ca3b54760fd   <forever>   <never>                     authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
935477.bcd80c2e5088763b   23h         2018-05-23T17:25:08+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

Computing the new join parameters

Joining a node to the cluster requires running:

kubeadm join --token {token} {ip:port} --discovery-token-ca-cert-hash sha256:{sha256}

So we also need to compute the sha256 hash:

[root@localhost ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
28116d25a952e58d4ad94c68a5190e17bebac4f7d066fe3a9c8dc78f93eb4108

A new node can now join the cluster using the token and sha256 above.
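What that openssl pipeline computes is simply the SHA-256 digest of the CA certificate's DER-encoded public key. Here is the same pipeline demonstrated on a throwaway self-signed cert, so it can be run anywhere; on the Master you would point it at /etc/kubernetes/pki/ca.crt as above:

```shell
# Generate a disposable CA-like certificate just for the demo
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Extract the public key, DER-encode it, hash it, keep only the hex digest
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$HASH"
```

The result is always a 64-character hex string, which goes after the `sha256:` prefix in the join command.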

Of course, if you'd rather skip this step, you can bypass the sha256 verification with a flag, at the cost of lower security:

kubeadm join --token {token} {ip:port} --discovery-token-unsafe-skip-ca-verification

x509: certificate is valid for x.x.x.x, not x.x.x.x

This happens because, when the Master was initialized, the IP of the NIC the nodes use to communicate with each other was not included in the certificate.
Initialize with --apiserver-cert-extra-sans={IPS}, separating the IPs with commas, e.g.:

kubeadm init --apiserver-cert-extra-sans=127.0.0.1,127.0.0.2
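To see which addresses a certificate actually covers, you can dump its Subject Alternative Names. A sketch on a throwaway cert mimicking the two extra SANs (the -addext flag needs OpenSSL 1.1.1+; on the Master you would inspect /etc/kubernetes/pki/apiserver.crt instead):

```shell
# Create a demo cert with two SAN IPs, mimicking --apiserver-cert-extra-sans
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=kube-apiserver' \
  -addext 'subjectAltName=IP:127.0.0.1,IP:127.0.0.2' \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt 2>/dev/null

# List the Subject Alternative Names the cert is valid for
openssl x509 -in /tmp/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
```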

iptables intercepting bridge traffic

The join reports the following error:

[root@localhost ~]# kubeadm join --token 8a82fd.05d65480b13cbc10 192.168.56.100:6443 --discovery-token-ca-cert-hash sha256:bea1c696f02db219ce561b3213556b0a098890f6485fbac6c0014012b9d214ea
[preflight] Running pre-flight checks.
	[WARNING Hostname]: hostname "localhost.node01" could not be reached
	[WARNING Hostname]: hostname "localhost.node01" lookup localhost.node01 on 192.168.7.2:53: no such host
	[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Enable iptables processing of bridged traffic:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
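A small helper of my own (a sketch, not part of the original steps) to verify the knobs took effect before re-running kubeadm join; note the files only exist once the br_netfilter module is loaded:

```shell
# Print both bridge netfilter knobs, or a hint if the module isn't loaded
check_bridge_nf() {
    for f in /proc/sys/net/bridge/bridge-nf-call-iptables \
             /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
        if [ -f "$f" ]; then
            echo "$f = $(cat "$f")"
        else
            echo "$f missing (try: modprobe br_netfilter)"
        fi
    done
}
check_bridge_nf
```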

Retrying a join against the Master

After a failed join, you need to reset before joining again:

kubeadm join ...

kubeadm reset

kubeadm join ...

...

join reports "no route to host"

Heh, you forgot to disable the Master's firewall, didn't you~

kubectl doesn't work on the Node

You need to copy the config file from the Master to the node first:

# Run this on the Master; other transfer methods work too
[root@localhost ~]# scp .kube/config root@192.168.56.102:/root/.kube/
root@192.168.56.102's password:
config

5. References

  1. kubeadm join - Kubernetes