Installing Kubernetes 1.13 with kubeadm (code snippets)

Preface

kubeadm lets you install a Kubernetes cluster quickly. Its main functionality has reached GA; high availability, among a few other features, is still alpha. The feature maturity levels are listed below:

Area                        Maturity Level
Command line UX             GA
Implementation              GA
Config file API             beta
CoreDNS                     GA
kubeadm alpha subcommands   alpha
High availability           alpha
DynamicKubeletConfig        alpha
Self-hosting                alpha

Our production Kubernetes clusters currently run as highly available, pod-hosted deployments. The goal of trying kubeadm on Kubernetes 1.13 here is to understand the officially recommended way of initializing and configuring a cluster.

1. Preparation

1.1 System configuration

Before installing, do the following preparation. The three CentOS 7.5 hosts are:

cat /etc/hosts
192.168.1.57 master
192.168.1.33 node1
192.168.1.34 node2

If the firewall is enabled on any host, open the ports required by the Kubernetes components; see the "Check required ports" section of Installing kubeadm. For simplicity, the firewall is disabled on all nodes here:

systemctl stop firewalld
systemctl disable firewalld
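
If you prefer to keep firewalld running instead, a rough sketch of opening the ports listed in the kubeadm "Check required ports" documentation (verify the list against your version before relying on it):

# On the control-plane node
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
# On the worker nodes
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload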

Disable SELinux:

setenforce 0
vi /etc/selinux/config
SELINUX=disabled
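
The same change can be made non-interactively, for example:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config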

Create /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

If you plan to use IPVS mode, also add the following keepalive settings on the nodes:

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10

Related issue:
https://github.com/moby/moby/issues/31208

Run the following commands to apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
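
To make sure br_netfilter is also loaded automatically after a reboot, one option (assuming systemd-modules-load, as on CentOS 7) is:

cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF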

1.2 Prerequisites for enabling IPVS in kube-proxy

Because IPVS is already part of the mainline kernel, enabling IPVS for kube-proxy only requires loading the following kernel modules first:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes (node1 and node2):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that they have been loaded correctly.

Also make sure the ipset package is installed on every node (yum install ipset). To make it easier to inspect the IPVS proxy rules, it is worth installing the management tool ipvsadm as well (yum install ipvsadm).
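
On CentOS both can be installed in one step:

yum install -y ipset ipvsadm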

If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables IPVS.

1.3 Install Docker

Kubernetes has used the CRI (Container Runtime Interface) since 1.6. The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.

Install the Docker yum repository:

cd /etc/yum.repos.d/
wget https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo

Check the available Docker versions:

yum list docker-ce.x86_64  --showduplicates |sort -r
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable

Kubernetes 1.12 was validated against Docker 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09 and 18.06; note that the minimum Docker version supported by Kubernetes 1.12 is 1.11.1. Kubernetes 1.13 has the same Docker version requirements.
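
Based on the validated list above, a sketch of installing and enabling a pinned 18.06 release (package name taken from the yum listing shown earlier):

yum install -y docker-ce-18.06.1.ce-3.el7
systemctl enable docker
systemctl start docker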

Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:

iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Docker changed its default firewall rules starting with 1.13 and set the FORWARD chain of the iptables filter table to DROP, which breaks cross-node pod communication in a Kubernetes cluster. Docker changed this back again in the later 18.x releases.
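
If the FORWARD policy shows DROP on your Docker version, one common workaround (a sketch; adjust to your environment) is to force it back to ACCEPT and persist that through a systemd drop-in for the docker service:

# One-off fix
iptables -P FORWARD ACCEPT

# Persist across docker restarts (assumes a systemd-managed Docker on CentOS 7)
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/forward-accept.conf <<EOF
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload
systemctl restart docker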

2. Deploy Kubernetes with kubeadm

2.1 Install kubeadm and kubelet

Install kubeadm and kubelet on each node.
Use the Aliyun Kubernetes yum repository, and remember to disable the GPG check:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1

Running kubelet --help shows that most of the kubelet's command-line flags are now DEPRECATED; the recommended approach is to pass --config and put those settings in a configuration file instead (see "Set Kubelet parameters via a config file"). Kubernetes does this to support Dynamic Kubelet Configuration; see "Reconfigure a Node's Kubelet in a Live Cluster".

The kubelet configuration file must be in JSON or YAML format; see the documentation for details.

Since Kubernetes 1.8, swap must be disabled on the system; with the default configuration the kubelet will refuse to start otherwise.
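
If disabling swap is acceptable in your environment, the usual steps are simply:

swapoff -a
# comment out swap entries so swap stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab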

All of my machines have swap enabled, and turning it off could affect other services, so instead I relax this restriction in the kubelet configuration. In earlier Kubernetes versions this was done with the kubelet startup flag --fail-swap-on=false. As explained above, Kubernetes no longer recommends startup flags and prefers a configuration file, so the plan was to switch to the config-file form.

Looking at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf shows the following:

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

The drop-in above shows that the kubelet deployed by kubeadm uses the configuration file --config=/var/lib/kubelet/config.yaml, yet neither /var/lib/kubelet nor this config.yaml exists on disk. Presumably the file is generated when kubeadm initializes the cluster, which means the first kubeadm init would fail if swap were still enabled.

So we fall back to the kubelet startup flag --fail-swap-on=false to lift the swap restriction. Edit /etc/sysconfig/kubelet and add:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

2.2 Initialize the cluster with kubeadm init

Enable the kubelet service on every node so it starts at boot:

systemctl enable kubelet.service

The default control-plane images come from gcr.io and cannot be pulled without unrestricted internet access, so we download them in advance and push them to a private registry.

Check the image versions:

kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.13.1
k8s.gcr.io/kube-controller-manager:v1.13.1
k8s.gcr.io/kube-scheduler:v1.13.1
k8s.gcr.io/kube-proxy:v1.13.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

Generate the pull commands:

kubeadm config images list |sed -e 's/^/docker pull /g'

Pull the images, retag them and upload them to the private registry:

docker images|grep gcr.io|awk '{print "docker tag ",$1":"$2,$1":"$2}'|sed 's#k8s.gcr.io#docker.foxchan.com/google_containers#2'

docker tag  k8s.gcr.io/kube-proxy:v1.13.1 docker.foxchan.com/google_containers/kube-proxy:v1.13.1
docker tag  k8s.gcr.io/kube-apiserver:v1.13.1 docker.foxchan.com/google_containers/kube-apiserver:v1.13.1
docker tag  k8s.gcr.io/kube-controller-manager:v1.13.1 docker.foxchan.com/google_containers/kube-controller-manager:v1.13.1
docker tag  k8s.gcr.io/kube-scheduler:v1.13.1 docker.foxchan.com/google_containers/kube-scheduler:v1.13.1
docker tag  k8s.gcr.io/etcd:3.2.24 docker.foxchan.com/google_containers/etcd:3.2.24
docker tag  k8s.gcr.io/coredns:1.2.6 docker.foxchan.com/google_containers/coredns:1.2.6
docker tag  k8s.gcr.io/pause:3.1 docker.foxchan.com/google_containers/pause:3.1
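
To actually push the retagged images to the private registry, a small sketch (assuming you are already logged in to docker.foxchan.com):

docker images | grep docker.foxchan.com/google_containers | awk '{print $1":"$2}' | xargs -n1 docker push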

2.2.1 Install via a configuration file

kubeadm config print init-defaults >kubeadm.conf

In kubeadm.conf, change imageRepository to your own private registry:
imageRepository: docker.emarbox.com/google_containers

and set kubernetesVersion to the version being installed:
kubernetesVersion: v1.13.1
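
The two edits can also be made non-interactively, for example (assuming the default field layout produced by kubeadm config print init-defaults):

sed -i 's#^imageRepository: .*#imageRepository: docker.emarbox.com/google_containers#' kubeadm.conf
sed -i 's#^kubernetesVersion: .*#kubernetesVersion: v1.13.1#' kubeadm.conf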

kubeadm config images list --config kubeadm.conf
kubeadm config images pull --config kubeadm.conf
kubeadm init --config kubeadm.conf

2.2.2 Install via command-line flags

kubeadm init \
  --kubernetes-version=v1.13.1 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=101.254.242.57 \
  --ignore-preflight-errors=Swap \
  --token-ttl 0 \
  --image-repository docker.emarbox.com/google_containers

--token-ttl 0 makes the bootstrap token generated at init time never expire.
If the token has expired, regenerate one with the following steps.
Create a new token:

kubeadm token create

Get the SHA256 hash of the CA certificate:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
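
Alternatively, kubeadm can print a ready-to-use join command, which avoids assembling the hash by hand:

kubeadm token create --print-join-command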

Because flannel will be used as the pod network add-on, the command above passes --pod-network-cidr=10.244.0.0/16.

[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.11]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.506551 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 702gz5.49zhotgsiyqimwqw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.57:6443 --token pm5kht.s9fh59lop2m34ge9 --discovery-token-ca-cert-hash sha256:9c0f2cc3ab7c2a4bee0532952600befc0e2faf02794c23fa6227870fef18b18a   --ignore-preflight-errors=Swap

The output above records the full initialization run; from it you can see the key steps needed to set up a Kubernetes cluster by hand.

The key points are:

  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
  • [certificates] generates the various certificates
  • [kubeconfig] generates the kubeconfig files
  • [bootstraptoken] generates the bootstrap token; write it down, it is needed later when adding nodes with kubeadm join
  • The following commands set up kubectl access to the cluster for a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Finally, the command for joining nodes to the cluster is given:

    kubeadm join 192.168.1.57:6443 --token pm5kht.s9fh59lop2m34ge9 --discovery-token-ca-cert-hash sha256:9c0f2cc3ab7c2a4bee0532952600befc0e2faf02794c23fa6227870fef18b18a --ignore-preflight-errors=Swap

Check the cluster status:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

Confirm that all components are in the Healthy state.

If cluster initialization runs into problems, the following commands can be used to clean up:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

2.3 Install the pod network

Next, install the flannel network add-on:

mkdir -p /etc/kubernetes/addons/
cd /etc/kubernetes/addons/
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Note that the flannel image referenced in kube-flannel.yml is 0.10.0: quay.io/coreos/flannel:v0.10.0-amd64.

If a node has multiple network interfaces, see flannel issue 39701: currently you need to pass --iface in kube-flannel.yml to name the interface on the cluster's internal network, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:

containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1

Use kubectl get pod --all-namespaces -o wide to make sure all pods are in the Running state.

kubectl get pod --all-namespaces -o wide
NAMESPACE       NAME                                            READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
default         curl-66959f6557-9dv5g                           1/1     Running   1          6d3h    10.244.0.4       192.168.1.57     <none>           <none>
ingress-nginx   nginx-ingress-controller-79b7dccb-6t2nr         1/1     Running   0          3d3h    10.244.2.15      192.168.1.34   <none>           <none>
ingress-nginx   nginx-ingress-controller-79b7dccb-r7ljd         1/1     Running   0          3d3h    10.244.1.29      192.168.1.33   <none>           <none>
ingress-nginx   nginx-ingress-default-backend-759456dbc-8nrsf   1/1     Running   0          3d3h    10.244.1.28      192.168.1.33   <none>           <none>
kube-system     coredns-5fcfdd4ccd-9bk7q                        1/1     Running   0          6d20h   10.244.0.2       192.168.1.57     <none>           <none>
kube-system     coredns-5fcfdd4ccd-dzw89                        1/1     Running   0          6d20h   10.244.0.3       192.168.1.57     <none>           <none>
kube-system     etcd-192.168.1.57                               1/1     Running   0          6d20h   192.168.1.57   192.168.1.57     <none>           <none>
kube-system     kube-apiserver-192.168.1.57                     1/1     Running   0          6d20h   192.168.1.57   192.168.1.57     <none>           <none>
kube-system     kube-controller-manager-192.168.1.57            1/1     Running   0          6d20h   192.168.1.57   192.168.1.57     <none>           <none>
kube-system     kube-flannel-ds-amd64-226tp                     1/1     Running   0          6d      192.168.1.33   192.168.1.33   <none>           <none>
kube-system     kube-flannel-ds-amd64-7r74j                     1/1     Running   0          6d      192.168.1.34   192.168.1.34   <none>           <none>
kube-system     kube-flannel-ds-amd64-rxzms                     1/1     Running   0          6d3h    192.168.1.57   192.168.1.57     <none>           <none>
kube-system     kube-proxy-5r4mg                                1/1     Running   0          5d23h   192.168.1.34   192.168.1.34   <none>           <none>
kube-system     kube-proxy-82867                                1/1     Running   0          5d23h   192.168.1.33   192.168.1.33   <none>           <none>
kube-system     kube-proxy-hqgkw                                1/1     Running   0          5d23h   192.168.1.57   192.168.1.57     <none>           <none>
kube-system     kube-scheduler-192.168.1.57                     1/1     Running   0          6d20h   192.168.1.57   192.168.1.57     <none>           <none>
kube-system     tiller-deploy-84bcb9978c-2blph                  1/1     Running   0          3d3h    10.244.2.13      192.168.1.34   <none>           <none>

2.4 Allow the master node to run workloads

In a kubeadm-initialized cluster, pods are not scheduled onto the master node for security reasons, i.e. the master does not run workloads. This is because the current master node, node1, carries the node-role.kubernetes.io/master:NoSchedule taint:

kubectl describe node node1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Since this is a test environment, remove the taint so node1 can run workloads:

kubectl taint nodes node1 node-role.kubernetes.io/master-
node "node1" untainted

2.5 Test DNS

kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-66959f6557-9dv5g:/ ]$

Inside the container, run nslookup kubernetes.default to confirm that resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

2.6 Add worker nodes to the cluster

Precedence: parameters in /etc/sysconfig/kubelet take priority over /var/lib/kubelet/config.yaml.
Here the node is registered under its IP address by overriding the hostname in the kubelet settings:

KUBELET_EXTRA_ARGS=--fail-swap-on=false --hostname-override=192.168.1.33
systemctl daemon-reload
systemctl restart kubelet

Now add node2 to the cluster. Because the swap restriction was also lifted in node2's kubelet startup parameters, the join likewise needs --ignore-preflight-errors=Swap. Run on node2:

kubeadm join 192.168.1.57:6443 --token 702gz5.49zhotgsiyqimwqw --discovery-token-ca-cert-hash sha256:2bc50229343849e8021d2aa19d9d314539b40ec7a311b5bb6ca1d3cd10957c2f  --ignore-preflight-errors=Swap

[preflight] Running pre-flight checks
        [WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.1.57:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443"
[discovery] Requesting info from "https://192.168.1.57:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.57:6443"
[discovery] Successfully established connection with API Server "192.168.1.57:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

node2 joined the cluster without any trouble. Now list the cluster nodes from the master:

kubectl get nodes
NAME             STATUS   ROLES    AGE     VERSION
192.168.1.33   Ready    edge     6d      v1.13.1
192.168.1.34   Ready    edge     6d      v1.13.1
192.168.1.57     Ready    master   6d20h   v1.13.1

Removing a node from the cluster
To remove node2 from the cluster, run the following commands.

On the master node:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

On the master:

kubectl delete node node2

2.7 Enable IPVS mode in kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

kubectl edit cm kube-proxy -n kube-system
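
For reference, the field to change lives inside the config.conf key of that ConfigMap; it is empty by default, which means iptables mode. A quick way to check the current value:

# mode: "" or "iptables" means iptables; set it to "ipvs"
kubectl get cm kube-proxy -n kube-system -o yaml | grep 'mode:'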

Then restart the kube-proxy pods on each node:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-pf55q                1/1     Running   0          9s
kube-proxy-qjnnc                1/1     Running   0          14s

kubectl logs kube-proxy-pf55q -n kube-system
I1208 06:12:23.516444       1 server_others.go:189] Using ipvs Proxier.
W1208 06:12:23.516738       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1208 06:12:23.516840       1 server_others.go:216] Tearing down inactive rules.
I1208 06:12:23.575222       1 server.go:464] Version: v1.13.0
I1208 06:12:23.585142       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1208 06:12:23.586203       1 config.go:202] Starting service config controller
I1208 06:12:23.586243       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1208 06:12:23.586269       1 config.go:102] Starting endpoints config controller
I1208 06:12:23.586275       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1208 06:12:23.686959       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I1208 06:12:23.687056       1 controller_utils.go:1034] Caches are synced for service config controller

The log line Using ipvs Proxier confirms that IPVS mode is now active.
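
The IPVS rules themselves can be inspected with the ipvsadm tool installed earlier:

# list the virtual servers and real servers created by kube-proxy
ipvsadm -Ln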

3. Deploy common Kubernetes components

More and more companies and teams are using Helm, the Kubernetes package manager, so we will also use Helm to install common Kubernetes components.

3.1 Install Helm

Helm consists of the helm command-line client and the server-side tiller component, and installing it is straightforward. Download the helm CLI (version 2.12.0 here) to /usr/local/bin on the master node node1:

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.0-linux-amd64.tar.gz
tar -zxvf helm-v2.12.0-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/

Because the Kubernetes API server has RBAC enabled, a service account named tiller must be created for tiller and given a suitable role; see "Role-based Access Control" in the Helm documentation for details. For simplicity, the built-in cluster-admin ClusterRole is bound to it directly. Create rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

Next, deploy tiller with helm init.
Use the Aliyun image mirror and point the stable repo at the Aliyun charts mirror:

helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.12.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

tiller is deployed into the kube-system namespace of the cluster by default:

kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
helm version
Client: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}

Note that parts of this require network access to gcr.io and kubernetes-charts.storage.googleapis.com. If that is not possible, you can point helm at a tiller image in a private registry instead, e.g. helm init --service-account tiller --tiller-image <your-docker-registry>/tiller:v2.11.0 --skip-refresh.

3.2 Deploy Nginx Ingress with Helm

To expose services in the cluster to the outside world, we next deploy Nginx Ingress onto Kubernetes with Helm. The Nginx Ingress Controller runs on the cluster's edge nodes; for high availability of Kubernetes edge nodes, see my earlier write-up on highly available Kubernetes Ingress edge nodes on bare metal (based on IPVS).

Make both node1 (192.168.1.33) and node2 (192.168.1.34) edge nodes by labeling them:

kubectl label node 192.168.1.33 node-role.kubernetes.io/edge=
node/node1 labeled

kubectl label node 192.168.1.34 node-role.kubernetes.io/edge=
node/node2 labeled

kubectl get node
NAME             STATUS   ROLES    AGE     VERSION
192.168.1.33   Ready    edge     6d      v1.13.1
192.168.1.34   Ready    edge     6d      v1.13.1
192.168.1.57     Ready    master   6d20h   v1.13.1

Values file ingress-nginx.yaml for the stable/nginx-ingress chart:

controller:
  replicaCount: 2
  service:
    externalIPs:
      - 192.168.1.68
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app 
              operator: In
              values:
              - nginx-ingress
            - key: component
              operator: In
              values:
              - controller
          topologyKey: kubernetes.io/hostname
  tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule

defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule

The replicaCount of the nginx ingress controller is 2, so it will be scheduled onto the two edge nodes, node1 and node2. The 192.168.1.68 given in externalIPs is the VIP, which will be bound to the kube-ipvs0 interface created by kube-proxy.
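
To double-check that the VIP really was bound, a quick look on one of the edge nodes (kube-ipvs0 is the dummy interface kube-proxy creates in IPVS mode):

ip addr show kube-ipvs0 | grep 192.168.1.68
ipvsadm -Ln | grep -A2 192.168.1.68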

helm install stable/nginx-ingress -n nginx-ingress --namespace ingress-nginx  -f ingress-nginx.yaml --set rbac.create=true
kubectl get pod -n ingress-nginx -o wide
NAME                                            READY   STATUS    RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
nginx-ingress-controller-79b7dccb-6t2nr         1/1     Running   0          3d3h   10.244.2.15   192.168.1.34   <none>           <none>
nginx-ingress-controller-79b7dccb-r7ljd         1/1     Running   0          3d3h   10.244.1.29   192.168.1.33   <none>           <none>
nginx-ingress-default-backend-759456dbc-8nrsf   1/1     Running   0          3d3h   10.244.1.28   192.168.1.33   <none>           <none>

If visiting http://192.168.1.68 returns the default backend, the deployment is complete.

Note: this VIP (192.168.1.68) is only reachable from nodes inside the Kubernetes cluster.
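
A quick smoke test from one of the cluster nodes; the default backend is expected to answer with a 404:

curl -I http://192.168.1.68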

In my actual test the VIP was not reachable, so I suspected a kube-proxy problem. Its log kept repeating the following lines:

I1224 02:46:27.793377       1 graceful_termination.go:160] Trying to delete rs: 10.103.201.140:44134/TCP/10.244.1.25:44134
I1224 02:46:27.793539       1 graceful_termination.go:173] Deleting rs: 10.103.201.140:44134/TCP/10.244.1.25:44134

This turned out to be Kubernetes issue https://github.com/kubernetes/kubernetes/issues/71071: the recently merged "IPVS proxier mode now support connection based graceful termination" change introduced a bug, so kube-proxy in IPVS mode is broken in Kubernetes 1.11.5, 1.12.1~1.12.3 and 1.13.0. At the same time, 1.11.5, 1.12.3 and 1.13.0 are the releases that fix the December 4 privilege-escalation vulnerability (CVE-2018-1002105). If you are upgrading because of that CVE, be careful: check whether IPVS is enabled, to avoid the upgrade breaking cluster networking. Since our production clusters run 1.11 with IPVS enabled, we only upgraded the production master nodes to 1.11.5 and left kube-proxy on 1.11.4 for now.

https://github.com/kubernetes/kubernetes/issues/71071 mentions PRs that address this problem, so we will have to track the patch releases that follow 1.11.5, 1.12.3 and 1.13.0.

Checking the kube-proxy log again, the problem described in https://github.com/kubernetes/kubernetes/issues/71071 is still present.
