Deploying a Production-Grade Highly Available Kubernetes Cluster with kubeadm



Software environment:

Software            Version
Operating system    CentOS 7.8 x86_64 (minimal)
Docker              20.10 (CE)
Kubernetes          1.20

Server planning:

Role                           IP                       Additional components
k8s-master1                    192.168.40.180           docker, etcd, keepalived
k8s-master2                    192.168.40.181           docker, etcd, keepalived
k8s-master3                    192.168.40.182           docker, etcd, keepalived
Load balancer external IP      192.168.40.188 (VIP)


Architecture diagram: (figure omitted; the three masters run keepalived and share the VIP 192.168.40.188 in front of kube-apiserver)

Environment preparation (all nodes):

# Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary

# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent

# Set the hostname according to the plan
hostnamectl set-hostname <hostname>
bash

# Add hosts entries on the masters
cat >> /etc/hosts << EOF
192.168.40.180 k8s-master1
192.168.40.181 k8s-master2
192.168.40.182 k8s-master3
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

# Add the Alibaba Cloud Kubernetes YUM repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker CE YUM repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install base packages
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet iptables-services

# Disable iptables
systemctl stop iptables && systemctl disable iptables

# Flush the firewall rules
iptables -F

# Configure passwordless SSH between the nodes
ssh-keygen
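ssh-keygen only generates the key pair; the public key still has to be copied to the other masters before passwordless logins work. A minimal sketch using the hostnames from the plan above (run on each master, entering the root password once per host):

for host in k8s-master1 k8s-master2 k8s-master3; do
  ssh-copy-id root@$host
done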

Enable IPVS

cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in $ipvs_modules; do
  # Only load modules that actually exist on this kernel
  /sbin/modinfo -F filename $kernel_module > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe $kernel_module
  fi
done
EOF

Copy ipvs.modules into /etc/sysconfig/modules/ on k8s-master1, k8s-master2, and k8s-master3, for example with scp as shown below.
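A minimal way to distribute the file from k8s-master1 to the other two masters, assuming the passwordless SSH configured earlier:

for host in k8s-master2 k8s-master3; do
  scp /etc/sysconfig/modules/ipvs.modules root@$host:/etc/sysconfig/modules/
done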

  • Run the script

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

Install Docker

yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io  -y

systemctl start docker && systemctl enable docker && systemctl status docker

Configure Docker registry mirrors and the cgroup driver

cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com","https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload && systemctl restart docker
systemctl status docker
docker info

# Switch the Docker cgroup driver to systemd (the default is cgroupfs). kubelet in this setup uses systemd, and the two drivers must match.

Install the packages needed to initialize Kubernetes

yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

systemctl enable kubelet && systemctl start kubelet

systemctl status kubelet

The kubelet status above is not "running" yet; that is expected at this point and can be ignored. It will become healthy once the Kubernetes control-plane components come up.

Install keepalived

yum install  keepalived -y

  • master1 (/etc/keepalived/keepalived.conf)

cat > /etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived

global_defs {
    smtp_server localhost
}

vrrp_script chk_kubeapiserver {
    script "/etc/keepalived/keepalived_checkkubeapiserver.sh"
    interval 3
    timeout 1
    rise 3
    fall 3
}

vrrp_instance kubernetes-internal {
    state BACKUP
    interface ens32
    virtual_router_id 171
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.40.188
    }
    track_script {
        chk_kubeapiserver
    }
}
EOF

  • master2

cat > /etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived

global_defs {
    smtp_server localhost
}

vrrp_script chk_kubeapiserver {
    script "/etc/keepalived/keepalived_checkkubeapiserver.sh"
    interval 3
    timeout 1
    rise 3
    fall 3
}

vrrp_instance kubernetes-internal {
    state BACKUP
    interface ens33
    virtual_router_id 171
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.40.188
    }
    track_script {
        chk_kubeapiserver
    }
}
EOF

  • master3

cat > /etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived

global_defs {
    smtp_server localhost
}

vrrp_script chk_kubeapiserver {
    script "/etc/keepalived/keepalived_checkkubeapiserver.sh"
    interval 3
    timeout 1
    rise 3
    fall 3
}

vrrp_instance kubernetes-internal {
    state BACKUP
    interface ens32
    virtual_router_id 171
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.40.188
    }
    track_script {
        chk_kubeapiserver
    }
}
EOF

  • API server health-check script

cat  > /etc/keepalived/keepalived_checkkubeapiserver.sh <<EOF
#!/bin/bash
sudo ss -ltn|grep ":6443 " > /dev/null

EOF
chmod  644 /etc/keepalived/keepalived.conf   && chmod 700 /etc/keepalived/keepalived_checkkubeapiserver.sh
systemctl daemon-reload && systemctl start keepalived && systemctl enable  keepalived
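Once keepalived is running on all three masters, the VIP should be held by exactly one of them (k8s-master1 while it is healthy, since it has the highest priority). A quick check on each node:

# The VIP 192.168.40.188 should appear on only one master at a time
ip addr | grep 192.168.40.188
systemctl status keepalived --no-pager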



Initialize the Kubernetes cluster with kubeadm

Create the kubeadm-config.yaml file on k8s-master1:

[root@k8s-master1 ~]# cd /root/
[root@k8s-master1 ~]# vim kubeadm-config.yaml   # or generate it with the heredoc below

cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
controlPlaneEndpoint: 192.168.40.188:6443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.40.180
  - 192.168.40.181
  - 192.168.40.182
  - 192.168.40.188
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.10.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF

Run the initialization:

kubeadm init --config kubeadm-config.yaml


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
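A quick sanity check after the init finishes (the node will stay NotReady until the Calico network plugin is installed later):

kubectl get nodes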


Add the remaining masters

Run the following script on k8s-master1 to distribute the certificates to k8s-master2 and k8s-master3:

cat > cert-main-master.sh <<'EOF'
#!/bin/bash
USER=root
CONTROL_PLANE_IPS="192.168.40.181 192.168.40.182"
for host in $CONTROL_PLANE_IPS; do
  scp /etc/kubernetes/pki/ca.crt "$USER"@$host:
  scp /etc/kubernetes/pki/ca.key "$USER"@$host:
  scp /etc/kubernetes/pki/sa.key "$USER"@$host:
  scp /etc/kubernetes/pki/sa.pub "$USER"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.crt "$USER"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.key "$USER"@$host:
  scp /etc/kubernetes/pki/etcd/ca.crt "$USER"@$host:etcd-ca.crt
  # Skip the next line if you are using external etcd
  scp /etc/kubernetes/pki/etcd/ca.key "$USER"@$host:etcd-ca.key
done
EOF

sh ./cert-main-master.sh

Run the cert-other-master.sh script on k8s-master2 and k8s-master3 to move the certificates into the expected directories:

cat > cert-other-master.sh <<'EOF'
#!/bin/bash
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /$USER/ca.crt /etc/kubernetes/pki/
mv /$USER/ca.key /etc/kubernetes/pki/
mv /$USER/sa.pub /etc/kubernetes/pki/
mv /$USER/sa.key /etc/kubernetes/pki/
mv /$USER/front-proxy-ca.crt /etc/kubernetes/pki/
mv /$USER/front-proxy-ca.key /etc/kubernetes/pki/
mv /$USER/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Skip the next line if you are using external etcd
mv /$USER/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
EOF

sh cert-other-master.sh

On k8s-master1, print the command for joining new nodes:

kubeadm token create --print-join-command

Append --control-plane to the printed command.

A join command with --control-plane adds the node as another control-plane (master) member; without it, the node joins as a worker.

Once the certificates have been copied over, run the following on k8s-master2 and k8s-master3 (use the token and hash printed by your own cluster) to join them as control-plane nodes:

kubeadm join 192.168.40.188:6443 --token 2nlfta.1xlp02ux6drki244     --discovery-token-ca-cert-hash sha256:862edf59b13a7a819c461d398ae1953e4de07cd1d1bbfe5463f2b0308ecec366     --control-plane
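The same printed command without --control-plane would join a worker node instead. After both masters have joined, verify that all three control-plane nodes are registered:

kubectl get nodes -o wide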



View the pods in the cluster
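For example, on any of the masters:

kubectl get nodes
kubectl get pods -n kube-system -o wide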


Note: the nodes are all in the NotReady state at this point because no network (CNI) plugin has been installed yet.


Install the Kubernetes network component: Calico

curl  https://docs.projectcalico.org/v3.20/manifests/calico.yaml -o calico.yaml


  • Check the current Pod subnet

kubectl get cm kubeadm-config -n kube-system -o yaml | grep -i podsub


  • Edit calico.yaml

Set the Pod CIDR (CALICO_IPV4POOL_CIDR) to match your planned Pod network (10.244.0.0/16 in this guide).



Choose the operating mode via CALICO_IPV4POOL_IPIP: Never (pure BGP), Always (IPIP encapsulation), or CrossSubnet (BGP within a subnet, IPIP across subnets). The edited section is sketched below.
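Both settings are environment variables on the calico-node container in the downloaded calico.yaml; CALICO_IPV4POOL_CIDR is commented out by default. With the podSubnet used in this guide, the edited section would look roughly like this:

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
- name: CALICO_IPV4POOL_IPIP
  value: "Always"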


  • Apply the manifest

kubectl apply -f calico.yaml
kubectl get pods -n kube-system


  • Remove the taints. All three nodes are masters and carry the default NoSchedule taint, so workloads could not be scheduled otherwise.

[root@k8s-master1 ~]# kubectl describe nodes  | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
Taints: node-role.kubernetes.io/master:NoSchedule
Taints: node-role.kubernetes.io/master:NoSchedule


kubectl taint node k8s-master1 node-role.kubernetes.io/master:NoSchedule-
kubectl taint node k8s-master2 node-role.kubernetes.io/master:NoSchedule-
kubectl taint node k8s-master3 node-role.kubernetes.io/master:NoSchedule-



  • Test the network with the following busybox Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: busybox
  name: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  strategy: {}
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - image: busybox:1.28
        name: busybox
        command: ["/bin/sh","-c","sleep 36000"]


# If the pod can reach the outside network as shown above, the Calico network plugin has been installed correctly.

Create a pod in the Kubernetes cluster and verify that it runs correctly:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

Access URL: http://NodeIP:Port

  • Test that CoreDNS works (see the check below)

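A minimal check from the busybox pod created above ($POD from the previous step): resolving the kubernetes service should return an address inside the serviceSubnet (10.10.0.0/16) configured earlier.

kubectl exec -it $POD -- nslookup kubernetes.default.svc.cluster.local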

  • Test that pods can run on all three masters
  • Force the pods to be spread across different nodes with a required (hard) pod anti-affinity rule:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: nginx
      containers:
      - name: nginx
        image: nginx
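Apply it and confirm that the three replicas are spread across the three masters (the file name nginx-anti.yaml is an assumption):

kubectl apply -f nginx-anti.yaml
kubectl get pods -l app=nginx -o wide   # the NODE column should list k8s-master1, k8s-master2 and k8s-master3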


Deploy the Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

By default the Dashboard is only reachable from inside the cluster. Change the Service type to NodePort to expose it externally:

vi recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
...

$ kubectl apply -f recommended.yaml
$ kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6b4884c9d5-gl8nr   1/1     Running   0          13m
kubernetes-dashboard-7f99b75bf4-89cds        1/1     Running   0          13m


  • Access URL: https://NodeIP:30001 (https is required)
  • Create a service account and bind it to the built-in cluster-admin cluster role

# Create the user
kubectl create serviceaccount dashboard-admin -n kube-system
# Grant cluster-admin to the user
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Get the user's token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

  • Log in to the Dashboard with the token from the output.


Problem 1

  • Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused


This error typically shows up when checking kube-scheduler (port 10251) and kube-controller-manager (port 10252) health, because Kubernetes 1.20 disables their insecure ports with --port=0. It can be handled as follows on each control-plane node:

vim /etc/kubernetes/manifests/kube-scheduler.yaml
Make the following changes:
  - change --bind-address=127.0.0.1 to the node's own IP (for example 192.168.40.180 on k8s-master1)
  - change the host under the httpGet: probes from 127.0.0.1 to the same node IP
  - delete the --port=0 line

vim /etc/kubernetes/manifests/kube-controller-manager.yaml
Make the same changes:
  - change --bind-address=127.0.0.1 to the node's own IP
  - change the host under the httpGet: probes from 127.0.0.1 to the same node IP
  - delete the --port=0 line

After modifying the manifests, restart kubelet on every node:
systemctl restart kubelet
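After kubelet restarts and the static pods come back up, the components should report healthy again. kubectl get cs is deprecated but still available in 1.20 and is a quick way to confirm:

kubectl get cs
kubectl get pods -n kube-system | grep -E 'kube-scheduler|kube-controller-manager'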
