Multi-node deployment overview
- In production, high availability has to be part of the design when building a Kubernetes platform. Kubernetes uses a central master management model: the master server schedules and manages the node servers. The previous article covered a single-node deployment (one master), in which the whole platform becomes unusable as soon as that master goes down. To achieve high availability, we therefore move to a multi-node deployment with multiple masters.
Load balancing overview
- With multiple masters running, if every request is always handled by the same master, that master slows down under load while the remaining masters sit idle, which wastes resources. A load balancer distributes the requests across all the masters.
- In this setup, nginx provides Layer 4 (TCP) load balancing and keepalived provides a floating virtual IP (VIP) for failover.
Lab deployment
Lab environment
- lb01: 192.168.80.19 (load balancer)
- lb02: 192.168.80.20 (load balancer)
- Master01: 192.168.80.12
- Master02: 192.168.80.11
- Node01: 192.168.80.13
- Node02: 192.168.80.14
Multi-master deployment
- On master01
```shell
[root@master01 kubeconfig]# scp -r /opt/kubernetes/ root@192.168.80.11:/opt    # copy the whole kubernetes directory to master02
The authenticity of host '192.168.80.11 (192.168.80.11)' can't be established.
ECDSA key fingerprint is SHA256:Ih0NpZxfLb+MOEFW8B+ZsQ5R8Il2Sx8dlNov632cFlo.
ECDSA key fingerprint is MD5:a9:ee:e5:cc:40:c7:9e:24:5b:c1:cd:c1:7b:31:42:0f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.80.11' (ECDSA) to the list of known hosts.
root@192.168.80.11's password:
token.csv                 100%   84    61.4KB/s   00:00
kube-apiserver            100%  929     1.6MB/s   00:00
kube-scheduler            100%   94   183.2KB/s   00:00
kube-controller-manager   100%  483   969.2KB/s   00:00
kube-apiserver            100%  184MB 106.1MB/s   00:01
kubectl                   100%   55MB  85.9MB/s   00:00
kube-controller-manager   100%  155MB 111.9MB/s   00:01
kube-scheduler            100%   55MB 115.8MB/s   00:00
ca-key.pem                100% 1675     2.7MB/s   00:00
ca.pem                    100% 1359     2.6MB/s   00:00
server-key.pem            100% 1679     2.5MB/s   00:00
server.pem                100% 1643     2.7MB/s   00:00
[root@master01 kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.80.11:/usr/lib/systemd/system    # copy the unit files for the three master components
root@192.168.80.11's password:
kube-apiserver.service            100%  282 274.4KB/s 00:00
kube-controller-manager.service   100%  317 403.5KB/s 00:00
kube-scheduler.service            100%  281 379.4KB/s 00:00
[root@master01 kubeconfig]# scp -r /opt/etcd/ root@192.168.80.11:/opt/    # IMPORTANT: master02 must have the etcd certificates or kube-apiserver will not start; reuse master01's existing etcd certificates
root@192.168.80.11's password:
etcd             100%  509 275.7KB/s 00:00
etcd             100%   18MB 95.3MB/s 00:00
etcdctl          100%   15MB 75.1MB/s 00:00
ca-key.pem       100% 1679 941.1KB/s 00:00
ca.pem           100% 1265   1.6MB/s 00:00
server-key.pem   100% 1675   2.0MB/s 00:00
server.pem       100% 1338   1.5MB/s 00:00
```
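Since the apiserver will silently fail on a corrupt certificate, it can be worth verifying the copies. A minimal sketch of the idea, not part of the original write-up: a local `cp` stands in for `scp` so it can run on one machine, and the file name is made up. On the real hosts you would run `sha256sum` on master01 and master02 and compare the output.

```shell
# Hypothetical integrity check after copying certificates between masters.
src=$(mktemp -d); dst=$(mktemp -d)
echo "dummy certificate data" > "$src/ca.pem"
cp "$src/ca.pem" "$dst/ca.pem"    # stands in for: scp ca.pem root@192.168.80.11:/opt/etcd/ssl/
a=$(sha256sum "$src/ca.pem" | awk '{print $1}')
b=$(sha256sum "$dst/ca.pem" | awk '{print $1}')
[ "$a" = "$b" ] && echo "checksums match"
```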
- On master02
```shell
[root@master02 ~]# systemctl stop firewalld.service    # stop the firewall
[root@master02 ~]# setenforce 0                        # put SELinux in permissive mode
[root@master02 ~]# vim /opt/kubernetes/cfg/kube-apiserver    # edit the apiserver options
...
--etcd-servers=https://192.168.80.12:2379,https://192.168.80.13:2379,https://192.168.80.14:2379
--bind-address=192.168.80.11          # change to master02's IP
--secure-port=6443
--advertise-address=192.168.80.11     # change to master02's IP
--allow-privileged=true
--service-cluster-ip-range=10.0.0.0/24
...
:wq
[root@master02 ~]# systemctl start kube-apiserver.service     # start the apiserver
[root@master02 ~]# systemctl enable kube-apiserver.service    # enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master02 ~]# systemctl start kube-controller-manager.service     # start the controller-manager
[root@master02 ~]# systemctl enable kube-controller-manager.service    # enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master02 ~]# systemctl start kube-scheduler.service     # start the scheduler
[root@master02 ~]# systemctl enable kube-scheduler.service    # enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master02 ~]# vim /etc/profile    # add the kubernetes binaries to PATH
...
export PATH=$PATH:/opt/kubernetes/bin/
:wq
[root@master02 ~]# source /etc/profile    # reload the profile
[root@master02 ~]# kubectl get node       # list the nodes
NAME            STATUS   ROLES    AGE    VERSION
192.168.80.13   Ready    <none>   146m   v1.12.3
192.168.80.14   Ready    <none>   144m   v1.12.3
# master02 sees the cluster - the multi-master setup works
```
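The two address fields could also be patched non-interactively instead of in vim. A minimal sketch, with assumptions: the sample file below stands in for the real `/opt/kubernetes/cfg/kube-apiserver`, and the sed patterns assume the flags appear exactly as shown above.

```shell
# Hypothetical non-interactive edit of the apiserver options on master02.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
--bind-address=192.168.80.12
--secure-port=6443
--advertise-address=192.168.80.12
EOF
# Rewrite only the bind/advertise addresses to master02's IP.
sed -i -e 's/--bind-address=192\.168\.80\.12/--bind-address=192.168.80.11/' \
       -e 's/--advertise-address=192\.168\.80\.12/--advertise-address=192.168.80.11/' "$cfg"
grep -- '-address=' "$cfg"    # both address lines now carry master02's IP
```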
Load balancer deployment
- On both lb01 and lb02 (same steps)
```shell
[root@lb01 ~]# systemctl stop firewalld.service
[root@lb01 ~]# setenforce 0
[root@lb01 ~]# vim /etc/yum.repos.d/nginx.repo    # configure the nginx yum repository
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
:wq
[root@lb01 yum.repos.d]# yum list    # refresh the yum metadata
Loaded plugins: fastestmirror
base   | 3.6 kB 00:00:00
extras | 2.9 kB 00:00:00
...
[root@lb01 yum.repos.d]# yum install nginx -y    # install nginx
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.163.com
...
[root@lb01 yum.repos.d]# vim /etc/nginx/nginx.conf    # edit the nginx configuration
...
events {
    worker_connections 1024;
}
stream {                                  # add a Layer-4 (stream) proxy block
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.80.12:6443;        # master01 - mind the IP addresses
        server 192.168.80.11:6443;        # master02
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ...
}
:wq
[root@lb01 yum.repos.d]# systemctl start nginx    # start nginx; you can test it from a browser
[root@lb01 yum.repos.d]# yum install keepalived -y    # install keepalived
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.163.com
...
[root@lb01 yum.repos.d]# mount.cifs //192.168.80.2/shares/K8S/k8s02 /mnt/    # mount the host share
Password for root@//192.168.80.2/shares/K8S/k8s02:
[root@lb01 yum.repos.d]# cp /mnt/keepalived.conf /etc/keepalived/keepalived.conf    # overwrite the stock config with the prepared one
cp: overwrite '/etc/keepalived/keepalived.conf'? yes
[root@lb01 yum.repos.d]# vim /etc/keepalived/keepalived.conf    # edit the configuration
...
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    # mind the script location
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33           # mind the interface name
    virtual_router_id 51      # VRRP router ID; must be unique per instance
    priority 100              # priority; set 90 on the backup server
    advert_int 1              # VRRP advertisement interval, 1 s by default
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.100/24     # the floating (virtual) IP
    }
    track_script {
        check_nginx
    }
}
# delete everything below this instance
:wq
```
- On lb02, adjust the keepalived configuration
```shell
[root@lb02 ~]# vim /etc/keepalived/keepalived.conf
...
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    # mind the script location
}
vrrp_instance VI_1 {
    state BACKUP              # change the role to BACKUP
    interface ens33           # mind the interface name
    virtual_router_id 51      # VRRP router ID; must be unique per instance
    priority 90               # lower priority than the master
    advert_int 1              # VRRP advertisement interval, 1 s by default
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.100/24     # the same virtual IP
    }
    track_script {
        check_nginx
    }
}
# delete everything below this instance
:wq
```
- On both lb01 and lb02 (same steps)
```shell
[root@lb01 yum.repos.d]# vim /etc/nginx/check_nginx.sh    # health-check script: stop keepalived when nginx is down
#!/bin/bash
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
:wq
[root@lb01 yum.repos.d]# chmod +x /etc/nginx/check_nginx.sh    # make the script executable
[root@lb01 yum.repos.d]# systemctl start keepalived            # start keepalived
```
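The `egrep -cv "grep|$$"` filter counts nginx processes while excluding both the grep itself and the script's own shell (`$$`). The same pipeline can be sanity-checked against a process name that is certainly not running; the name below is made up for the demonstration.

```shell
# Count processes matching a name, excluding the grep and this shell itself,
# exactly as check_nginx.sh does. "|| true" guards against egrep's non-zero
# exit status when nothing matches.
count_procs() {
    ps -ef | grep "$1" | egrep -cv "grep|$$"
}
count=$(count_procs "no_such_daemon_xyz" || true)
echo "process count: $count"    # 0 here, since no such process exists
```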
- On lb01
```shell
[root@lb01 ~]# ip a    # check the addresses
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33    # the virtual IP is in place
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
```
- On lb02
```shell
[root@lb02 ~]# ip a    # check the addresses
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
# no virtual IP here - lb02 is the standby
```
- Stop nginx on lb01, then check the addresses on lb02 again to confirm the virtual IP has floated over
```shell
[root@lb01 ~]# systemctl stop nginx.service
[root@lb01 nginx]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
[root@lb02 ~]# ip a    # check on lb02
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:7d:c7:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.20/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33    # the virtual IP has moved to lb02
       valid_lft forever preferred_lft forever
    inet6 fe80::cd8b:b80c:8deb:251f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
```
- Restart nginx and keepalived on lb01
```shell
[root@lb01 nginx]# systemctl start nginx
[root@lb01 nginx]# systemctl start keepalived.service
[root@lb01 nginx]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e9:04:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.19/24 brd 192.168.80.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.80.100/24 scope global secondary ens33    # lb01 preempts the virtual IP back because it has the higher priority
       valid_lft forever preferred_lft forever
    inet6 fe80::c3ab:d7ec:1adf:c5df/64 scope link
       valid_lft forever preferred_lft forever
```
- Modify the configuration files on every node
```shell
[root@node01 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
...
server: https://192.168.80.100:6443
...
:wq
[root@node01 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
...
server: https://192.168.80.100:6443
...
:wq
[root@node01 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
...
server: https://192.168.80.100:6443
...
:wq
[root@node01 ~]# systemctl restart kubelet.service       # restart the services
[root@node01 ~]# systemctl restart kube-proxy.service
```
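Editing the three kubeconfig files can also be scripted. A minimal sketch under two assumptions: each file carries exactly one `server:` line, and the sample files below stand in for the real ones under `/opt/kubernetes/cfg`.

```shell
# Hypothetical batch edit: point each node kubeconfig at the virtual IP.
dir=$(mktemp -d)    # stands in for /opt/kubernetes/cfg on a node
for f in bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig; do
    printf 'server: https://192.168.80.12:6443\n' > "$dir/$f"
done
# Retarget every server: line at the VIP.
for f in "$dir"/*.kubeconfig; do
    sed -i 's#server: https://.*:6443#server: https://192.168.80.100:6443#' "$f"
done
grep -h 'server:' "$dir"/*.kubeconfig    # all three now point at 192.168.80.100
```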
- Check the access log on lb01
```shell
[root@lb01 nginx]# tail /var/log/nginx/k8s-access.log
192.168.80.13 192.168.80.12:6443 - [11/Feb/2020:15:23:52 +0800] 200 1118
192.168.80.13 192.168.80.11:6443 - [11/Feb/2020:15:23:52 +0800] 200 1119
192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1119
192.168.80.14 192.168.80.12:6443 - [11/Feb/2020:15:26:01 +0800] 200 1120
```
The multi-node deployment and load balancing setup is complete.