Deploying a Kubernetes Cluster on CentOS 7

Preparation before installing the k8s cluster:
Network environment:
Node    Hostname      IP
Master  k8s_master    192.168.3.216
Node1   k8s_client1   192.168.3.217
Node2   k8s_client2   192.168.3.219

CentOS 7 version:
[root@k8s_master ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

Disable firewalld (on all three hosts):
systemctl stop firewalld
systemctl disable firewalld

Install base packages on all three hosts (the package is ntp; the service it provides is ntpd):
[root@k8s_master ~]# yum -y update
[root@k8s_master ~]# yum -y install net-tools wget vim ntp
[root@k8s_master ~]# systemctl enable ntpd
[root@k8s_master ~]# systemctl start ntpd
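
To verify time synchronization is actually working, ntpq (shipped with the ntp package) can query the peers the daemon is using; an asterisk in the first column marks the selected upstream server:
[root@k8s_master ~]# ntpq -p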

Set the hostname on each of the three hosts:
Master
hostnamectl --static set-hostname k8s_master
Node1
hostnamectl --static set-hostname k8s_client1
Node2
hostnamectl --static set-hostname k8s_client2

Set up /etc/hosts; run this on each of the three hosts (appending, so the existing localhost entries are preserved):
cat <<EOF >> /etc/hosts
192.168.3.217 k8s_client1
192.168.3.219 k8s_client2
192.168.3.216 k8s_master
EOF
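
As a quick sanity check, each name should now resolve and answer from every host (assuming ICMP is not blocked on your network):
[root@k8s_master ~]# for h in k8s_master k8s_client1 k8s_client2; do ping -c 1 $h; done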

Deploying the Master:
Install the etcd service:
[root@k8s_master ~]# yum -y install etcd

Edit the configuration file /etc/etcd/etcd.conf:
[root@k8s_master ~]# cat /etc/etcd/etcd.conf | grep -v "^#"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_NAME="master"
ETCD_ADVERTISE_CLIENT_URLS="http://k8s_master:2379,http://k8s_master:4001"

Enable it at boot, start it, and verify its status:
[root@k8s_master ~]# systemctl enable etcd
[root@k8s_master ~]# systemctl start etcd

[root@k8s_master ~]# etcdctl -C http://k8s_master:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
cluster is healthy
[root@k8s_master ~]# etcdctl -C http://k8s_master:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
cluster is healthy

Install the docker service:
[root@k8s_master ~]# yum -y install docker
Enable it at boot and start it:
[root@k8s_master ~]# systemctl enable docker
[root@k8s_master ~]# systemctl start docker
Check the docker version:
[root@k8s_master ~]# docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64

Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64

Install kubernetes:
[root@k8s_master ~]# yum -y install kubernetes

The following components need to run on the kubernetes master:
    Kubernetes API Server
    Kubernetes Controller Manager
    Kubernetes Scheduler

Edit the apiserver service configuration file:
[root@k8s_master ~]# cat /etc/kubernetes/apiserver | grep -v "^#"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.3.216:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""

Edit the config file:
[root@k8s_master ~]# cat /etc/kubernetes/config | grep -v "^#"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.3.216:8080"

Enable at boot and start the services:
[root@k8s_master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
[root@k8s_master ~]# systemctl start kube-apiserver kube-controller-manager kube-scheduler
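
A quick way to confirm the API server is answering on the insecure port configured above is to query its /version endpoint:
[root@k8s_master ~]# curl -s http://192.168.3.216:8080/version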

Check the listening ports:
[root@k8s_master ~]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      973/etcd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      970/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1184/master
tcp6       0      0 :::6443                 :::*                    LISTEN      1253/kube-apiserver
tcp6       0      0 :::2379                 :::*                    LISTEN      973/etcd
tcp6       0      0 :::10251                :::*                    LISTEN      675/kube-scheduler
tcp6       0      0 :::10252                :::*                    LISTEN      674/kube-controller
tcp6       0      0 :::8080                 :::*                    LISTEN      1253/kube-apiserver
tcp6       0      0 :::22                   :::*                    LISTEN      970/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1184/master
tcp6       0      0 :::4001                 :::*                    LISTEN      973/etcd

Deploying the Nodes:
Install docker
Follow the same steps as on the Master.
Install kubernetes
Follow the same steps as on the Master.
Configure and start kubernetes
The following components need to run on each node:
kubelet, kube-proxy

Configure the following on each Node host:
config:
[root@k8s_client1 ~]# cat /etc/kubernetes/config | grep -v "^#"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.3.216:8080"
kubelet:
[root@k8s_client1 ~]# cat /etc/kubernetes/kubelet | grep -v "^#"
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=192.168.3.217"
KUBELET_API_SERVER="--api-servers=http://192.168.3.216:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

Enable at boot and start the services:
[root@k8s_client1 ~]# systemctl enable kubelet kube-proxy
[root@k8s_client1 ~]# systemctl start kubelet kube-proxy
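
The kubelet exposes a local health endpoint on port 10248 (it also shows up in the port listing below); a plain curl should return ok once the service is up:
[root@k8s_client1 ~]# curl http://127.0.0.1:10248/healthz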

Check the listening ports:
[root@k8s_client1 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      942/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2258/master
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      17932/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      17728/kube-proxy
tcp6       0      0 :::10250                :::*                    LISTEN      17932/kubelet
tcp6       0      0 :::10255                :::*                    LISTEN      17932/kubelet
tcp6       0      0 :::22                   :::*                    LISTEN      942/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      2258/master
tcp6       0      0 :::4194                 :::*                    LISTEN      17932/kubelet

On the Master, check the cluster's nodes and their status:
[root@k8s_master ~]# kubectl get node
NAME            STATUS     AGE
127.0.0.1       NotReady   1d
192.168.3.217   Ready      1d
192.168.3.219   Ready      1d
[root@k8s_master ~]# kubectl -s http://k8s_master:8080 get node
NAME            STATUS     AGE
127.0.0.1       NotReady   1d
192.168.3.217   Ready      1d
192.168.3.219   Ready      1d
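
The NotReady 127.0.0.1 entry appears to be a stale self-registration on the master (this guide never starts a kubelet there); if that is the case on your cluster too, it can be removed so only the real nodes remain:
[root@k8s_master ~]# kubectl delete node 127.0.0.1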

The kubernetes cluster is now set up, but flannel still needs to be installed.
flannel is an overlay-network tool from CoreOS that solves cross-host communication between Docker containers in a cluster. The basic idea: reserve a network range in advance, give each host a slice of it, and assign every container a distinct IP from its host's slice. All containers then behave as if they were on one flat, directly connected network, while underneath the packets are encapsulated and forwarded over UDP/VXLAN.
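
For reference, that "reserved network range" is just a JSON document flannel reads from etcd. A slightly fuller config than the minimal one written later in this guide might look like the sketch below (SubnetLen and Backend are optional flannel settings; vxlan is shown only as an illustration, while this guide ends up on the default udp backend, hence the flannel0 device seen later):
etcdctl set /atomic.io/network/config '{"Network":"10.8.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'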

Install flannel on the Master and on every Node:
[root@k8s_master ~]# yum -y install flannel

Configure flannel:
On the Master and every Node, edit /etc/sysconfig/flanneld.

Master:
[root@k8s_master ~]# cat /etc/sysconfig/flanneld | grep -v "^#"
FLANNEL_ETCD_ENDPOINTS="http://192.168.3.216:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"

Node:
[root@k8s_client1 ~]# cat /etc/sysconfig/flanneld | grep -v "^#"
FLANNEL_ETCD_ENDPOINTS="http://192.168.3.216:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"

Register the network in etcd (run once, on the Master; the key path must match FLANNEL_ETCD_PREFIX above, and the range matches the 10.8.x.x addresses seen in the output further below):
[root@k8s_master ~]# etcdctl mk /atomic.io/network/config '{"Network":"10.8.0.0/16"}'
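
Reading the key back should echo the JSON that was just written:
[root@k8s_master ~]# etcdctl get /atomic.io/network/config
{"Network":"10.8.0.0/16"}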

Enable and start the service on the Master and every Node:
[root@k8s_master ~]# systemctl enable flanneld
[root@k8s_master ~]# systemctl start flanneld
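
Once flanneld is up on every host, each host's subnet lease can be listed from etcd under the prefix configured earlier (the exact leases will differ from cluster to cluster):
[root@k8s_master ~]# etcdctl ls /atomic.io/network/subnets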

Restart the dependent services on the Master and Nodes:
Master:
for SERVICES in docker kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $SERVICES ; done

Node:
[root@k8s_client1 ~]# systemctl restart kube-proxy kubelet docker

Inspect the flannel network:
Master node:
[root@k8s_master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:98:3b:d4 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.216/24 brd 192.168.3.255 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe98:3bd4/64 scope link
valid_lft forever preferred_lft forever
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none
inet 10.8.57.0/16 scope global flannel0
valid_lft forever preferred_lft forever
inet6 fe80::3578:6e81:8dc9:ed82/64 scope link flags 800
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:8b:7c:fd:8d brd ff:ff:ff:ff:ff:ff
inet 10.8.57.1/24 scope global docker0
valid_lft forever preferred_lft forever

Node:
[root@k8s_client1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:98:65:e0 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.217/24 brd 192.168.3.255 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe98:65e0/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:23:4b:85:6f brd ff:ff:ff:ff:ff:ff
inet 10.8.6.1/24 scope global docker0
valid_lft forever preferred_lft forever
9: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none
inet 10.8.6.0/16 scope global flannel0
valid_lft forever preferred_lft forever
inet6 fe80::827:f63e:34ee:1f8e/64 scope link flags 800
valid_lft forever preferred_lft forever
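
As a final check of the overlay, a container subnet on one host should now be reachable from the other. Using the addresses shown above (they will differ on your cluster), the Node's docker0 bridge can be pinged from the Master across flannel:
[root@k8s_master ~]# ping -c 3 10.8.6.1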
