Offline Deployment of a Kubernetes 1.19.0 Cluster (Code Snippets)


0. Nodes:
192.168.160.129  k8s-master (master)
192.168.160.130  k8s-node1  (worker)
192.168.160.131  k8s-node2  (worker)

1. Environment preparation

1.1 Disable the firewall, SELinux, and swap

setenforce 0
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config
systemctl stop firewalld
systemctl disable firewalld
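
These steps have to be carried out on all three nodes. Once passwordless SSH is configured (section 1.2 below), the same commands could be pushed to the other nodes with a small loop, for example (node IPs as used throughout this article):

[root@k8s-master ~]# for node in 192.168.160.130 192.168.160.131; do ssh root@$node "setenforce 0; sed -i 's/=enforcing/=disabled/g' /etc/selinux/config; systemctl stop firewalld; systemctl disable firewalld"; done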

1.2 Set up passwordless SSH

Generate the key pair (on the master):
sed -i '35cStrictHostKeyChecking no'  /etc/ssh/ssh_config
ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ""
cp /root/.ssh/id_rsa.pub  /root/.ssh/authorized_keys
Copy the keys to the other nodes:
[root@k8s-master ~]# scp -r /root/.ssh/ root@192.168.160.130:/root
[root@k8s-master ~]# scp -r /root/.ssh/ root@192.168.160.131:/root

Passwordless SSH test:
[root@k8s-master ~]# ssh 192.168.160.131
Last login: Fri May 21 10:04:36 2021 from 192.168.160.129
[root@k8s-node2 ~]# ssh 192.168.160.130
Last login: Fri May 21 09:12:10 2021 from 192.168.160.1
[root@k8s-node1 ~]# ssh 192.168.160.129
The authenticity of host '192.168.160.129 (192.168.160.129)' can't be established.
ECDSA key fingerprint is SHA256:FSe5JBJyY0olAkh+sfW3uOj1fQ+6eCXR4F5meZLvrp4.
ECDSA key fingerprint is MD5:50:44:e3:e2:35:5d:7f:68:9e:7e:63:b7:d4:e6:dd:6c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.160.129' (ECDSA) to the list of known hosts.
Last login: Fri May 21 09:12:31 2021 from 192.168.160.1
[root@k8s-master ~]#

1.3 Configure hostname resolution

# Set the hostname on each node: hostnamectl set-hostname HOSTNAME
# Configure hostname resolution:
cat > /etc/hosts << QQQ
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.160.129  k8s-master
192.168.160.130  k8s-node1
192.168.160.131  k8s-node2
QQQ
# Copy the hosts file to the other nodes
[root@k8s-master ~]# scp /etc/hosts 192.168.160.130:/etc
[root@k8s-master ~]# scp /etc/hosts 192.168.160.131:/etc

1.4 Disable the swap partition

[root@k8s-master ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@k8s-master ~]# ssh 192.168.160.130 "swapoff -a && sysctl -w vm.swappiness=0"
vm.swappiness = 0
[root@k8s-master ~]# ssh 192.168.160.131 "swapoff -a && sysctl -w vm.swappiness=0"
vm.swappiness = 0
[root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@k8s-master ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu May 20 17:07:46 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=e22ac659-213c-476a-9ecb-6e6b9d5e7fba /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/cdrom /iso iso9660 defaults 0 0
[root@k8s-master ~]#
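
Note that the sed above only comments out the swap entry in the master's /etc/fstab; the same edit is needed on the other nodes, for example:

[root@k8s-master ~]# ssh 192.168.160.130 "sed -ri 's/.*swap.*/#&/' /etc/fstab"
[root@k8s-master ~]# ssh 192.168.160.131 "sed -ri 's/.*swap.*/#&/' /etc/fstab"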

Verify that swap is now off (e.g. free -m on each node should show the Swap line as all zeros).

1.5 (Optional) Configure a yum repository (if your company has its own yum repository, use that instead)

[root@k8s-master ~]# mkdir /etc/yum.repos.d/bak
[root@k8s-master ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
[root@k8s-master ~]# mkdir /iso
[root@k8s-master ~]# mount /dev/cdrom /iso
mount: /dev/sr0 is write-protected, mounting read-only
[root@k8s-master ~]# echo "/dev/cdrom /iso iso9660 defaults 0 0">>/etc/fstab
[root@k8s-master ~]# cat>/etc/yum.repos.d/iso.repo <<QQQ
> [iso]
> name=iso
> baseurl=file:///iso
> enabled=1
> gpgcheck=0
> QQQ
[root@k8s-master ~]# yum -y install vim net-tools unzip

1.6 Install Docker (the required RPM packages differ depending on the kernel version)

[root@k8s-master k8s]# scp -r /root/k8s 192.168.160.130:/root
[root@k8s-master k8s]# scp -r /root/k8s 192.168.160.131:/root
[root@k8s-master ~]# ls /root/k8s/docker/docker-rpm/
containerd.io-1.4.4-3.1.el7.x86_64.rpm                docker-scan-plugin-0.7.0-3.el7.x86_64.rpm
container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm  fuse3-libs-3.6.1-4.el7.x86_64.rpm
docker-ce-20.10.6-3.el7.x86_64.rpm                    fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
docker-ce-cli-20.10.6-3.el7.x86_64.rpm                slirp4netns-0.4.3-4.el7_8.x86_64.rpm
docker-ce-rootless-extras-20.10.6-3.el7.x86_64.rpm
[root@k8s-master ~]# cd /root/k8s/docker/docker-rpm
[root@k8s-master docker-rpm]# yum -y localinstall ./*
[root@k8s-master docker-rpm]# cd ..
[root@k8s-master docker]# ls
docker-rpm  docker-speed.sh
[root@k8s-master docker]# sh docker-speed.sh

        "exec-opts": ["native.cgroupdriver=systemd"]

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

# If there is no external network access, the registry-mirror settings in /etc/docker/daemon.json can be skipped
[root@k8s-master docker]# cat docker-speed.sh
#!/bin/bash
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
        "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
systemctl enable docker
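
docker-speed.sh only sets the cgroup driver here. If the hosts do happen to have Internet access, a registry mirror could additionally be configured in the same file; the following is only a sketch with an example mirror URL, not part of the original script:

cat >/etc/docker/daemon.json <<'EOF'
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker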

1.7 Make sure bridged traffic is passed through iptables (the kernel's built-in bridge netfilter)

[root@k8s-master ~]# echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-master ~]# echo 1 >/proc/sys/net/ipv4/ip_forward
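
These writes to /proc take effect immediately but do not survive a reboot. To persist them, the same settings could be placed in a sysctl drop-in file, for example (the file name is an assumption; br_netfilter must be loaded for the bridge setting to exist):

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << QQQ
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
QQQ
[root@k8s-master ~]# sysctl --system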

2. Deploy the master node

2.1 Install kubectl, kubeadm and kubelet, and enable kubelet at boot

[root@k8s-master k8s-rpm]# ls
conntrack-tools-1.4.4-7.el7.x86_64.rpm  kubectl-1.19.0-0.x86_64.rpm        libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
cri-tools-1.13.0-0.x86_64.rpm           kubelet-1.19.0-0.x86_64.rpm        libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
kubeadm-1.19.0-0.x86_64.rpm             kubernetes-cni-0.8.7-0.x86_64.rpm  libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
[root@k8s-master k8s-rpm]# yum localinstall -y ./*
…….
[root@k8s-master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

kubelet runs on every node in the cluster and is responsible for starting Pods and containers.
kubeadm is used to initialize (bootstrap) the cluster.
kubectl is the Kubernetes command-line tool: with kubectl you can deploy and manage applications, inspect resources, and create, delete and update components.

2.2 Initialize the cluster

2.2.1 Load the images:

[root@k8s-master k8s-images]# ll
total 1003112
-rw-r--r--. 1 root root  45365760 May 20 13:45 coredns-1.7.0.tar.gz
-rw-r--r--. 1 root root 225547264 May 20 13:45 dashboard-v2.0.1.tar.gz
-rw-r--r--. 1 root root 254629888 May 20 13:45 etcd-3.4.9-1.tar.gz
-rw-r--r--. 1 root root  65271296 May 20 13:45 flannel-v0.13.1-rc2.tar.gz
-rw-r--r--. 1 root root 120040960 May 20 13:45 kube-apiserver-v1.19.0.tar.gz
-rw-r--r--. 1 root root 112045568 May 20 13:45 kube-controller-manager-v1.19.0.tar.gz
-rw-r--r--. 1 root root 119695360 May 20 13:45 kube-proxy-v1.19.0.tar.gz
-rw-r--r--. 1 root root  46919168 May 20 13:45 kube-scheduler-v1.19.0.tar.gz
-rw-r--r--. 1 root root    692736 May 20 13:44 pause-3.2.tar.gz
[root@k8s-master ~]# docker load -i /root/k8s/k8s-images/kube-apiserver-v1.19.0.tar.gz
[root@k8s-master ~]# docker load -i /root/k8s/k8s-images/coredns-1.7.0.tar.gz
[root@k8s-master ~]# docker load -i /root/k8s/k8s-images/dashboard-v2.0.1.tar.gz
[root@k8s-master ~]# docker load -i /root/k8s/k8s-images/etcd-3.4.9-1.tar.gz
[root@k8s-master ~]# docker load -i /root/k8s/k8s-images/flannel-v0.13.1-rc2.tar.gz
[root@k8s-master ~]# docker load -i /root/k8s/k8s-images/kube-controller-manager-v1.19.0.tar.gz
[root@k8s-master ~]# docker load -i /root/k8s/k8s-images/kube-proxy-v1.19.0.tar.gz
[root@k8s-master ~]# docker load -i /root/k8s/k8s-images/kube-scheduler-v1.19.0.tar.gz
[root@k8s-master ~]# docker load -i /root/k8s/k8s-images/pause-3.2.tar.gz
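The nine docker load commands above are equivalent to a short loop over the image directory:

[root@k8s-master ~]# for img in /root/k8s/k8s-images/*.tar.gz; do docker load -i "$img"; done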

2.2.2 Re-tag the images and delete the old names

# Use kubeadm config images list to see which image names and versions need to be tagged

[root@k8s-master ~]# kubeadm config images list
I0521 13:38:41.768238   12611 version.go:252] remote version is much newer: v1.21.1; falling back to: stable-1.19
W0521 13:38:45.604893   12611 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.11
k8s.gcr.io/kube-controller-manager:v1.19.11
k8s.gcr.io/kube-scheduler:v1.19.11
k8s.gcr.io/kube-proxy:v1.19.11
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.9-1
k8s.gcr.io/coredns:1.7.0
# Re-tag the images to the names reported by kubeadm config images list:
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0 k8s.gcr.io/kube-apiserver:v1.19.11
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0 k8s.gcr.io/kube-controller-manager:v1.19.11
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0 k8s.gcr.io/kube-scheduler:v1.19.11
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0 k8s.gcr.io/kube-proxy:v1.19.11
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/k8sos/flannel:v0.13.1-rc2 quay.io/coreos/flannel:v0.11.0-amd64
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/etcd:3.4.9-1 k8s.gcr.io/etcd:3.4.9-1
[root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
[root@k8s-master ~]# docker tag kubernetesui/dashboard:v2.0.1  k8s.gcr.io/dashboard:v2.0.1
[root@k8s-master ~]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
quay.io/coreos/flannel               v0.11.0-amd64   60e169ce803f   3 months ago    64.3MB
k8s.gcr.io/kube-proxy                v1.19.11        bc9c328f379c   8 months ago    118MB
k8s.gcr.io/kube-apiserver            v1.19.11        1b74e93ece2f   8 months ago    119MB
k8s.gcr.io/kube-controller-manager   v1.19.11        09d665d529d0   8 months ago    111MB
k8s.gcr.io/kube-scheduler            v1.19.11        cbdc8369d8b1   8 months ago    45.7MB
k8s.gcr.io/etcd                      3.4.9-1         d4ca8726196c   10 months ago   253MB
k8s.gcr.io/coredns                   1.7.0           bfe3a36ebd25   11 months ago   45.2MB
k8s.gcr.io/dashboard                 v2.0.1          85d666cddd04   12 months ago   223MB
k8s.gcr.io/pause                     3.2             80d28bedfe5d   15 months ago   683kB

2.2.3 Initialize the cluster

[root@k8s-master ~]# kubeadm init --apiserver-advertise-address 192.168.160.129 --pod-network-cidr=10.244.0.0/16
...
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.160.129:6443 --token 8w1usi.xrk1kgpghbn7vo66 \
    --discovery-token-ca-cert-hash sha256:8d6937dc0c3174bbc7ff95d5c1b3cc487027007cc782522e63dd3d2ac7b45787
[root@k8s-master ~]#   mkdir -p $HOME/.kube
[root@k8s-master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]#
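
The join token printed above is only valid for 24 hours by default. If it has expired by the time a worker node joins, a fresh join command can be generated on the master:

[root@k8s-master ~]# kubeadm token create --print-join-command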

2.2.4 Deploy the pod network (flannel)

[root@k8s-master k8s-conf]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/flannel created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-master k8s-conf]# 

2.2.5 Fix the controller-manager and scheduler manifests

[root@k8s-master ~]# cd /etc/kubernetes/manifests/
[root@k8s-master manifests]# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
# The controller-manager and scheduler report Unhealthy because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/ set --port=0 by default; the fix is simply to comment out the corresponding --port line
[root@k8s-master manifests]# cat kube-controller-manager.yaml|grep port
#    - --port=0
    port: 10257
    port: 10257
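
The grep output above already shows the fixed state (the --port=0 line is commented out). One way to apply that edit to both manifests, shown here only as a sketch, is:

[root@k8s-master manifests]# sed -i 's/^    - --port=0/#    - --port=0/' kube-controller-manager.yaml kube-scheduler.yaml
# kubelet watches this directory and recreates the static pods automatically;
# systemctl restart kubelet can be used to force it if needed.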

2.2.6 Check the cluster

[root@k8s-master manifests]# kubectl get ns
NAME              STATUS   AGE
default           Active   40m
kube-node-lease   Active   40m
kube-public       Active   40m
kube-system       Active   40m
[root@k8s-master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master manifests]# kubectl get po -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-cxhbf              1/1     Running   0          41m
coredns-f9fd979d6-vcvrb              1/1     Running   0          41m
etcd-k8s-master                      1/1     Running   0          41m
kube-apiserver-k8s-master            1/1     Running   0          41m
kube-controller-manager-k8s-master   1/1     Running   0          38m
kube-flannel-ds-amd64-cchsr          1/1     Running   0          39m
kube-proxy-xz7p5                     1/1     Running   0          41m
kube-scheduler-k8s-master            1/1     Running   0          38m
[root@k8s-master manifests]#

3. Join the worker nodes to the cluster

[root@k8s-node1 ~]# cd k8s/k8s-images/
[root@k8s-node1 k8s-images]# docker load -i flannel.tar.gz
[root@k8s-node1 k8s-images]# docker load -i kube-proxy.tar.gz
[root@k8s-node1 k8s-images]# docker load -i pause.tar.gz
[root@k8s-node1 k8s-images]# docker images
REPOSITORY                   TAG             IMAGE ID       CREATED         SIZE
quay.io/coreos/flannel       v0.11.0-amd64   60e169ce803f   3 months ago    64.3MB
k8s.gcr.io/kube-proxy        v1.19.11        bc9c328f379c   8 months ago    118MB
k8s.gcr.io/pause             3.2             80d28bedfe5d   15 months ago   683kB
[root@k8s-node1 test]# kubeadm join 192.168.160.129:6443 --token 8w1usi.xrk1kgpghbn7vo66 \
>     --discovery-token-ca-cert-hash sha256:8d6937dc0c3174bbc7ff95d5c1b3cc487027007cc782522e63dd3d2ac7b45787
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
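
The pre-flight warning above points out that the kubelet service is not enabled on the node, so enable it, and then confirm the join from the master (repeat the same join procedure for k8s-node2):

[root@k8s-node1 ~]# systemctl enable kubelet
[root@k8s-master ~]# kubectl get nodes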

Note:

Images required on the master node:
k8s.gcr.io/kube-scheduler v1.19.3
k8s.gcr.io/kube-apiserver v1.19.3
k8s.gcr.io/kube-controller-manager v1.19.3
k8s.gcr.io/etcd 3.4.13-0
k8s.gcr.io/coredns 1.7.0
kubernetesui/dashboard v2.0.1
quay.io/coreos/flannel v0.13.0
k8s.gcr.io/kube-proxy v1.19.3
kubernetesui/metrics-scraper v1.0.4
k8s.gcr.io/pause 3.2

Images required on the worker nodes:
quay.io/coreos/flannel v0.13.0
k8s.gcr.io/kube-proxy v1.19.3
kubernetesui/metrics-scraper v1.0.4
k8s.gcr.io/pause 3.2

Required images and installation packages:

[root@k8s-master ~]# tree k8s
k8s
├── dashboard
│   ├── dashboard-v2.0.1.tar.gz
│   ├── dashboard.yaml
│   └── metrics-scraper-v1.0.4.tar.gz
├── docker
│   ├── docker-rpm
│   │   ├── containerd.io-1.4.4-3.1.el7.x86_64.rpm
│   │   ├── container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
│   │   ├── docker-ce-20.10.6-3.el7.x86_64.rpm
│   │   ├── docker-ce-cli-20.10.6-3.el7.x86_64.rpm
│   │   ├── docker-ce-rootless-extras-20.10.6-3.el7.x86_64.rpm
│   │   ├── docker-scan-plugin-0.7.0-3.el7.x86_64.rpm
│   │   ├── fuse3-libs-3.6.1-4.el7.x86_64.rpm
│   │   ├── fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
│   │   └── slirp4netns-0.4.3-4.el7_8.x86_64.rpm
│   └── docker-speed.sh
├── k8s-conf
│   └── kube-flannel.yml
├── k8s-images
│   ├── coredns-1.7.0.tar.gz
│   ├── etcd-3.4.9-1.tar.gz
│   ├── flannel-v0.11.0-amd64.tar.gz
│   ├── kube-apiserver-v1.19.11.tar.gz
│   ├── kube-controller-manager-v1.19.11.tar.gz
│   ├── kube-proxy-v1.19.11.tar.gz
│   ├── kube-scheduler-v1.19.11.tar.gz
│   └── pause-3.2.tar.gz
└── k8s-rpm
    ├── conntrack-tools-1.4.4-7.el7.x86_64.rpm
    ├── cri-tools-1.13.0-0.x86_64.rpm
    ├── kubeadm-1.19.0-0.x86_64.rpm
    ├── kubectl-1.19.0-0.x86_64.rpm
    ├── kubelet-1.19.0-0.x86_64.rpm
    ├── kubernetes-cni-0.8.7-0.x86_64.rpm
    ├── libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
    ├── libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
    ├── libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
    └── socat-1.7.3.2-2.el7.x86_64.rpm

6 directories, 32 files
The full k8s.tar bundle can be downloaded from:
Link: https://pan.baidu.com/s/1_LgbKOc8VT6VFi4G1HVqtA
Extraction code: n4j2

 
