OpenStack Installation and Configuration: Compute Node Configuration


    On a compute node the main things to configure are the nova and neutron agents; the controller node relies on the compute nodes to actually carry out the resource scheduling and configuration it drives. There is comparatively little to set up on each compute node, but in a real production environment the number of compute nodes can be very large, so an automation tool such as Ansible or Puppet becomes necessary. Without further ado, let's get on with the configuration.
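
The steps in this post are shown by hand on a single node; to push them to a fleet, Ansible ad-hoc commands are usually enough. A minimal sketch, assuming an inventory group named compute has already been defined and a finished nova.conf sits in the current directory (the group name, inventory layout and file ownership below are assumptions, not something this environment requires):

# /etc/ansible/hosts (hypothetical inventory)
# [compute]
# compute1 ansible_host=192.168.10.31
# compute2 ansible_host=192.168.10.32

# install the packages used later in this post on every compute node at once
ansible compute -m yum -a "name=chrony,python-openstackclient,openstack-nova-compute,openstack-neutron-linuxbridge,ebtables,ipset state=present"

# push a pre-rendered nova.conf and restart the compute service
ansible compute -m copy -a "src=./nova.conf dest=/etc/nova/nova.conf owner=root group=nova mode=0640"
ansible compute -m service -a "name=openstack-nova-compute state=restarted enabled=yes"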


Compute node basic configuration

[root@compute1 ~]# lscpu

Architecture:          x86_64

CPU op-mode(s):        32-bit, 64-bit

Byte Order:            Little Endian

CPU(s):                8

On-line CPU(s) list:   0-7

Thread(s) per core:    1

Core(s) per socket:    1

Socket(s):             8

NUMA node(s):          1

Vendor ID:             GenuineIntel

CPU family:            6

Model:                 44

Model name:            Westmere E56xx/L56xx/X56xx (Nehalem-C)

Stepping:              1

CPU MHz:               2400.084

BogoMIPS:              4800.16

Virtualization:        VT-x

Hypervisor vendor:     KVM

Virtualization type:   full

L1d cache:             32K

L1i cache:             32K

L2 cache:              4096K

NUMA node0 CPU(s):     0-7


[root@compute1 ~]# free -h

              total        used        free      shared  buff/cache   available

Mem:            15G        142M         15G        8.3M        172M         15G

Swap:            0B          0B          0B

[root@compute1 ~]# lsblk

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT

sr0              11:0    1  1024M  0 rom  

vda             252:0    0   400G  0 disk 

├─vda1          252:1    0   500M  0 part /boot

└─vda2          252:2    0 399.5G  0 part 

  ├─centos-root 253:0    0    50G  0 lvm  /

  ├─centos-swap 253:1    0   3.9G  0 lvm  

  └─centos-data 253:2    0 345.6G  0 lvm  /data


[root@compute1 ~]# ifconfig

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet 192.168.10.31  netmask 255.255.255.0  broadcast 192.168.10.255

        inet6 fe80::5054:ff:fe18:bb1b  prefixlen 64  scopeid 0x20<link>

        ether 52:54:00:18:bb:1b  txqueuelen 1000  (Ethernet)

        RX packets 16842  bytes 1460696 (1.3 MiB)

        RX errors 0  dropped 1416  overruns 0  frame 0

        TX packets 747  bytes 199340 (194.6 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet 10.0.0.31  netmask 255.255.0.0  broadcast 10.0.255.255

        inet6 fe80::5054:ff:fe28:e0a7  prefixlen 64  scopeid 0x20<link>

        ether 52:54:00:28:e0:a7  txqueuelen 1000  (Ethernet)

        RX packets 16213  bytes 1360633 (1.2 MiB)

        RX errors 0  dropped 1402  overruns 0  frame 0

        TX packets 23  bytes 1562 (1.5 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet 111.40.215.9  netmask 255.255.255.240  broadcast 111.40.215.15

        inet6 fe80::5054:ff:fe28:e07a  prefixlen 64  scopeid 0x20<link>

        ether 52:54:00:28:e0:7a  txqueuelen 1000  (Ethernet)

        RX packets 40  bytes 2895 (2.8 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 24  bytes 1900 (1.8 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536

        inet 127.0.0.1  netmask 255.0.0.0

        inet6 ::1  prefixlen 128  scopeid 0x10<host>

        loop  txqueuelen 0  (Local Loopback)

        RX packets 841  bytes 44167 (43.1 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 841  bytes 44167 (43.1 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[root@compute1 ~]# getenforce

Disabled

[root@compute1 ~]# iptables -vnL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)

 pkts bytes target     prot opt in     out     source               destination         


Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)

 pkts bytes target     prot opt in     out     source               destination         


Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)

 pkts bytes target     prot opt in     out     source               destination         

[root@compute1 ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.10 controller

192.168.10.20 block

192.168.10.31 compute1

192.168.10.32 compute2

[root@compute1 ~]#
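
The checks above show SELinux disabled, iptables empty and name resolution for all four nodes in place. On a freshly installed node that is not yet in this state, the usual preparation looks roughly like this (a sketch for the lab environment; production nodes should get a proper firewall policy instead of an open one):

# put SELinux into permissive mode for the running system and disable it for future boots
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# stop the default firewall so the OpenStack services are reachable during the lab setup
systemctl stop firewalld
systemctl disable firewalld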


Configure the time synchronization service

[root@compute1 ~]# yum install -y chrony

[root@compute1 ~]# vim /etc/chrony.conf 

[root@compute1 ~]# grep -v ^# /etc/chrony.conf | tr -s [[:space:]]

server controller iburst

stratumweight 0

driftfile /var/lib/chrony/drift

rtcsync

makestep 10 3

bindcmdaddress 127.0.0.1

bindcmdaddress ::1

keyfile /etc/chrony.keys

commandkey 1

generatecommandkey

noclientlog

logchange 0.5

logdir /var/log/chrony

[root@compute1 ~]# systemctl enable chronyd.service 

[root@compute1 ~]# systemctl start chronyd.service 

[root@compute1 ~]# chronyc sources

210 Number of sources = 1

MS Name/IP address         Stratum Poll Reach LastRx Last sample

===============================================================================

^* controller                    3   6    17    52    -15us[ -126us] +/-  138ms

[root@compute1 ~]#
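
The ^* in front of controller means this source has been selected and the node is synchronized. If that marker does not appear after a few polls, the current offset can be inspected and a step forced (the -a option is only needed on older chrony versions that require an authenticated command channel):

chronyc tracking
chronyc -a makestep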


Install the OpenStack client

[root@compute1 ~]# yum install -y python-openstackclient
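
This package comes from the OpenStack release repository, which is assumed to have been enabled on this node the same way it was on the controller (via the matching centos-release-openstack-* package or an RDO repo file); a quick sanity check after the install:

# the client should be on the PATH, and an OpenStack repo should show up as enabled
openstack --version
yum repolist enabled | grep -i openstack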


Install and configure nova-compute

[root@compute1 ~]# yum install -y openstack-nova-compute

[root@compute1 ~]# cp /etc/nova/nova.conf{,.bak}

[root@compute1 ~]# vim /etc/nova/nova.conf

[root@compute1 ~]# grep -v ^# /etc/nova/nova.conf | tr -s [[:space:]]

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 192.168.10.31

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]

[barbican]

[cache]

[cells]

[cinder]

[conductor]

[cors]

[cors.subdomain]

[database]

[ephemeral_storage_encryption]

[glance]

api_servers = http://controller:9292

[guestfs]

[hyperv]

[image_file_url]

[ironic]

[keymgr]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = NOVA_PASS

[libvirt]

[matchmaker_redis]

[metrics]

[neutron]

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = RABBIT_PASS

[oslo_middleware]

[oslo_policy]

[rdp]

[serial_console]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vmware]

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = $my_ip

novncproxy_base_url = http://controller:6080/vnc_auto.html

[workarounds]

[xenserver]

[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo  // check whether the CPU supports hardware acceleration for virtual machines

8

[root@compute1 ~]#

If the result of this check is 0, refer to the section on enabling nested virtualization for KVM guests in the OpenStack environment preparation article.
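
If nested virtualization is not an option at all, the install guide's fallback is to run instances under plain QEMU emulation instead of KVM, by setting virt_type in the [libvirt] section of /etc/nova/nova.conf (slower, but functional for a lab):

[libvirt]
virt_type = qemu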


[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service

Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.

[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service  // no listening port is opened on the compute node, so the services can only be checked via their status

[root@compute1 ~]# systemctl status libvirtd.service openstack-nova-compute.service

● libvirtd.service - Virtualization daemon

   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)

   Active: active (running) since Sun 2017-07-16 19:10:26 CST; 12min ago

     Docs: man:libvirtd(8)

           http://libvirt.org

 Main PID: 1002 (libvirtd)

   CGroup: /system.slice/libvirtd.service

           └─1002 /usr/sbin/libvirtd


Jul 16 19:10:26 compute1 systemd[1]: Starting Virtualization daemon...

Jul 16 19:10:26 compute1 systemd[1]: Started Virtualization daemon.

Jul 16 19:21:06 compute1 systemd[1]: Started Virtualization daemon.


● openstack-nova-compute.service - OpenStack Nova Compute Server

   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)

   Active: active (running) since Sun 2017-07-16 19:21:11 CST; 1min 21s ago

 Main PID: 1269 (nova-compute)

   CGroup: /system.slice/openstack-nova-compute.service

           └─1269 /usr/bin/python2 /usr/bin/nova-compute


Jul 16 19:21:06 compute1 systemd[1]: Starting OpenStack Nova Compute Server...

Jul 16 19:21:11 compute1 nova-compute[1269]: /usr/lib/python2.7/site-packages/pkg_resources/__init__.py:187: RuntimeWarning: You have...

Jul 16 19:21:11 compute1 nova-compute[1269]: stacklevel=1,

Jul 16 19:21:11 compute1 systemd[1]: Started OpenStack Nova Compute Server.

Hint: Some lines were ellipsized, use -l to show in full.

[root@compute1 ~]#
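
Because nova-compute exposes no listening port, the service log is the other thing worth checking on the compute node itself (the path is the RDO packaging default):

# any tracebacks or AMQP connection errors will show up here
grep -iE 'error|traceback' /var/log/nova/nova-compute.log | tail -n 20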


Go to the controller node to verify the compute service configuration
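
For reference, the check run there looks like this (executed on the controller with admin credentials sourced; admin-openrc is the credentials file name used in the install guide and may differ in your environment). A nova-compute entry for compute1 with state up confirms the node has registered:

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack compute service list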


Install and configure the Neutron agent

Once the network configuration on the controller node is complete, continue with the following steps.

[root@compute1 ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset

[root@compute1 ~]# cp /etc/neutron/neutron.conf{,.bak}

[root@compute1 ~]# vim /etc/neutron/neutron.conf

[root@compute1 ~]# grep -v ^# /etc/neutron/neutron.conf | tr -s [[:space:]]

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

[agent]

[cors]

[cors.subdomain]

[database]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = NEUTRON_PASS

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = RABBIT_PASS

[oslo_policy]

[qos]

[quotas]

[ssl]

[root@compute1 ~]#


Linux bridge agent configuration

[root@compute1 ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak} 

[root@compute1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@compute1 ~]# grep -v ^# /etc/neutron/plugins/ml2/linuxbridge_agent.ini | tr -s [[:space:]]

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:eth1

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = True

local_ip = 192.168.10.31

l2_population = True

[root@compute1 ~]#
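
Later versions of the install guide also ask that the bridge netfilter sysctls return 1 before starting the agent; on this CentOS 7 kernel the functionality may be built into the bridge module rather than a separate br_netfilter module, so treat the modprobe as optional:

# load the module if it exists as a separate one, then confirm the sysctls
modprobe br_netfilter 2>/dev/null || true
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables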


Edit the nova configuration file again and append the network settings; they belong in the [neutron] section.

[root@compute1 ~]# vim /etc/nova/nova.conf

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = NEUTRON_PASS


Restart the compute service, then enable and start the Linux bridge agent

[root@compute1 ~]# systemctl restart openstack-nova-compute.service

[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service

Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service
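
A quick local check that the agent started cleanly and reached RabbitMQ (log path per the RDO packaging):

systemctl status neutron-linuxbridge-agent.service
tail -n 20 /var/log/neutron/linuxbridge-agent.log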


Go to the controller node to verify the network service configuration
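
For reference, the check on the controller lists the registered agents; a Linux bridge agent entry for compute1 reported as alive confirms this node's agent is talking to the Neutron server (older clients use neutron agent-list, newer ones openstack network agent list):

[root@controller ~]# source admin-openrc
[root@controller ~]# neutron agent-list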

This article originally appeared on the "爱情防火墙" blog; please keep this attribution: http://183530300.blog.51cto.com/894387/1957732
