Deploying OpenStack from Binary Packages

1. Environment preparation

1.1 Machine preparation

Hostname          Spec   OS           IP address
controller-node   4C8G   CentOS 7.9   172.17.1.117
computer-node1    4C8G   CentOS 7.9   172.17.1.118

1.2 Network architecture

[Network architecture diagram]

[root@cotroller-node ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ea:52:98 brd ff:ff:ff:ff:ff:ff
    inet 172.17.1.117/24 brd 172.17.1.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feea:5298/64 scope link
       valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ea:52:a2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.117/24 brd 192.168.1.255 scope global ens36
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feea:52a2/64 scope link
       valid_lft forever preferred_lft forever
4: tapbd3d44cd-c8@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:2d:69:2f:f3:18 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@cotroller-node ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens33
UUID=8f03bc10-de37-4254-97bd-d9fe57c2b5d5
DEVICE=ens33
ONBOOT=yes
IPADDR=172.17.1.117
NETMASK=255.255.255.0
GATEWAY=172.17.1.2
DNS1=8.8.8.8
[root@cotroller-node ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens36
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens36
UUID=8f03bc10-de37-4254-97bd-d9fe57c2b5d5
DEVICE=ens36
ONBOOT=yes
IPADDR=192.168.1.117
NETMASK=255.255.255.0

1.3 Component passwords

Component            Password    Note
cluster admin        admin       user: admin
MariaDB database     admin       user: root
RabbitMQ queue       admin       user: openstack
keystone database    admin       user: keystone
placement database   admin       user: placement
placement user       placement
glance database      admin       user: glance
glance user          glance
cinder database      admin       user: cinder
cinder user          cinder
neutron database     admin       user: neutron
neutron user         neutron
nova database        admin       user: nova
nova user            nova

1.4 Environment initialization

1.4.1 Disable the firewall and SELinux
[root@cotroller-node ~]# systemctl stop firewalld
[root@cotroller-node ~]# systemctl disable firewalld
[root@cotroller-node ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@cotroller-node ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
1.4.2 Edit the hosts file
[root@cotroller-node ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.17.1.117 controller-node controller.wengsq.com
172.17.1.118 computer-node1 node1.wengsq.com
1.4.3 Time synchronization
[root@cotroller-node ~]# yum install ntpdate -y
[root@cotroller-node ~]# crontab -l
* * * * * ntpdate ntp.aliyun.com
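Before trusting the cron entry, it is worth running one manual sync to confirm the NTP server is reachable (a quick check, not part of the original steps):
[root@cotroller-node ~]# ntpdate ntp.aliyun.com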
1.4.4 Configure the OpenStack yum repositories
(1) Configure the extras repo
[root@cotroller-node yum.repos.d]# cat extras.repo
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
(2) Configure the OpenStack Train yum repo
[root@cotroller-node yum.repos.d]# pwd
/etc/yum.repos.d
[root@cotroller-node yum.repos.d]# yum install centos-release-openstack-train -y
[root@cotroller-node yum.repos.d]# sed -i 's/mirrorlist=/#mirrorlist=/g' /etc/yum.repos.d/CentOS-OpenStack-train.repo
[root@cotroller-node yum.repos.d]# sed -i 's/#baseurl=/baseurl=/g' /etc/yum.repos.d/CentOS-OpenStack-train.repo
[root@cotroller-node yum.repos.d]# sed -i 's@http://mirror.centos.org@https://mirrors.aliyun.com@g' /etc/yum.repos.d/CentOS-OpenStack-train.repo
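Before installing anything, a quick sanity check of the repository setup can save time; a sketch (the exact repo names may vary with the release package):
[root@cotroller-node yum.repos.d]# yum clean all
[root@cotroller-node yum.repos.d]# yum makecache
[root@cotroller-node yum.repos.d]# yum repolist | grep -Ei 'openstack|extras'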

2. Deploying the controller node components

2.1 Install python-openstackclient

[root@cotroller-node ~]# yum install python-openstackclient -y
# Verify
[root@cotroller-node ~]# openstack --version
openstack 4.0.2

2.2 Deploy the database

(1) Install the MariaDB database
[root@cotroller-node ~]# yum install mariadb mariadb-server python2-PyMySQL -y
[root@cotroller-node ~]# systemctl --now enable mariadb
[root@cotroller-node ~]# systemctl status mariadb
# Verify that the database service started successfully
[root@cotroller-node ~]# ss -ntl|grep 3306
LISTEN     0      80        [::]:3306                  [::]:*
# Initialize (secure) the database
[root@cotroller-node ~]# /usr/bin/mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n]
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n]
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n]
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n]
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n]
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
# Note: by default the database lives under /var/lib/mysql. For a production database you would normally mount a dedicated disk there, ideally a fast SSD, to guarantee database performance; in a test environment the local directory is fine.
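For the production setup mentioned in the note, moving the data directory onto a dedicated disk could look roughly like the sketch below. This is not part of the original deployment: /dev/sdd is a hypothetical SSD and XFS is an arbitrary filesystem choice.
[root@cotroller-node ~]# systemctl stop mariadb
[root@cotroller-node ~]# mkfs.xfs /dev/sdd                                    # hypothetical dedicated SSD
[root@cotroller-node ~]# mv /var/lib/mysql /var/lib/mysql.bak                 # keep the existing data
[root@cotroller-node ~]# mkdir /var/lib/mysql
[root@cotroller-node ~]# echo '/dev/sdd /var/lib/mysql xfs defaults 0 0' >> /etc/fstab
[root@cotroller-node ~]# mount /var/lib/mysql
[root@cotroller-node ~]# cp -a /var/lib/mysql.bak/. /var/lib/mysql/           # copy the data onto the new disk
[root@cotroller-node ~]# chown -R mysql:mysql /var/lib/mysql
[root@cotroller-node ~]# systemctl start mariadb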

2.3 Install the message queue

(1) Install RabbitMQ
[root@cotroller-node ~]# yum install rabbitmq-server -y
[root@cotroller-node ~]# systemctl --now enable rabbitmq-server
(2) Create the management account
[root@cotroller-node ~]# rabbitmqctl add_user openstack admin   # user: openstack, password: admin
(3) Bind the user to a role
[root@cotroller-node ~]# rabbitmqctl set_user_tags openstack administrator  # make the openstack user an administrator
Setting tags for user "openstack" to [administrator]
(4) Set the user's permissions
[root@cotroller-node ~]# rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
# Verify that the user was created
[root@cotroller-node ~]# rabbitmqctl list_users
Listing users
openstack	[administrator]
guest	[administrator]
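Optionally, the RabbitMQ management UI makes it easier to watch queues and connections while the other components come online; it is not required by OpenStack (assumes the default management port 15672):
[root@cotroller-node ~]# rabbitmq-plugins enable rabbitmq_management
# then browse to http://172.17.1.117:15672/ and log in as openstack/admin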

2.4 Install the cache service

(1) Deploy memcached
Note: the keystone service stores authentication tokens in memcached, so this service must be installed on the controller node.
[root@cotroller-node ~]# yum install memcached python-memcached -y
(2) Modify the memcached configuration
[root@cotroller-node ~]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="512"
OPTIONS="-l 127.0.0.1,::1,cotroller-node"
(3) Start the service
[root@cotroller-node ~]# systemctl --now enable memcached
[root@cotroller-node ~]# ss -ntlp|grep mem
LISTEN     0      128    192.168.1.117:11211                    *:*                   users:(("memcached",pid=985,fd=31))
LISTEN     0      128    172.17.1.117:11211                    *:*                   users:(("memcached",pid=985,fd=30))
LISTEN     0      128    127.0.0.1:11211                    *:*                   users:(("memcached",pid=985,fd=26))
LISTEN     0      128    [fe80::20c:29ff:feea:52a2]%ens36:11211                 [::]:*                   users:(("memcached",pid=985,fd=29))
LISTEN     0      128    [fe80::20c:29ff:feea:5298]%ens33:11211                 [::]:*                   users:(("memcached",pid=985,fd=28))
LISTEN     0      128      [::1]:11211                 [::]:*                   users:(("memcached",pid=985,fd=27))
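A quick way to confirm memcached is actually answering, beyond the listening sockets above, is the stats command of its text protocol (nc is provided by the nmap-ncat package on CentOS 7):
[root@cotroller-node ~]# echo stats | nc -w 1 127.0.0.1 11211 | head -n 5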

2.5 Create the component databases

[root@cotroller-node ~]# mysql -u root -padmin
# Create the databases and users needed by the OpenStack core components, and grant each component user privileges on its own database
(1) Create the keystone database
create database keystone default character set utf8;
grant all privileges on keystone.* to keystone@'172.17.1.%' identified by 'admin';
grant all privileges on keystone.* to keystone@'localhost' identified by 'admin';
(2) Create the placement database
create database placement default character set utf8;
grant all privileges on placement.* to placement@'172.17.1.%' identified by 'admin';
grant all privileges on placement.* to placement@'localhost' identified by 'admin';
(3) Create the glance database
create database glance default character set utf8;
grant all privileges on glance.* to glance@'172.17.1.%' identified by 'admin';
grant all privileges on glance.* to glance@'localhost' identified by 'admin';
(4) Create the neutron database
create database neutron default character set utf8;
grant all privileges on neutron.* to neutron@'172.17.1.%' identified by 'admin';
grant all privileges on neutron.* to neutron@'localhost' identified by 'admin';
(5) Create the cinder database
create database cinder default character set utf8;
grant all privileges on cinder.* to cinder@'172.17.1.%' identified by 'admin';
grant all privileges on cinder.* to cinder@'localhost' identified by 'admin';
(6) Create the nova database
create database nova default character set utf8;
grant all privileges on nova.* to nova@'172.17.1.%' identified by 'admin';
grant all privileges on nova.* to nova@'localhost' identified by 'admin';
(7) Create the nova_api database
create database nova_api default character set utf8;
grant all privileges on nova_api.* to nova@'172.17.1.%' identified by 'admin';
grant all privileges on nova_api.* to nova@'localhost' identified by 'admin';
(8) Create the nova_cell0 database
create database nova_cell0 default character set utf8;
grant all privileges on nova_cell0.* to nova@'172.17.1.%' identified by 'admin';
grant all privileges on nova_cell0.* to nova@'localhost' identified by 'admin';
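Before moving on, it is worth confirming that each component user can actually log in and see its database; a minimal sketch, assuming all the passwords follow the table in section 1.3:
[root@cotroller-node ~]# for u in keystone placement glance neutron cinder nova; do
>   mysql -h controller-node -u $u -padmin -e 'show databases;' | grep -w $u
> done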

2.6 Install the OpenStack controller components

[root@cotroller-node ~]# yum install -y openstack-keystone httpd mod_wsgi openstack-placement-api openstack-glance openstack-cinder openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge openstack-nova openstack-dashboard
2.6.1 Configure the keystone component
2.6.1.1 Edit the keystone configuration file
[root@cotroller-node ~]# vim /etc/keystone/keystone.conf
373:[cache]
432:backend = oslo_cache.memcache_pool
433:enabled = true
434:memcache_servers = localhost:11211
585:[database]
604:connection = mysql+pymysql://keystone:admin@controller-node/keystone
2.6.1.2 Initialize the keystone database
  1. Populate the identity service database
[root@cotroller-node ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

2. Initialize the fernet key repositories

[root@cotroller-node ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@cotroller-node ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

3. Set the administrator password variable; this admin account is also used later to log in to the OpenStack dashboard

[root@cotroller-node ~]#  export ADMIN_PASS=admin

4. Bootstrap the identity service

[root@cotroller-node ~]# keystone-manage bootstrap --bootstrap-password $ADMIN_PASS \
--bootstrap-admin-url http://controller-node:5000/v3/ \
--bootstrap-internal-url http://controller-node:5000/v3/ \
--bootstrap-public-url http://controller-node:5000/v3/ \
--bootstrap-region-id RegionOne
2.6.1.3 Configure the Apache server
# Edit the configuration file
[root@cotroller-node ~]# vim /etc/httpd/conf/httpd.conf
42:Listen controller-node:80
96:ServerName controller-node:80
# Symlink the keystone config into /etc/httpd/conf.d/
[root@cotroller-node ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
# Open wsgi-keystone.conf and change the Listen address on its first line
[root@cotroller-node ~]# cat /usr/share/keystone/wsgi-keystone.conf
Listen controller-node:5000
# Start the httpd service
[root@cotroller-node ~]# systemctl enable --now httpd
# Create the admin credentials file
[root@cotroller-node ~]# cat /etc/profile.d/admin-openrc
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller-node:5000/v3
export OS_IDENTITY_API_VERSION=3
[root@cotroller-node ~]# source /etc/profile.d/admin-openrc
2.6.1.4 Verify that the keystone service is working
[root@cotroller-node ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 53921aab7e454d62abaaf9c7b082a2de | admin     |    
+----------------------------------+-----------+
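Requesting a token is another useful smoke test; if it prints a token table, keystone, memcached and the database are all wired together correctly:
[root@cotroller-node ~]# openstack token issue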
2.6.1.5 Create the service project
[root@cotroller-node ~]# openstack project create --domain default --description "Service project" service
# Note: this service project is used by the components configured below
2.6.2 Configure the placement component
2.6.2.1 Create the placement user
[root@cotroller-node ~]# openstack user create --domain default --password-prompt placement
2.6.2.2 Assign the role

Add the placement user to the admin role.

[root@cotroller-node ~]# openstack role add --project service --user placement admin
2.6.2.3 Create the placement service
[root@cotroller-node ~]# openstack service create --name placement --description "Placement API" placement
2.6.2.4 Create the service endpoints
[root@cotroller-node ~]# openstack endpoint create --region RegionOne placement public http://controller-node:8778
[root@cotroller-node ~]# openstack endpoint create --region RegionOne placement internal http://controller-node:8778
[root@cotroller-node ~]# openstack endpoint create --region RegionOne placement admin http://controller-node:8778
2.6.2.5 Edit the placement configuration file
[root@cotroller-node ~]# grep -Evn "^#|^$" /etc/placement/placement.conf
1:[DEFAULT]
191:[api]
209:auth_strategy = keystone
212:[cors]
241:[keystone_authtoken]
256:www_authenticate_uri = http://controller-node:5000
257:auth_url = http://controller-node:5000/v3
258:auth_version = v3
259:service_token_roles = service
260:service_token_roles_required = true
261:memcached_servers = controller-node:11211
262:auth_type = password
263:project_domain_name = Default
264:user_domain_name = Default
265:project_name = service
266:username = placement
267:password = placement
413:[oslo_policy]
462:[placement]
511:[placement_database]
524:connection = mysql+pymysql://placement:admin@controller-node/placement
577:[profiler]
2.6.2.6 Initialize the placement database
[root@cotroller-node ~]# su -s /bin/sh -c "placement-manage db sync" placement#注意:这个命令可能会看到警告信息,对后续没有影响。数据库初始化完成后,会在/etc/httpd/conf.d/目录下生成一个00-placement-api.conf文件,里面是placement的虚拟机主机,打开这个文件,将里面的监听
[root@cotroller-node ~]# cat /etc/httpd/conf.d/00-placement-api.conf
Listen cotroller-node:8778#注意:同时还要在这个文件的<VirtualHost *:8778>里添加下面 ... 中间的这部分代码:<Directory /usr/bin>Require all denied<Files "placement-api"><RequireAll>Require all grantedRequire not env blockAccess</RequireAll></Files></Directory>#注意:这是2.2和2.4版本的httpd的差异,添加这段内容,httpd才能正常调用/usr/bin/placement-api命令,placement-api才能在httpd2.4版本上正常工作。#重启服务
[root@cotroller-node ~]# systemctl restart httpd
2.6.2.7 Verify that the placement service works
[root@cotroller-node ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
[root@cotroller-node ~]# echo $?
0
Return codes:
0   - all upgrade readiness checks passed
1   - at least one check found an issue that needs investigation; it may be only a warning and the upgrade can still succeed
2   - an upgrade status check failed and must be investigated; something may cause the upgrade to fail
255 - unknown error

Note: if everything is normal the return code is 0. The Placement service is now fully configured.
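The API can also be probed directly; the root URL returns a small version document (the python -m json.tool pipe is only for pretty-printing and can be dropped):
[root@cotroller-node ~]# curl -s http://controller-node:8778 | python -m json.tool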

2.6.3 Configure the glance component
2.6.3.1 Create the glance user
[root@cotroller-node ~]# openstack user create --domain default --password-prompt glance
2.6.3.2 Assign the role

Add the glance user to the admin role.

[root@cotroller-node ~]# openstack role add --project service --user glance admin
2.6.3.3 Create the glance service
[root@cotroller-node ~]# openstack service create --name glance --description "OpenStack image" image
2.6.3.4 Create the service endpoints
[root@cotroller-node ~]# openstack endpoint create --region RegionOne glance public http://controller-node:9292
[root@cotroller-node ~]# openstack endpoint create --region RegionOne glance internal http://controller-node:9292
[root@cotroller-node ~]# openstack endpoint create --region RegionOne glance admin http://controller-node:9292
2.6.3.5 Edit the glance configuration file
[root@cotroller-node ~]# grep -Evn "^#|^$" /etc/glance/glance-api.conf
1:[DEFAULT]
2:bind_host=172.17.1.117
2071:[database]
2090:connection = mysql+pymysql://glance:admin@controller-node/glance
3349:[glance_store]
3350:stores = file,http
3351:default_store = file
3353:filesystem_store_datadir = /var/lib/glance/images
4863:[keystone_authtoken]
4864:www_authenticate_uri = http://controller-node:5000
4865:auth_version = v3
4866:auth_url = http://controller-node:5000/v3
4867:memcached_servers = controller-node:11211
4868:service_token_roles = service
4869:service_token_roles_required = True
4870:auth_type = password
4871:project_domain_name = Default
4872:user_domain_name = Default
4873:project_name = service
4874:username = glance
4875:password = glance
5509:[paste_deploy]
5511:flavor = keystone
# Note: the local filesystem is used here as the image store, i.e. image files are kept on the controller node's disk. In a production environment this would typically be connected to a Ceph cluster instead.
2.6.3.6 Initialize the glance database
[root@cotroller-node ~]# su -s /bin/sh -c "glance-manage db_sync" glance
2.6.3.7 Start the glance-api service
[root@cotroller-node ~]# systemctl --now enable openstack-glance-api.service
# Verify
[root@cotroller-node ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
+--------------------------------------+--------+--------+
# Image creation test
[root@cotroller-node ~]# openstack image create --file /root/cirros-0.6.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
[root@cotroller-node ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 834231aa-3440-490f-834e-01f805ee07be | cirros | active |
+--------------------------------------+--------+--------+
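The cirros image used above was downloaded to /root beforehand; if you still need it, one way to fetch it (assuming outbound internet access and that the 0.6.1 release is still published at the usual location):
[root@cotroller-node ~]# curl -L -o /root/cirros-0.6.1-x86_64-disk.img https://download.cirros-cloud.net/0.6.1/cirros-0.6.1-x86_64-disk.img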
2.6.4 Configure the Cinder component
2.6.4.1 Create the cinder user
[root@cotroller-node ~]# openstack user create --domain default --password-prompt cinder
2.6.4.2 Assign the role

Add the cinder user to the admin role.

[root@cotroller-node ~]# openstack role add --project service --user cinder admin
2.6.4.3 Create the cinder services (both the v2 and v3 block-storage services are required)
[root@cotroller-node ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@cotroller-node ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
2.6.4.4 Create the service endpoints
  • v2 endpoints
[root@cotroller-node ~]# openstack endpoint create --region RegionOne volumev2 public http://controller-node:8776/v2/%\(project_id\)s
[root@cotroller-node ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller-node:8776/v2/%\(project_id\)s
[root@cotroller-node ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller-node:8776/v2/%\(project_id\)s
  • v3 endpoints
[root@cotroller-node ~]# openstack endpoint create --region RegionOne volumev3 public http://controller-node:8776/v3/%\(project_id\)s
[root@cotroller-node ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller-node:8776/v3/%\(project_id\)s
[root@cotroller-node ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller-node:8776/v3/%\(project_id\)s

Note: if the controller and compute nodes are separate machines, only the first two services below need to run on the controller node, and only the last two on the compute node. Because different services are installed, the configurations of the two nodes also differ.

  • openstack-cinder-api, the core API service
  • openstack-cinder-scheduler, schedules requests and picks a suitable node to handle them
  • openstack-cinder-volume, drives the volume backend to create the actual storage volumes
  • openstack-cinder-backup, handles volume backup/snapshot tasks
2.6.4.5 Edit the cinder configuration file
[root@cotroller-node ~]# grep -Evn "^#|^$" /etc/cinder/cinder.conf
1:[DEFAULT]
2:auth_strategy = keystone
3:glance_api_servers = http://controller-node:9292
4:my_ip = 172.17.1.117
5:osapi_volume_listen = 172.17.1.117
6:transport_url = rabbit://openstack:admin@controller-node
3843:[database]
3844:connection = mysql+pymysql://cinder:admin@controller-node/cinder
4098:[keystone_authtoken]
4099:www_authenticate_uri = http://controller-node:5000
4100:auth_version = v3
4101:auth_url = http://controller-node:5000
4102:memcached_servers = 172.17.1.117:11211
4103:auth_type = password
4104:project_domain_name = default
4105:user_domain_name = default
4106:project_name = service
4107:username = cinder
4108:password = cinder
4324:[oslo_concurrency]
4325:lock_path = /var/lib/cinder/tmp
2.6.4.6 Initialize the cinder database
[root@cotroller-node ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
[root@cotroller-node ~]# systemctl --now enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@cotroller-node ~]# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
2.6.5 Configure the Neutron component
2.6.5.1 Create the neutron user
[root@cotroller-node ~]# openstack user create --domain default --password-prompt neutron
2.6.5.2 Assign the role

Add the neutron user to the admin role.

[root@cotroller-node ~]# openstack role add --project service --user neutron admin
2.6.5.3 Create the neutron service
[root@cotroller-node ~]# openstack service create --name neutron --description "OpenStack Networking" network
2.6.5.4 Create the service endpoints
[root@cotroller-node ~]# openstack endpoint create --region RegionOne neutron public http://controller-node:9696
[root@cotroller-node ~]# openstack endpoint create --region RegionOne neutron internal http://controller-node:9696
[root@cotroller-node ~]# openstack endpoint create --region RegionOne neutron admin http://controller-node:9696
2.6.5.5 Edit the neutron configuration file
# A provider network (a plain layer-2 network) is used. Open /etc/neutron/neutron.conf; the settings to add or change are:
[root@cotroller-node ~]# grep -Evn "^#|^$" /etc/neutron/neutron.conf
1:[DEFAULT]
2:core_plugin = ml2
3:service_plugins =
4:notify_nova_on_port_status_changes = true
5:notify_nova_on_port_data_changes = true
6:transport_url = rabbit://openstack:admin@controller-node
7:auth_strategy = keystone
261:[database]
262:connection = mysql+pymysql://neutron:admin@controller-node/neutron
366:[keystone_authtoken]
367:www_authenticate_uri = http://controller-node:5000
368:auth_url = http://controller-node:5000/v3
369:memcached_servers = localhost:11211
370:auth_type = password
371:project_domain_name = default
372:user_domain_name = default
373:project_name = service
374:username = neutron
375:password = neutron
536:[oslo_concurrency]
538:lock_path = /var/lib/neutron/tmp
# Neutron needs to interact with the Nova service, so append the Nova settings at the end of the file
1084:[nova]
1085:auth_url = http://controller-node:5000
1086:auth_type = password
1087:project_domain_name = default
1088:user_domain_name = default
1089:region_name = RegionOne
1090:project_name = service
1091:username = nova
1092:password = nova
2.6.5.6 Edit the ML2 configuration
[root@cotroller-node ~]# grep -Evn "^#|^$"  /etc/neutron/plugins/ml2/ml2_conf.ini
1:[DEFAULT]
151:[ml2]
152:type_drivers = flat,vlan
153:tenant_network_types =
154:mechanism_drivers = linuxbridge
155:extension_drivers = port_security
157:[ml2_type_flat]
158:flat_networks = provider
159:
160:[securitygroup]
161:enable_ipset = true
# Note: this configuration supports only flat and vlan network types
2.6.5.7 Edit the Linux bridge agent configuration file
[root@cotroller-node ~]# grep -Evn "^#|^$"  /etc/neutron/plugins/ml2/linuxbridge_agent.ini
1:[DEFAULT]
151:[linux_bridge]
152:physical_interface_mappings = provider:ens36
153:
154:[vxlan]
155:enable_vxlan = false
156:
157:[securitygroup]
158:enable_security_group = true
159:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# Note: the interface after "provider:" is the controller node's NIC for the VM network; adjust it to your environment, since the name of the second NIC can differ from machine to machine. On CentOS 7 this configuration also requires a module-load entry:
[root@cotroller-node ~]# cat /etc/modules-load.d/neutron.conf
br_netfilter
# modules-load.d only takes effect at boot, so load br_netfilter manually as well:
[root@cotroller-node ~]# modprobe br_netfilter
[root@cotroller-node ~]# echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
[root@cotroller-node ~]# echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
[root@cotroller-node ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@cotroller-node ~]# sysctl -p
2.6.5.8 Configure the DHCP agent
[root@cotroller-node ~]# grep -Evn "^#|^$" /etc/neutron/dhcp_agent.ini
1:[DEFAULT]
# Append the following three lines at the end of this file.
151:interface_driver = linuxbridge
152:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
153:enable_isolated_metadata = true
2.6.5.9 Configure the metadata agent
[root@cotroller-node ~]# grep -Evn "^#|^$" /etc/neutron/metadata_agent.ini
1:[DEFAULT]
2:nova_metadata_host = controller-node
3:metadata_proxy_shared_secret = admin
# Note: these two lines must also be written into the [DEFAULT] section of /etc/neutron/neutron.conf, otherwise neutron-metadata-agent cannot pick up this configuration
# Create the symlink for the ml2 plugin config
[root@cotroller-node ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2.6.5.10 Initialize the neutron database
[root@cotroller-node ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# Start the services
[root@cotroller-node ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@cotroller-node ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# Note: enable the services at boot only after confirming they run correctly; otherwise repeated failed restarts will leave a unit in a state where it can no longer be started. Reset such a unit with the command below before starting it again:
[root@cotroller-node ~]# systemctl reset-failed neutron-server   # shown here for neutron-server; run the same command for any other service stuck in this state
# Verify
[root@cotroller-node ~]# openstack port list   # if this executes normally, the neutron service is healthy
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                           | Status |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
2.6.5.11 Create the provider network
(1) Create a virtual network
[root@cotroller-node ~]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat flat_net
(2) Create a subnet, which assigns IP addresses to the VMs
[root@cotroller-node ~]# openstack subnet create --network flat_net --allocation-pool start=192.168.1.150,end=192.168.1.253 --dns-nameserver=192.168.1.2 --gateway=192.168.1.2 --subnet-range=192.168.1.0/24 flat_subnet
# List the networks and their subnets
[root@cotroller-node ~]# openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 3dbc31ea-da18-4e71-ab9a-9986a112a856 | flat_net | af3f0158-6ef3-476a-bfe8-9c2dc2ac8761 |
+--------------------------------------+----------+--------------------------------------+
2.6.6 Configure the Nova component
2.6.6.1 Create the nova user
[root@cotroller-node ~]# openstack user create --domain default --password-prompt nova
2.6.6.2 Assign the role
[root@cotroller-node ~]# openstack role add --project service --user nova admin
2.6.6.3 Create the nova service
[root@cotroller-node ~]# openstack service create --name nova --description "OpenStack Compute" compute
2.6.6.4 Create the service endpoints
[root@cotroller-node ~]# openstack endpoint create --region RegionOne compute public http://controller-node:8774/v2.1
[root@cotroller-node ~]# openstack endpoint create --region RegionOne compute internal http://controller-node:8774/v2.1
[root@cotroller-node ~]# openstack endpoint create --region RegionOne compute admin http://controller-node:8774/v2.1
2.6.6.5 Edit the nova configuration file
[root@cotroller-node ~]# grep -Evn "^#|^$" /etc/nova/nova.conf
1:[DEFAULT]
2:enabled_apis = osapi_compute,metadata
3:my_ip=172.17.1.117
4:metadata_host=$my_ip
5:firewall_driver=nova.virt.firewall.NoopFirewallDriver
6:transport_url=rabbit://openstack:admin@controller-node
1631:[api]
1632:auth_strategy=keystone
1833:[api_database]
1848:connection=mysql://nova:admin@controller-node/nova_api   # note: mysql:// (without +pymysql) uses the MySQLdb driver
2047:[cinder]
2048:catalog_info=volumev3::internalURL
2049:os_region_name=RegionOne
2050:auth_type=password
2051:auth_url=http://controller-node:5000
2052:project_name=service
2053:project_domain_name=default
2054:username=cinder
2055:user_domain_name=default
2056:password=cinder
2327:[database]
2346:connection=mysql+pymysql://nova:admin@controller-node/nova
2638:[glance]
2649:api_servers=http://controller-node:9292
3215:[keystone_authtoken]
3216:www_authenticate_uri=http://controller-node:5000/
3217:auth_url=http://controller-node:5000
3218:memcached_servers=controller-node:11211
3219:auth_type=password
3220:project_domain_name = Default
3221:user_domain_name = Default
3222:project_name = service
3223:username = nova
3224:password = nova
4001:[neutron]
4004:auth_type = password
4005:auth_url = http://controller-node:5000
4006:project_domain_name = default
4007:user_domain_name = default
4008:region_name = RegionOne
4009:project_name = service
4010:username = neutron
4011:password = neutron
4012:service_metadata_proxy = true
4013:metadata_proxy_shared_secret = admin     # Note: this value must match the one configured for the neutron metadata agent, otherwise the two components cannot talk to each other
4773:[placement]
4774:auth_type=password
4775:auth_url=http://controller-node:5000/v3
4776:project_name=service
4777:project_domain_name=default
4778:username=placement
4779:user_domain_name=default
4780:password=placement
4781:region_name=RegionOne
5907:[vnc]
5908:enabled=true
5909:server_listen=$my_ip
5910:server_proxyclient_address=$my_ip
5911:novncproxy_host=controller-node
2.6.6.6 Initialize the Nova databases
[root@cotroller-node ~]# su -s /bin/sh -c "nova-manage api_db sync" nova   # this fails with the following error:
An error has occurred:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 2708, in main
    ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 968, in sync
    return migration.db_sync(version, database='api')
  File "/usr/lib/python2.7/site-packages/nova/db/migration.py", line 26, in db_sync
    return IMPL.db_sync(version=version, database=database, context=context)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/migration.py", line 53, in db_sync
    current_version = db_version(database, context=context)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/migration.py", line 66, in db_version
    return versioning_api.db_version(get_engine(database, context=context),
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/migration.py", line 43, in get_engine
    return db_session.get_api_engine()
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 152, in get_api_engine
    return api_context_manager.writer.get_engine()
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 833, in get_engine
    return self._factory.get_writer_engine()
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 372, in get_writer_engine
    self._start()
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 510, in _start
    engine_args, maker_args)
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 534, in _setup_for_connection
    sql_connection=sql_connection, **engine_kwargs)
  File "/usr/lib/python2.7/site-packages/debtcollector/renames.py", line 43, in decorator
    return wrapped(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 177, in create_engine
    engine = sqlalchemy.create_engine(url, **engine_args)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 431, in create_engine
    return strategy.create(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 87, in create
    dbapi = dialect_cls.dbapi(**dbapi_args)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 104, in dbapi
    return __import__("MySQLdb")
ImportError: No module named MySQLdb
# Cause: as the error message makes clear, the MySQLdb module is missing (the nova_api connection string uses the plain mysql:// driver)
# Fix:
[root@cotroller-node ~]# yum install MySQL-python -y
# Run the Nova API database initialization again and it completes normally
[root@cotroller-node ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
# Register the cell0 database
[root@cotroller-node ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# Create the cell1 cell
[root@cotroller-node ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# Initialize the main Nova database
[root@cotroller-node ~]# su -s /bin/sh -c "nova-manage db sync" nova
# Verify that cell0 and cell1 are registered correctly
[root@cotroller-node ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+-----------------------------------------+------------------------------------------------------+----------+
|  Name |                 UUID                 |              Transport URL              |                 Database Connection                  | Disabled |
+-------+--------------------------------------+-----------------------------------------+------------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |                  none:/                 | mysql+pymysql://nova:****@controller-node/nova_cell0 |  False   |
| cell1 | 2e431643-3a24-44da-9e2c-e197d7f26262 | rabbit://openstack:****@controller-node |    mysql+pymysql://nova:****@controller-node/nova    |  False   |
+-------+--------------------------------------+-----------------------------------------+------------------------------------------------------+----------+
2.6.6.7 Start the services and verify
# Start the Nova services
[root@cotroller-node ~]# systemctl enable --now openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# Verify that Nova responds
[root@cotroller-node ~]# nova list
+--------------------------------------+--------+---------+------------+-------------+------------------------+
| ID                                   | Name   | Status  | Task State | Power State | Networks               |
+--------------------------------------+--------+---------+------------+-------------+------------------------+
+--------------------------------------+--------+---------+------------+-------------+------------------------+
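Beyond the empty instance list, checking the control-plane services themselves is a sharper test; nova-scheduler and nova-conductor should both report state up (nova-compute only appears after the compute node is added in section 3):
[root@cotroller-node ~]# openstack compute service list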
2.6.7 Configure the Horizon dashboard
2.6.7.1 Edit the dashboard configuration file
[root@cotroller-node openstack-dashboard]# grep -Evn "^#|^$" /etc/openstack-dashboard/local_settings
15:import os
17:from django.utils.translation import ugettext_lazy as _
20:from openstack_dashboard.settings import HORIZON_CONFIG
22:DEBUG = False
39:ALLOWED_HOSTS = ['controller.wengsq.com', '172.17.1.117']    # restrict which domain names or IPs may access the dashboard
76:LOCAL_PATH = '/tmp'
87:SECRET_KEY='f54584361b060f7a58f0'
# Use memcached as the cache backend
94:CACHES = {
95:    'default': {
96:        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
97:        'LOCATION': 'controller-node:11211',
98:    },
99:}
# Session storage engine
104:SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
108:EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Configure the host name
118:OPENSTACK_HOST = "controller.wengsq.com"
119:OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# Set the API versions for the individual components
121:OPENSTACK_API_VERSIONS = {
122:"identity": 3,
123:"image": 2,
124:"volume": 3,
126:}
# Set the default keystone domain and role
127:WEBROOT='/dashboard'
128:OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
129:OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
130:OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
# When using a provider network, OPENSTACK_NEUTRON_NETWORK must be changed to look like this
134:OPENSTACK_NEUTRON_NETWORK = {
135:    'enable_auto_allocated_network': False,
136:    'enable_distributed_router': False,
137:    'enable_fip_topology_check': True,
138:    'enable_ha_router': False,
139:    'enable_ipv6': True,
140:    # TODO(amotoki): Drop OPENSTACK_NEUTRON_NETWORK completely from here.
141:    # enable_quotas has the different default value here.
142:    'enable_quotas': True,
143:    'enable_rbac_policy': True,
144:    'enable_router': True,
146:    'default_dns_nameservers': [],
147:    'supported_provider_types': ['*'],
148:    'segmentation_id_range': {},
149:    'extra_provider_types': {},
150:    'supported_vnic_types': ['*'],
151:    'physical_networks': [],
153:}
157:TIME_ZONE = "Asia/Shanghai"
2.6.7.2 Edit the dashboard httpd configuration file
[root@cotroller-node openstack-dashboard]# cat /etc/httpd/conf.d/openstack-dashboard.conf
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}        # add this line
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
# Restart the httpd and memcached services
[root@cotroller-node ~]# systemctl restart httpd memcached

Note: all the core components are now configured and running, so the OpenStack dashboard can be opened in a web browser.
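Before reaching for a browser, a quick reachability check from the shell can save a round trip (assumes WEBROOT='/dashboard' as configured above; an HTTP 200 is expected):
[root@cotroller-node ~]# curl -s -o /dev/null -w '%{http_code}\n' http://controller-node/dashboard/auth/login/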
[Dashboard screenshot]

3. Configuring the OpenStack Compute Node

Because the compute node only has to create virtual machines and talk to the controller node, it needs far fewer components than the controller. The main ones are:

  • Neutron, for VM networking
  • Cinder, for VM storage
  • Nova, for VM management

3.1 Deploy the Neutron networking component

3.1.1 Install the packages required by the networking component
[root@computer-node1 ~]# yum install openstack-neutron-linuxbridge ebtables ipset conntrack-tools -y
3.1.2 Edit the neutron configuration file
[root@computer-node1 ~]# grep -Evn "^#|^$" /etc/neutron/neutron.conf
1:[DEFAULT]
2:transport_url = rabbit://openstack:admin@controller-node
3:auth_strategy = keystone
255:[database]
273:connection = mysql+pymysql://neutron:admin@controller-node/neutron
360:[keystone_authtoken]
375:www_authenticate_uri = http://controller-node:5000
376:auth_url = http://controller-node:5000
377:memcached_servers = controller-node:11211
378:service_token_roles = service
379:service_token_roles_required = true
380:auth_type = password
381:project_domain_name = default
382:user_domain_name = default
383:project_name = service
384:username = neutron
385:password=neutron
532:[oslo_concurrency]
534:lock_path = /var/lib/neutron/tmp
# Edit the linuxbridge_agent.ini file
[root@computer-node1 ~]# grep -Evn "^#|^$" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
1:[DEFAULT]
3:[linux_bridge]
4:physical_interface_mappings = provider:ens33
6:[vxlan]
7:enable_vxlan = false
9:[securitygroup]
10:enable_security_group = true
11:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
3.1.3 Load the br_netfilter module at boot
3.1.3.1 Edit the /etc/modules-load.d/neutron.conf file
[root@computer-node1 ~]# cat /etc/modules-load.d/neutron.conf
br_netfilter
[root@computer-node1 ~]# modprobe br_netfilter
# Verify
[root@computer-node1 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
3.1.3.2 Edit /etc/sysctl.conf
[root@computer-node1 ~]# echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
[root@computer-node1 ~]# echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
[root@computer-node1 ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@computer-node1 ~]# sysctl -p
# Start the network service
[root@computer-node1 ~]# systemctl --now enable neutron-linuxbridge-agent
[root@computer-node1 ~]# systemctl status neutron-linuxbridge-agent
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2024-04-20 10:44:52 CST; 1s ago
  Process: 7043 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 7048 (neutron-linuxbr)
    Tasks: 1
   CGroup: /system.slice/neutron-linuxbridge-agent.service
           └─7048 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron...
Apr 20 10:44:52 computer-node1 systemd[1]: Stopped OpenStack Neutron Linux Bridge Agent.
Apr 20 10:44:52 computer-node1 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent...
Apr 20 10:44:52 computer-node1 neutron-enable-bridge-firewall.sh[7043]: net.bridge.bridge-nf-call-iptables = 1
Apr 20 10:44:52 computer-node1 neutron-enable-bridge-firewall.sh[7043]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 20 10:44:52 computer-node1 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
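Back on the controller, the new agent should now appear alongside the controller's own agents, all alive; a quick check:
[root@cotroller-node ~]# openstack network agent list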

3.2 存储配置

计算节点支持使用多种类型的后端,例如直接使用物理机上的磁盘,或者使用网络存储。本次测试环
境使用的是本地磁盘上的lvm卷,此时对于虚拟机来说需要两个独立的lvm卷,一个给Nova使用,用于
存放虚拟机系统盘,一个给Cinder使用,用于存放虚拟机数据盘。因此本次计算节点使用两块300G的
磁盘/dev/sdb和/dev/sdc,首先来创建LVM后端需要的vg。

3.2.1 Install the packages required for storage
[root@computer-node1 ~]# yum install -y lvm2 device-mapper-persistent-data openstack-cinder targetcli python-keystone 
3.2.2 Create the volume groups

First create the PVs. In this setup both nova and cinder use an LVM backend, so create a PV on each of /dev/sdb and /dev/sdc.

(1) Create the physical volumes (PVs)
[root@computer-node1 ~]# pvcreate /dev/sdb
[root@computer-node1 ~]# pvcreate /dev/sdc
# Verify that the PVs were created
[root@computer-node1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               cinder-volumes
  PV Size               200.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              51199
  Free PE               2511
  Allocated PE          48688
  PV UUID               8IODw8-KnVj-fHt9-HC6W-06ns-GDxL-Fvx6Mk

  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               nova-volumes
  PV Size               200.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              51199
  Free PE               51199
  Allocated PE          0
  PV UUID               Q5cMWr-ioYZ-lv6z-B7U1-keUx-ZU35-41kzQx
(2) Create the volume groups
[root@computer-node1 ~]# vgcreate nova-volumes /dev/sdb    # the VG is named nova-volumes
[root@computer-node1 ~]# vgcreate cinder-volumes /dev/sdc  # the VG is named cinder-volumes
# Verify that the VGs were created
[root@computer-node1 ~]# vgdisplay
  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <200.00 GiB
  PE Size               4.00 MiB
  Total PE              51199
  Alloc PE / Size       48688 / <190.19 GiB
  Free  PE / Size       2511 / <9.81 GiB
  VG UUID               2v3RK6-OwKb-WGxy-C4gR-ctqL-hxLr-R02HWX

  --- Volume group ---
  VG Name               nova-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <200.00 GiB
  PE Size               4.00 MiB
  Total PE              51199
  Alloc PE / Size       0 / 0
  Free  PE / Size       51199 / <200.00 GiB
  VG UUID               Fztufk-3USv-opng-Czyo-2Dmt-Fiuj-FTc7RO
3.2.3 Edit the LVM configuration

Open /etc/lvm/lvm.conf and add the following filter in the devices section:

global_filter = [ "a|/dev/sd*|", "r/.*/" ]

Note: in this filter, "a" means accept: only the physical /dev/sd* disks are accepted as backend storage volumes. "r" means reject and ".*" is a wildcard, so every other device under /dev/ is rejected as a backend storage volume.
Also, do not use the filter option described in the official documentation: it has been deprecated because of functional problems, and the current global_filter must be used instead.

3.2.4 Edit the cinder configuration
[root@computer-node1 ~]# grep -Evn "^#|^$" /etc/cinder/cinder.conf
1:[DEFAULT]
2:backup_ceph_user = cinder-backup2
3:backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
5:transport_url = rabbit://openstack:admin@controller-node
6:auth_strategy = keystone
7:glance_api_servers = http://controller-node:9292
8:enabled_backends = lvm
11:[lvm]
12:target_helper = lioadm
13:target_protocol = iscsi
14:target_ip_address = 172.17.1.118
15:volume_backend_name = LVM
16:volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
17:volume_group = cinder-volumes
18:volumes_dir = $state_path/volumes
3855:[database]
3856:connection = mysql+pymysql://cinder:admin@controller-node/cinder
4110:[keystone_authtoken]
4111:www_authenticate_uri = http://controller-node:5000
4112:auth_url = http://controller-node:5000
4113:memcached_servers = controller-node:11211
4114:auth_type = password
4115:project_domain_name = default
4116:user_domain_name = default
4117:project_name = service
4118:username = cinder
4119:password = cinder
# Start the LVM service
[root@computer-node1 ~]# systemctl --now enable lvm2-lvmetad.service
# Start the cinder-volume service
[root@computer-node1 ~]# systemctl --now enable openstack-cinder-volume
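From the controller, the new LVM backend should register within a minute or so; expect a cinder-volume service called computer-node1@lvm with state up:
[root@cotroller-node ~]# openstack volume service list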

Note: the cinder-backup service currently supports the following backend stores:

  • Ceph, block storage
  • GCS, object storage
  • GlusterFS, file storage
  • Swift, object storage

The example above uses the Ceph backend, so two additional files, ceph.conf and ceph.client.cinder-backup2.keyring, must be placed under the /etc/ceph/ directory. Their purposes are:

  • ceph.conf, the connection configuration for the Ceph cluster
  • ceph.client.cinder-backup2.keyring, the Ceph authentication keyring; cinder-backup uses it to connect to the Ceph pool where VM snapshots are stored.

3.3 Compute configuration

3.3.1 Install the packages required on the compute node
[root@computer-node1 ~]# yum install openstack-nova-compute
3.3.2 Edit the nova configuration
[root@computer-node1 ~]# grep -Ev "^#|^$" /etc/nova/nova.conf -n
1:[DEFAULT]
2:my_ip= 172.17.1.118
3:enabled_apis=osapi_compute,metadata
4:transport_url = rabbit://openstack:admin@controller-node
1626:[api]
1627:api_servers=http://controller-node:9292
2041:[cinder]
2042:catalog_info=volumev3::internalURL
2043:os_region_name=RegionOne
2044:auth_type=password
2045:auth_url=http://controller-node:5000
2046:project_domain_name=Default
2047:username=cinder
2048:user_domain_name=Default
2049:password=cinder
2631:[glance]
2633:api_servers=http://controller-node:9292
3207:[keystone_authtoken]
3208:www_authenticate_uri=http://controller-node:5000
3209:auth_url=http://controller-node:5000
3210:memcached_servers=controller-node:11211
3211:auth_type=password
3212:project_domain_name = default
3213:user_domain_name = default
3214:project_name = service
3215:username = nova
3216:password = nova
3377:[libvirt]
# use the two settings below when the host machine is itself a VMware virtual machine (nested virtualization)
3378:virt_type=qemu
3379:cpu_mode=none
3380:snapshot_image_format=qcow2
3381:images_type=lvm
3382:images_volume_group=nova-volumes  # LVM backend: images_volume_group must reference the nova-volumes VG created earlier
3998:[neutron]
3999:auth_url = http://controller-node:5000
4000:auth_type = password
4001:project_domain_name = default
4002:user_domain_name = default
4003:region_name = RegionOne
4004:project_name = service
4005:username = neutron
4006:password = neutron
4769:[placement]
4770:region_name = RegionOne
4771:project_domain_name = Default
4772:project_name = service
4773:auth_type=password
4774:user_domain_name = default
4775:auth_url = http://controller-node:5000/v3
4776:username = placement
4777:password = placement
5904:[vnc]
5905:enabled=true
5906:server_listen=172.17.1.118
5907:server_proxyclient_address = $my_ip
5908:novncproxy_base_url=http://controller-node:6080/vnc_auto.html
5909:novncproxy_host=172.17.1.118
# Start the services
[root@computer-node1 ~]# systemctl --now enable libvirtd.service openstack-nova-compute.service

Note: the compute node is now fully configured; the remaining steps are performed on the controller node.

3.3.3 Add the nova node to the cell database (run on the controller node)
(1) On the controller, check the status of the newly added nova compute node
[root@cotroller-node ~]# openstack compute service list --service nova-compute
+----+--------------+----------------+------+---------+-------+----------------------------+
| ID | Binary       | Host           | Zone | Status  | State | Updated At                 |
+----+--------------+----------------+------+---------+-------+----------------------------+
|  6 | nova-compute | computer-node1 | nova | enabled | up    | 2024-04-20T04:15:34.000000 |
+----+--------------+----------------+------+---------+-------+----------------------------+
(2) Register the compute node into the system
[root@cotroller-node ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
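Instead of running discover_hosts by hand every time a compute node is added, the controller can also poll for new hosts periodically; a sketch of the relevant nova.conf setting on the controller (the 300-second interval is an arbitrary choice):
[scheduler]
discover_hosts_in_cells_interval = 300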
3.3.4 Create a virtual machine to verify the deployment
3.3.4.1 Create an instance

[screenshot]

3.3.4.2 Create the virtual machine

[screenshots]
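The same flow can also be driven from the CLI instead of the dashboard; a sketch using the cirros image and flat_net created earlier (the flavor name and sizes are arbitrary):
[root@cotroller-node ~]# openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny
[root@cotroller-node ~]# openstack server create --image cirros --flavor m1.tiny --network flat_net test-vm
[root@cotroller-node ~]# openstack server list   # wait for the status to reach ACTIVE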

3.3.4.3 Troubleshooting

Symptom:

[screenshot]
Diagnosis:

[root@cotroller-node nova]# tail -n 1000 nova-novncproxy.log

[screenshot of the log output]
Fix, on the controller node:

[root@cotroller-node core]# pwd
/usr/share/novnc/core
[root@cotroller-node core]# ll
total 164
-rw-r--r-- 1 root root   4182 Jan  3  2023 base64.js
drwxr-xr-x 2 root root    106 Apr  6 14:38 decoders
-rw-r--r-- 1 root root   2589 Jan  3  2023 deflator.js
-rw-r--r-- 1 root root  11241 Jan  3  2023 des.js
-rw-r--r-- 1 root root  15831 Jan  3  2023 display.js
-rw-r--r-- 1 root root   1423 Jan  3  2023 encodings.js
-rw-r--r-- 1 root root   1959 Jan  3  2023 inflator.js
drwxr-xr-x 2 root root    182 Apr  6 14:38 input
-rw-r--r-- 1 root root 105544 Jan  3  2023 rfb.js
drwxr-xr-x 2 root root    148 Apr  6 14:38 util
-rw-r--r-- 1 root root  10606 Apr 17 15:50 websock.js
[root@cotroller-node core]# vim websock.js
230     open(uri, protocols) {
231         this.attach(new WebSocket(uri, ['binary','base64']));   // add the two protocols suggested by the log
232     }
# Restart the openstack-nova-novncproxy service
[root@cotroller-node ~]# systemctl restart openstack-nova-novncproxy.service

[screenshot]
