Module A: Private Cloud Platform Deployment and Operations (30 points)
Node (virtualization enabled) | ens33 | ens34 (external) | Configuration
---|---|---|---
controller | 192.168.100.10 | 192.168.200.10 | Hardware: 6 cores / 16 GB RAM; Disks: 150 GB + 80 GB
compute | 192.168.100.20 | 192.168.200.20 | Hardware: 6 cores / 16 GB RAM; Disks: 150 GB + 80 GB
This module is worth 30 points in total; the point breakdown is as follows.
Task 1: OpenStack Private Cloud Platform Setup (11.5 points)
1. Basic environment configuration (0.5 points)
Set the hostname of the controller node to controller and the hostname of the compute node to compute, and edit the hosts file so that the IP addresses map to the hostnames; configure passwordless SSH between the nodes; on the compute node, split the 80 GB data disk into 3 blank partitions (sizes of your choosing) for use by later components.
On the controller node, submit the output of hostnamectl && cat /etc/hosts && cat /root/.ssh/known_hosts to the answer box.
Set the hostnames, running each command on its own node (this could also have been done when creating the VMs):
[root@localhost ~]# hostnamectl set-hostname controller
[root@localhost ~]# hostnamectl set-hostname compute
Configure /etc/hosts and copy it to the other node:
[root@controller ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
[root@controller ~]# scp /etc/hosts compute:/etc/hosts
Generate a key pair with ssh-keygen first if one does not exist, then run the following on both controller and compute:
ssh-copy-id compute
ssh-copy-id controller
ssh-copy-id 192.168.100.10
ssh-copy-id 192.168.100.20
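As a quick sanity check (not required by the task), verify that each node can reach the other without a password:
[root@controller ~]# ssh compute hostname
[root@compute ~]# ssh controller hostname
Each command should print the remote node's hostname without prompting for a password.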
Answer
Static hostname: controller
Icon name: computer-vm
Chassis: vm
Machine ID: 92ea8feb38024795993fe70d0ecdb8e7
Boot ID: 1e9d10ed784a4d9f8fe4a4bf58367ef9
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1160.el7.x86_64
Architecture: x86-64
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
192.168.100.20 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpDmRnUU9ueTwetWpOsKFL0bIZ+A6Iq/gtNGVlnX21a85rq/R/IyZoBiZ/33zPzcO9PRoosdpqi+wekI6XXt+g=
compute ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpDmRnUU9ueTwetWpOsKFL0bIZ+A6Iq/gtNGVlnX21a85rq/R/IyZoBiZ/33zPzcO9PRoosdpqi+wekI6XXt+g=
controller,192.168.100.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKPQGSMVM2vBJtCLhJjI/AVfeVHuwl9mGZHZenc95HUTFqjWLLHRFNrsp81C5uR4Htc1nAxdZMb9ppS77DbrSQs=
192.168.200.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKPQGSMVM2vBJtCLhJjI/AVfeVHuwl9mGZHZenc95HUTFqjWLLHRFNrsp81C5uR4Htc1nAxdZMb9ppS77DbrSQs=
192.168.200.20 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpDmRnUU9ueTwetWpOsKFL0bIZ+A6Iq/gtNGVlnX21a85rq/R/IyZoBiZ/33zPzcO9PRoosdpqi+wekI6XXt+g=
2. Yum repository configuration (0.5 points)
On the controller node, use the provided CentOS-7-x86_64-DVD-2009.iso and openstack-train.tar.gz to configure a local yum repository local.repo; on the compute node, create an FTP repository ftp.repo that uses the controller node as the FTP server.
On the controller node, submit the output of yum repolist && rpm -qa | grep ftp && ssh compute "cat /etc/yum.repos.d/ftp.repo" to the answer box.
Mount the installation media first. (Note: the second fstab entry below mounts an IaaS ISO from the author's own environment; with the provided openstack-train.tar.gz you would instead extract the archive and point the [iaas] baseurl at the extracted directory.)
[root@controller ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Mar 14 17:38:00 2025
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_controller-root / xfs defaults 0 0
UUID=a0e69903-82c4-4a23-a61e-01b62aea29c3 /boot xfs defaults 0 0
/dev/mapper/centos_controller-home /home xfs defaults 0 0
/dev/mapper/centos_controller-swap swap swap defaults 0 0
/opt/CentOS-7-x86_64-DVD-2009.iso /mnt/centos iso9660 defaults 0 0
/opt/kubernetes_v2.1.iso /mnt/iaas iso9660 defaults 0 0
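To apply the new fstab entries without waiting for a reboot, the mount points can be created and mounted right away (a quick sketch matching the fstab above; add loop to the mount options if mount complains about the ISO files):
[root@controller ~]# mkdir -p /mnt/centos /mnt/iaas
[root@controller ~]# mount -a
[root@controller ~]# df -h | grep /mnt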
Write the local yum repo file (file locations are up to you):
[root@controller ~]# rm -rf /etc/yum.repos.d/*
[root@controller ~]# vi /etc/yum.repos.d/local.repo
[centos]
baseurl=file:///mnt/centos/
name=centos
enabled=1
gpgcheck=0
[iaas]
baseurl=file:///mnt/iaas/iaas-repo/
name=iaas
enabled=1
gpgcheck=0
Copy the repo file to compute (saving it there as ftp.repo), then reboot controller so the fstab mounts take effect:
[root@controller ~]# scp /etc/yum.repos.d/local.repo compute:/etc/yum.repos.d/ftp.repo
[root@controller ~]# reboot
Install and configure vsftpd:
[root@controller ~]# yum install vsftpd vim -y
[root@controller ~]# vim /etc/vsftpd/vsftpd.conf
Add one line (the path is up to you; ideally it exposes both the iaas and centos directories):
anon_root=/mnt/
Enable the service at boot and start it:
[root@controller ~]# systemctl enable vsftpd
[root@controller ~]# systemctl restart vsftpd
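A quick hedged check that the FTP server is actually serving the repositories (curl ships with the base install; if the listing hangs, make sure firewalld is stopped or disabled on controller):
[root@compute ~]# curl ftp://controller/
The output should list the centos and iaas directories under the anonymous root.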
Edit the repo file on the compute node (the file copied over above is rewritten to use FTP URLs):
[root@compute ~]# vi /etc/yum.repos.d/ftp.repo
[centos]
baseurl=ftp://controller/centos/
name=centos
enabled=1
gpgcheck=0
[iaas]
baseurl=ftp://controller/iaas/iaas-repo/
name=iaas
enabled=1
gpgcheck=0
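Then confirm from compute that both repositories resolve (a sanity check, not required for the answer):
[root@compute ~]# yum clean all && yum repolist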
At this point both controller and compute can install packages. A few convenience packages make the rest of the tasks easier (run on both nodes):
[root@controller ~]# yum install -y vim lsof net-tools bash-com*
Answer
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
源标识 源名称 状态
centos centos 4,070
iaas iaas 954
repolist: 5,024
vsftpd-3.0.2-28.el7.x86_64
[centos]
baseurl=ftp://controller/centos/
name=centos
enabled=1
gpgcheck=0
[iaas]
baseurl=ftp://controller/iaas/iaas-repo/
name=iaas
enabled=1
gpgcheck=0
3. Base installation (0.5 points)
Install the openstack-shell package on both the controller and compute nodes, and set the basic variables in the script file on both nodes (the script file is /root/variable.sh) according to Table 2; set the remaining variables as appropriate for your environment. Once configured, run the openstack-completion.sh script to complete the base installation.
On the controller node, submit the output of cat /root/variable.sh | grep -Ev '^$|#|000000' && systemctl status chronyd to the answer box.
Use fdisk to partition the second 80 GB disk as required; the sizes are up to you. These partitions were actually added after the fact, because later tasks need four data partitions (the original note suggests 4 x 20 GB, though the layout actually shown below ends up as 30 GB / 30 GB / 10 GB / 5 GB, with sdb4 being the extended partition).
[root@compute ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 7.9G 0 lvm [SWAP]
└─centos-home 253:2 0 41.1G 0 lvm /home
sdb 8:16 0 80G 0 disk
├─sdb1 8:17 0 30G 0 part
├─sdb2 8:18 0 30G 0 part
├─sdb3 8:19 0 10G 0 part
├─sdb4 8:20 0 1K 0 part
└─sdb5 8:21 0 5G 0 part
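If you prefer to script the partitioning, a non-interactive parted sketch along these lines gives an equivalent result (sizes are only an example; this creates four primary partitions sdb1-sdb4 instead of the extended/logical layout above, which works just as well as long as the variables later on point at the right devices):
[root@compute ~]# parted -s /dev/sdb mklabel msdos
[root@compute ~]# parted -s /dev/sdb mkpart primary 1MiB 30GiB
[root@compute ~]# parted -s /dev/sdb mkpart primary 30GiB 60GiB
[root@compute ~]# parted -s /dev/sdb mkpart primary 60GiB 70GiB
[root@compute ~]# parted -s /dev/sdb mkpart primary 70GiB 75GiB
[root@compute ~]# lsblk /dev/sdb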
Install the required package on both nodes:
[root@controller ~]# yum install openstack-shell -y
[root@compute ~]# yum install openstack-shell -y
[root@controller ~]# vim variable.sh
In vim, use the substitution commands :%s/^#//g and :%s/PASS=/PASS=000000/g to strip all the leading comment markers and set every password to 000000, then fill in the remaining blank settings (IP addresses, interface names, disks, and any variables the pattern misses) by hand.
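If you prefer, the same two edits can be scripted (a sketch equivalent to the vim substitutions above):
[root@controller ~]# sed -i -e 's/^#//' -e 's/PASS=/PASS=000000/g' /root/variable.sh
After editing, the file should look like this: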
HOST_IP=192.168.100.10
HOST_PASS=000000
HOST_NAME=controller
HOST_IP_NODE=192.168.100.20
HOST_PASS_NODE=000000
HOST_NAME_NODE=compute
network_segment_IP=192.168.100.0/24
RABBIT_USER=openstack
RABBIT_PASS=000000
DB_PASS=000000
DOMAIN_NAME=demo
ADMIN_PASS=000000
DEMO_PASS=000000
KEYSTONE_DBPASS=000000
GLANCE_DBPASS=000000
GLANCE_PASS=000000
NOVA_DBPASS=000000
NOVA_PASS=000000
NEUTRON_DBPASS=000000
NEUTRON_PASS=000000
METADATA_SECRET=
INTERFACE_IP_HOST=192.168.100.10
INTERFACE_IP_NODE=192.168.100.20
INTERFACE_NAME_HOST=ens34
INTERFACE_NAME_NODE=ens34
Physical_NAME=provider
minvlan=100
maxvlan=200
CINDER_DBPASS=000000
CINDER_PASS=000000
BLOCK_DISK=sdb1
SWIFT_PASS=000000
OBJECT_DISK=sdb2
STORAGE_LOCAL_NET_IP=192.168.100.20
HEAT_DBPASS=000000
HEAT_PASS=000000
CEILOMETER_DBPASS=000000
CEILOMETER_PASS=000000
MANILA_DBPASS=000000
MANILA_PASS=000000
SHARE_DISK=sdb3
CLOUDKITTY_DBPASS=000000
CLOUDKITTY_PASS=000000
BARBICAN_DBPASS=000000
BARBICAN_PASS=000000
Copy the configuration file to the compute node:
[root@controller ~]# scp variable.sh compute:/root/variable.sh
Run the installation on both nodes:
[root@controller ~]# openstack-completion.sh
[root@compute ~]# openstack-completion.sh
A reboot of both nodes is recommended once the installation finishes.
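After the reboot, a quick hedged sanity check that the time synchronization set up by the scripts is working:
[root@compute ~]# chronyc sources
If everything is as expected, compute lists controller (192.168.100.10) as its time source, consistent with the chronyd status shown in the answer below.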
Answer
HOST_IP=192.168.100.10
HOST_NAME=controller
HOST_IP_NODE=192.168.100.20
HOST_NAME_NODE=compute
network_segment_IP=192.168.100.0/24
RABBIT_USER=openstack
DOMAIN_NAME=demo
METADATA_SECRET=
INTERFACE_IP_HOST=192.168.100.10
INTERFACE_IP_NODE=192.168.100.20
INTERFACE_NAME_HOST=ens34
INTERFACE_NAME_NODE=ens34
Physical_NAME=provider
minvlan=100
maxvlan=200
BLOCK_DISK=sdb1
OBJECT_DISK=sdb2
STORAGE_LOCAL_NET_IP=192.168.100.20
SHARE_DISK=sdb3
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since 日 2025-03-16 18:26:45 CST; 1min 20s ago
Docs: man:chronyd(8)
man:chrony.conf(5)
Main PID: 51058 (chronyd)
CGroup: /system.slice/chronyd.service
└─51058 /usr/sbin/chronyd
3月 16 18:26:45 controller systemd[1]: Starting NTP client/server...
3月 16 18:26:45 controller chronyd[51058]: chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK...UG)
3月 16 18:26:45 controller chronyd[51058]: Frequency 1.888 +/- 3.505 ppm read from /var/lib/chr...ift
3月 16 18:26:45 controller systemd[1]: Started NTP client/server.
3月 16 18:26:49 controller chronyd[51058]: Selected source 192.168.100.10
Hint: Some lines were ellipsized, use -l to show in full.
4. Database installation and tuning (1 point)
On the controller node, use the openstack-controller-mysql.sh script to install the MariaDB, Memcached and RabbitMQ services. After the installation, modify /etc/my.cnf to meet the following requirements:
(1) make table names case-insensitive;
(2) set the InnoDB buffer used for caching table indexes and data and for insert buffering to 4 GB;
(3) set the database log buffer to 64 MB;
(4) set the redo log size to 256 MB;
(5) set the number of redo log files in the group to 2.
On the controller node, submit the output of cat /etc/my.cnf | grep -Ev ^'(#|$)' && source /root/variable.sh && mysql -uroot -p$DB_PASS -e "show variables like 'innodb_log%';" to the answer box.
Install the services:
[root@controller ~]# openstack-controller-mysql.sh
Configure MariaDB:
[root@controller ~]# vim /etc/my.cnf
Append at the bottom:
lower_case_table_names = 1
innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 64M
innodb_log_file_size = 256M
innodb_log_files_in_group = 2
Restart the database service (the MariaDB unit name is mariadb):
[root@controller ~]# systemctl restart mariadb
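Besides the innodb_log% variables checked by the answer command, the other settings can be spot-checked the same way (the database root password is 000000, as set via DB_PASS):
[root@controller ~]# mysql -uroot -p000000 -e "show variables like 'lower_case_table_names'; show variables like 'innodb_buffer_pool_size';"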
Answer
[client-server]
[mysqld]
symbolic-links=0
!includedir /etc/my.cnf.d
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
max_connections = 10000
lower_case_table_names = 1
innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 64M
innodb_log_file_size = 256M
innodb_log_files_in_group = 2
+-----------------------------+-----------+
| Variable_name | Value |
+-----------------------------+-----------+
| innodb_log_buffer_size | 67108864 |
| innodb_log_checksums | ON |
| innodb_log_compressed_pages | ON |
| innodb_log_file_size | 268435456 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_log_optimize_ddl | ON |
| innodb_log_write_ahead_size | 8192 |
+-----------------------------+-----------+
5. Keystone installation and use (1 point)
On the controller node, use the openstack-controller-keystone.sh script to install the Keystone service. After the installation, use the appropriate commands to create a user competition with password 000000.
On the controller node, submit the output of source /root/admin-openrc && openstack service list && openstack user list to the answer box.
Install the service:
[root@controller ~]# openstack-controller-keystone.sh
Create the user:
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack user create --password 000000 competition
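Optionally confirm the new account before running the answer command (just a check):
[root@controller ~]# openstack user show competition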
Answer
+----------------------------------+----------+----------+
| ID | Name | Type |
+----------------------------------+----------+----------+
| ede5eaacb309417c98e5b3d8303339d1 | keystone | identity |
+----------------------------------+----------+----------+
+----------------------------------+-------------+
| ID | Name |
+----------------------------------+-------------+
| de239fd69a1242e98383be54e994ea8a | admin |
| 66224f9d37fc4522a858ec2b8336f6a9 | demo |
| f633e9478b874141ad11256f24757e8d | competition |
+----------------------------------+-------------+
6. Glance installation and use (1 point)
On the controller node, use the openstack-controller-glance.sh script to install the Glance service. Using the CLI, upload the provided cirros-0.3.4-x86_64-disk.img image to the platform, name it cirros, and set the minimum disk required to boot to 10 GB and the minimum RAM required to boot to 1 GB.
On the controller node, submit the output of source /root/admin-openrc && openstack-service status glance && openstack image show cirros | sed s/[[:space:]]//g to the answer box.
Install the service:
[root@controller ~]# openstack-controller-glance.sh
Upload the image (the image path depends on where you put it; here it is taken from the mounted IaaS media):
[root@controller ~]# openstack image create --container-format bare --disk-format qcow2 --min-disk 10 --min-ram 1024 --public cirros < /mnt/iaas/images/cirros-0.3.4-x86_64-disk.img
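A quick check that the image is in the active state before submitting:
[root@controller ~]# openstack image list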
Answer
MainPID=11638 Id=openstack-glance-api.service ActiveState=active
MainPID=11639 Id=openstack-glance-registry.service ActiveState=active
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|Field|Value|
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|checksum|ee1eca47dc88f4879d8a229cc70a07c6|
|container_format|bare|
|created_at|2025-03-16T10:56:31Z|
|disk_format|qcow2|
|file|/v2/images/4921056f-69d6-4c5e-84cc-73f0690935d7/file|
|id|4921056f-69d6-4c5e-84cc-73f0690935d7|
|min_disk|10|
|min_ram|1024|
|name|cirros|
|owner|acde114387c2494196a1c0e90c22545c|
|properties|os_hash_algo='sha512',os_hash_value='1b03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f739d1b5eb4c4ca24d38fdc6f4f9d8247a2bc64db25d6bbdb2',os_hidden='False'|
|protected|False|
|schema|/v2/schemas/image|
|size|13287936|
|status|active|
|tags||
|updated_at|2025-03-16T10:56:31Z|
|virtual_size|None|
|visibility|public|
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
7. Nova installation (1 point)
On the controller node and the compute node, use the openstack-controller-nova.sh and openstack-compute-nova.sh scripts respectively to install the Nova service. After the installation, modify the relevant Nova configuration file to fix the problem where an instance that waits too long for its network port times out during boot, fails to obtain an IP address, and errors out.
On the controller node, submit the output of source admin-openrc && openstack-service status nova && cat /etc/nova/nova.conf | grep -Ev ^'(#|$|\[)' to the answer box.
[root@controller ~]# openstack-controller-nova.sh
[root@compute ~]# openstack-compute-nova.sh
[root@controller ~]# vim /etc/nova/nova.conf
Find vif_plugging_is_fatal, uncomment it, and set it to false.
Restart the services:
[root@controller ~]# systemctl restart openstack-nova-*
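To confirm the change took effect, a quick grep is enough; note that the same option is often also set in /etc/nova/nova.conf on the compute node, since nova-compute is the service that waits for the vif-plugged event:
[root@controller ~]# grep -n "^vif_plugging_is_fatal" /etc/nova/nova.conf
[root@controller ~]# openstack-service status nova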
Answer
MainPID=14227 Id=openstack-nova-api.service ActiveState=active
MainPID=14223 Id=openstack-nova-conductor.service ActiveState=active
MainPID=14221 Id=openstack-nova-novncproxy.service ActiveState=active
MainPID=14225 Id=openstack-nova-scheduler.service ActiveState=active
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller
my_ip = 192.168.100.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vif_plugging_is_fatal=false
auth_strategy = keystone
connection = mysql+pymysql://nova:000000@controller/nova_api
connection = mysql+pymysql://nova:000000@controller/nova
api_servers = http://controller:9292
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = demo
user_domain_name = demo
project_name = service
username = nova
password = 000000
lock_path = /var/lib/nova/tmp
region_name = RegionOne
project_domain_name = demo
project_name = service
auth_type = password
user_domain_name = demo
auth_url = http://controller:5000/v3
username = placement
password = 000000
discover_hosts_in_cells_interval = 300
enabled = true
server_listen = 192.168.100.10
server_proxyclient_address = 192.168.100.10
8. Neutron installation (1 point)
Use the provided openstack-controller-neutron.sh and openstack-compute-neutron.sh scripts to install the Neutron service on the controller and compute nodes respectively.
On the controller node, submit the output of source /root/admin-openrc && openstack-service status neutron && openstack network agent list to the answer box.
[root@controller ~]# openstack-controller-neutron.sh
[root@compute ~]# openstack-compute-neutron.sh
Answer
MainPID=15715 Id=neutron-dhcp-agent.service ActiveState=active
MainPID=15718 Id=neutron-l3-agent.service ActiveState=active
MainPID=15724 Id=neutron-linuxbridge-agent.service ActiveState=active
MainPID=15716 Id=neutron-metadata-agent.service ActiveState=active
MainPID=15713 Id=neutron-server.service ActiveState=active
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+---------------