I. Installing the Virtual Machine
1. Download and install VMware
(Baidu Netdisk download; extraction code: 448v)
2. Create the virtual machine in VMware
(1). Click "Create a New Virtual Machine".
(2). Choose "Custom" and step through the wizard; when asked about the guest operating system, select "Install the operating system later".
(3). Configure the number of processors and the amount of virtual machine memory.
(4). For the network type, choose NAT.
(5). On the summary screen, choose "Edit virtual machine settings", point the CD/DVD drive at the ISO image file, and load the image.
Complete the Linux installation.
II. Configuring the Virtual Machines (both nodes must be configured)
1. Node plan
Node                     | IP address
OpenStack01 (controller) | 192.168.51.138
OpenStack02 (compute)    | 192.168.51.139
2. Node configuration
(1). Set a static IP
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="67e289c5-05ec-4b93-977b-5ca39399cddc"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.51.138
NETMASK=255.255.255.0
GATEWAY=192.168.51.2
DNS1=8.8.4.4
DNS2=8.8.8.8
(2). Restart the network
[root@localhost ~]# systemctl restart network
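Optionally, confirm the static address took effect before the ping test (ens33 is the interface configured above; the inet line should show 192.168.51.138/24):
[root@localhost ~]# ip addr show ens33 | grep "inet "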
(3). Test the network
[root@localhost ~]# ping www.baidu.com
PING www.wshifen.com (103.235.46.39) 56(84) bytes of data.
64 bytes from 103.235.46.39 (103.235.46.39): icmp_seq=1 ttl=128 time=80.1 ms
64 bytes from 103.235.46.39 (103.235.46.39): icmp_seq=2 ttl=128 time=78.2 ms
III. Preparing the Controller Node System Environment
1. Configure name resolution
(1). Set the hostname
[root@localhost ~]# hostname openstack01.zuiyoujie.com
[root@localhost ~]# echo "openstack01.zuiyoujie.com" > /etc/hostname
[root@localhost ~]# cat /etc/hostname
openstack01.zuiyoujie.com
(2). Configure name resolution in /etc/hosts
[root@localhost ~]# vi /etc/hosts
192.168.51.138 openstack01.zuiyoujie.com controller
192.168.51.139 openstack02.zuiyoujie.com compute02 block02 object02
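With those entries in place, the short names should resolve locally; a quick optional sanity check:
[root@localhost ~]# ping -c 2 controller
[root@localhost ~]# ping -c 2 compute02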
2. Disable the firewall and SELinux
(1). Stop and disable firewalld
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
(2). Disable SELinux
[root@localhost ~]# setenforce 0
[root@localhost ~]# getenforce
Permissive
[root@localhost ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
[root@localhost ~]# grep SELINUX=disabled /etc/sysconfig/selinux
SELINUX=disabled
3. Configure time synchronization
(1). Install the time synchronization service on the controller
[root@localhost ~]# yum install chrony -y
(2). Edit the configuration file and confirm it contains the following
[root@localhost ~]# vim /etc/chrony.conf
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
allow 192.168.51.0/24
(3). Restart the NTP service and enable it at boot
[root@localhost ~]# systemctl restart chronyd.service
[root@localhost ~]# systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled)
   Active: active (running) since 四 2021-11-18 15:38:16 CST; 4s ago
[root@localhost ~]# systemctl enable chronyd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/chronyd.service to /usr/lib/systemd/system/chronyd.service.
[root@localhost ~]# systemctl list-unit-files |grep chronyd.service
chronyd.service                               enabled
(4). Set the time zone and synchronize the time
[root@localhost ~]# timedatectl set-timezone Asia/Shanghai
[root@localhost ~]# chronyc sources
210 Number of sources = 6
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? time.cloudflare.com           0   7     0     -     +0ns[   +0ns] +/-    0ns
^- tick.ntp.infomaniak.ch        1   6    37    24  -2371us[-2371us] +/-  165ms
^? sv1.ggsrv.de                  2   6     1    24    +19ms[  +19ms] +/-   94ms
^? time.cloudflare.com           0   7     0     -     +0ns[   +0ns] +/-    0ns
^* 120.25.115.20                 2   6    37    27    +19ms[+4205us] +/-   41ms
^- 203.107.6.88                  2   6    37    27  +7036us[+7036us] +/-   40ms
[root@localhost ~]# timedatectl status
      Local time: 四 2021-11-18 15:40:03 CST
  Universal time: 四 2021-11-18 07:40:03 UTC
        RTC time: 四 2021-11-18 07:40:04
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
(5). Reboot the VM so the configuration takes effect
[root@localhost ~]# reboot
4. Configure the yum repositories
(1). Configure the Aliyun base and EPEL repositories (back up any existing .repo files first, then download the replacements)
[root@openstack01 ~]# mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
[root@openstack01 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@openstack01 ~]# mv -f /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
[root@openstack01 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
(2). Install the OpenStack Rocky repository
[root@openstack01 ~]# yum install centos-release-openstack-rocky -y
[root@openstack01 ~]# yum clean all
[root@openstack01 ~]# yum makecache
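To confirm the Rocky repository is actually enabled before updating, you can list it (repo names can vary slightly by mirror):
[root@openstack01 ~]# yum repolist enabled | grep -i openstack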
(3). Update the installed packages
[root@openstack01 ~]# yum update -y
(4). Install the OpenStack client software
[root@openstack01 ~]# yum install python-openstackclient openstack-selinux -y
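A quick way to confirm the client installed correctly is to print its version (the exact version string depends on the Rocky packages pulled in):
[root@openstack01 ~]# openstack --version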
5. Install the database on the controller node
(1). Install the MariaDB packages
[root@openstack01 ~]# yum install mariadb mariadb-server MySQL-python python2-PyMySQL -y
(2). Create the OpenStack database configuration file
[root@openstack01 ~]# vi /etc/my.cnf.d/mariadb-server.cnf
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
init-connect = 'SET NAMES utf8'
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# * Galera-related settings
#
(3). Start the database and enable it at boot
[root@openstack01 ~]# systemctl restart mariadb.service
[root@openstack01 ~]# systemctl status mariadb.service
● mariadb.service - MariaDB 10.1 database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
   Active: active (running) since 四 2021-11-18 16:06:10 CST; 2s ago
[root@openstack01 ~]# systemctl enable mariadb.service
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@openstack01 ~]# systemctl list-unit-files |grep mariadb.service
mariadb.service                               enabled
(4). Secure the database installation and restart it
[root@openstack01 ~]# /usr/bin/mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
[root@openstack01 ~]# systemctl restart mariadb.service
[root@openstack01 ~]# openssl rand -hex 10
0eff5e993a6b28de5aa9
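The `openssl rand -hex 10` call above is simply a convenient way to generate strong passwords. This walkthrough uses easy-to-read passwords (123456, keystone, glance, ...) for a lab environment; if you prefer random credentials, a sketch like the following generates one per service (the service list here is purely illustrative):
[root@openstack01 ~]# for svc in keystone glance nova placement; do
>   echo "$svc: $(openssl rand -hex 10)"
> done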
(5). Log in to MariaDB and verify access (the per-service databases are created in later sections)
[root@openstack01 ~]# mysql -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)
MariaDB [(none)]> select user,host from mysql.user;
+------+-----------+
| user | host      |
+------+-----------+
| root | 127.0.0.1 |
| root | ::1       |
| root | localhost |
+------+-----------+
3 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
6. Install the RabbitMQ message queue on the controller node
(1). Install rabbitmq-server
[root@openstack01 ~]# yum install rabbitmq-server -y
(2). Start RabbitMQ and enable it at boot
[root@openstack01 ~]# systemctl start rabbitmq-server.service
[root@openstack01 ~]# systemctl status rabbitmq-server.service
● rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; disabled; vendor preset: disabled)
   Active: active (running) since 四 2021-11-18 16:14:55 CST; 2s ago
[root@openstack01 ~]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
[root@openstack01 ~]# systemctl list-unit-files |grep rabbitmq-server.service
rabbitmq-server.service                       enabled
(3). Create the openstack account and password in the message queue
[root@openstack01 ~]# rabbitmqctl add_user openstack openstack
Creating user "openstack"
[root@openstack01 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
[root@openstack01 ~]# rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
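rabbitmqctl can confirm the account and its permissions before moving on (an optional verification step):
[root@openstack01 ~]# rabbitmqctl list_users
[root@openstack01 ~]# rabbitmqctl list_permissions -p /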
(4). Enable the rabbitmq_management plugin for web-based management
[root@openstack01 ~]# rabbitmq-plugins list
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status:   * = running on rabbit@openstack01
 |/
[  ] amqp_client                       3.6.16
[  ] cowboy                            1.0.4
[  ] cowlib                            1.0.2
[  ] rabbitmq_amqp1_0                  3.6.16
[  ] rabbitmq_auth_backend_ldap        3.6.16
[  ] rabbitmq_auth_mechanism_ssl       3.6.16
[  ] rabbitmq_consistent_hash_exchange 3.6.16
[  ] rabbitmq_event_exchange           3.6.16
[  ] rabbitmq_federation               3.6.16
[  ] rabbitmq_federation_management    3.6.16
[  ] rabbitmq_jms_topic_exchange       3.6.16
[  ] rabbitmq_management               3.6.16
[  ] rabbitmq_management_agent         3.6.16
[  ] rabbitmq_management_visualiser    3.6.16
[  ] rabbitmq_mqtt                     3.6.16
[  ] rabbitmq_random_exchange          3.6.16
[  ] rabbitmq_recent_history_exchange  3.6.16
[  ] rabbitmq_sharding                 3.6.16
[  ] rabbitmq_shovel                   3.6.16
[  ] rabbitmq_shovel_management        3.6.16
[  ] rabbitmq_stomp                    3.6.16
[  ] rabbitmq_top                      3.6.16
[  ] rabbitmq_tracing                  3.6.16
[  ] rabbitmq_trust_store              3.6.16
[  ] rabbitmq_web_dispatch             3.6.16
[  ] rabbitmq_web_mqtt                 3.6.16
[  ] rabbitmq_web_mqtt_examples        3.6.16
[  ] rabbitmq_web_stomp                3.6.16
[  ] rabbitmq_web_stomp_examples       3.6.16
[  ] sockjs                            0.3.4
[root@openstack01 ~]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  amqp_client
  cowlib
  cowboy
  rabbitmq_web_dispatch
  rabbitmq_management_agent
  rabbitmq_management
Applying plugin configuration to rabbit@openstack01... started 6 plugins.
[root@openstack01 ~]# systemctl restart rabbitmq-server.service
[root@openstack01 ~]# rabbitmq-plugins list
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status:   * = running on rabbit@openstack01
 |/
[e*] amqp_client                       3.6.16
[e*] cowboy                            1.0.4
[e*] cowlib                            1.0.2
[  ] rabbitmq_amqp1_0                  3.6.16
[  ] rabbitmq_auth_backend_ldap        3.6.16
[  ] rabbitmq_auth_mechanism_ssl       3.6.16
[  ] rabbitmq_consistent_hash_exchange 3.6.16
[  ] rabbitmq_event_exchange           3.6.16
[  ] rabbitmq_federation               3.6.16
[  ] rabbitmq_federation_management    3.6.16
[  ] rabbitmq_jms_topic_exchange       3.6.16
[E*] rabbitmq_management               3.6.16
[e*] rabbitmq_management_agent         3.6.16
[  ] rabbitmq_management_visualiser    3.6.16
[  ] rabbitmq_mqtt                     3.6.16
[  ] rabbitmq_random_exchange          3.6.16
[  ] rabbitmq_recent_history_exchange  3.6.16
[  ] rabbitmq_sharding                 3.6.16
[  ] rabbitmq_shovel                   3.6.16
[  ] rabbitmq_shovel_management        3.6.16
[  ] rabbitmq_stomp                    3.6.16
[  ] rabbitmq_top                      3.6.16
[  ] rabbitmq_tracing                  3.6.16
[  ] rabbitmq_trust_store              3.6.16
[e*] rabbitmq_web_dispatch             3.6.16
[  ] rabbitmq_web_mqtt                 3.6.16
[  ] rabbitmq_web_mqtt_examples        3.6.16
[  ] rabbitmq_web_stomp                3.6.16
[  ] rabbitmq_web_stomp_examples       3.6.16
[  ] sockjs                            0.3.4
[root@openstack01 ~]# lsof -i:15672
COMMAND     PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
beam.smp  65673 rabbitmq   56u  IPv4 199827      0t0  TCP *:15672 (LISTEN)
(5). Test RabbitMQ from a browser
URL: 192.168.51.138:15672
The default username and password are both guest.
If the management login page loads, the installation is complete.
7. Install Memcached on the controller node
(1). Install Memcached (used to cache tokens)
[root@openstack01 ~]# yum install memcached python-memcached -y
(2). Edit the Memcached configuration file
[root@openstack01 ~]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,controller"
(3). Start Memcached and enable it at boot
[root@openstack01 ~]# systemctl start memcached.service
[root@openstack01 ~]# systemctl status memcached.service
● memcached.service - memcached daemon
   Loaded: loaded (/usr/lib/systemd/system/memcached.service; disabled; vendor preset: disabled)
   Active: active (running) since 四 2021-11-18 16:29:19 CST; 4s ago
 Main PID: 67009 (memcached)
    Tasks: 10
   CGroup: /system.slice/memcached.service
           └─67009 /usr/bin/memcached -p 11211 -u memcached -m 64 -c 1024 -l 127.0.0.1,controller
11月 18 16:29:19 openstack01.zuiyoujie.com systemd[1]: Started memcached daemon.
[root@openstack01 ~]# netstat -anptl|grep memcached
tcp        0      0 192.168.51.138:11211    0.0.0.0:*               LISTEN      67009/memcached
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      67009/memcached
[root@openstack01 ~]# systemctl enable memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
[root@openstack01 ~]# systemctl list-unit-files |grep memcached.service
memcached.service                             enabled
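If nc is installed, you can also ask the daemon for its statistics to confirm it answers on the configured address (an optional probe; expect a series of STAT lines):
[root@openstack01 ~]# echo stats | nc -w1 127.0.0.1 11211 | head -n 3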
8. Install the etcd service on the controller node
(1). Install etcd
[root@openstack01 ~]# yum install etcd -y
(2). Edit the etcd configuration file
[root@openstack01 ~]# vim /etc/etcd/etcd.conf    # change the following settings
#[Member]
#ETCD_LISTEN_PEER_URLS="http://192.168.51.138:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.51.138:2379"
ETCD_NAME="controller"
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.51.138:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.51.138:2379"
#ETCD_INITIAL_CLUSTER="controller=http://192.168.51.138:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
#ETCD_INITIAL_CLUSTER_STATE="new"
(3). Start etcd and enable it at boot
[root@openstack01 ~]# systemctl start etcd.service
[root@openstack01 ~]# systemctl status etcd.service
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since 四 2021-11-18 16:48:23 CST; 36s ago
[root@openstack01 ~]# netstat -anptl|grep etcd
tcp        0      0 192.168.51.138:2379     0.0.0.0:*               LISTEN      9625/etcd
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      9625/etcd
tcp        0      0 192.168.51.138:55430    192.168.51.138:2379     ESTABLISHED 9625/etcd
tcp        0      0 192.168.51.138:2379     192.168.51.138:55430    ESTABLISHED 9625/etcd
[root@openstack01 ~]# systemctl enable etcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@openstack01 ~]# systemctl list-unit-files |grep etcd.service
etcd.service                                  enabled
### Note: if the service fails to start here:
Remove etcd with `yum remove etcd`, then reinstall it with `yum install etcd` and redo the configuration.
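Once etcd is running, an optional sanity check is to write and read a key through etcdctl (this assumes the v2 API that the CentOS 7 etcd package's tooling defaults to):
[root@openstack01 ~]# etcdctl --endpoints=http://192.168.51.138:2379 cluster-health
[root@openstack01 ~]# etcdctl --endpoints=http://192.168.51.138:2379 set /test ok
[root@openstack01 ~]# etcdctl --endpoints=http://192.168.51.138:2379 get /test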
At this point the base environment of the controller node is fully configured.
IV. Installing the Keystone Identity Service (Controller Node)
1. Create the Keystone database on the controller node
(1). Create the keystone database and grant privileges
[root@openstack01 ~]# mysql -p123456
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keystone           |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.10 sec)
MariaDB [(none)]> select user,host from mysql.user;
+----------+-----------+
| user     | host      |
+----------+-----------+
| keystone | %         |
| root     | 127.0.0.1 |
| root     | ::1       |
| keystone | localhost |
| root     | localhost |
+----------+-----------+
5 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
2. Install the Keystone packages on the controller node
(1). Install the Keystone packages
[root@openstack01 ~]# yum install openstack-keystone httpd mod_wsgi -y
[root@openstack01 ~]# yum install openstack-keystone python-keystoneclient openstack-utils -y
(2). Quickly set the Keystone configuration
[root@openstack01 ~]# openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@controller/keystone
[root@openstack01 ~]# openstack-config --set /etc/keystone/keystone.conf token provider fernet
Check the effective configuration:
[root@openstack01 ~]# grep '^[a-z]' /etc/keystone/keystone.conf
connection = mysql+pymysql://keystone:keystone@controller/keystone
provider = fernet
3. Initialize and sync the Keystone database
(1). Sync the Keystone database
[root@openstack01 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
(2). Test the connection after the sync completes
[root@openstack01 ~]# mysql -h192.168.51.138 -ukeystone -pkeystone -e "use keystone;show tables;"
(3). Count the synced tables to confirm
[root@openstack01 ~]# mysql -h192.168.51.138 -ukeystone -pkeystone -e "use keystone;show tables;"|wc -l
45
4. Initialize the Fernet key repositories
[root@openstack01 ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@openstack01 ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
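These two commands create key repositories under /etc/keystone; listing them is an easy optional check that the step succeeded (on a fresh setup each directory holds keys named 0 and 1):
[root@openstack01 ~]# ls /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/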
5. Configure and start Apache (httpd)
(1). Edit the main httpd configuration file
[root@openstack01 ~]# vim /etc/httpd/conf/httpd.conf +95
ServerName controller
(2). Configure the virtual host
[root@openstack01 ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
(3). Start httpd and enable it at boot
[root@openstack01 ~]# systemctl start httpd.service
[root@openstack01 ~]# systemctl status httpd.service
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
   Active: active (running) since 四 2021-11-18 21:09:50 CST; 5s ago
[root@openstack01 ~]# netstat -anptl|grep httpd
tcp6       0      0 :::5000                 :::*                    LISTEN      19998/httpd
tcp6       0      0 :::80                   :::*                    LISTEN      19998/httpd
[root@openstack01 ~]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@openstack01 ~]# systemctl list-unit-files |grep httpd.service
httpd.service                                 enabled
6. Bootstrap the Keystone identity service
(1). Create the Keystone bootstrap user, service entity, and API endpoints
[root@openstack01 ~]# keystone-manage bootstrap --bootstrap-password 123456 \
> --bootstrap-admin-url http://controller:5000/v3/ \
> --bootstrap-internal-url http://controller:5000/v3/ \
> --bootstrap-public-url http://controller:5000/v3/ \
> --bootstrap-region-id RegionOne
(2). Temporarily export the admin account variables for management
[root@openstack01 ~]# export OS_PROJECT_DOMAIN_NAME=Default
[root@openstack01 ~]# export OS_PROJECT_NAME=admin
[root@openstack01 ~]# export OS_USER_DOMAIN_NAME=Default
[root@openstack01 ~]# export OS_USERNAME=admin
[root@openstack01 ~]# export OS_PASSWORD=123456
[root@openstack01 ~]# export OS_AUTH_URL=http://controller:5000/v3
[root@openstack01 ~]# export OS_IDENTITY_API_VERSION=3
Check the declared variables:
[root@openstack01 ~]# env |grep OS_
OS_USER_DOMAIN_NAME=Default
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=123456
OS_AUTH_URL=http://controller:5000/v3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=Default
Inspect the Keystone endpoints, project, and user:
[root@openstack01 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                        |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| 1514c7622fea4971a54c2c3196bc99da | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3/ |
| 48fb0cf5c96c49a19e50bf0d75bae08f | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/ |
| 8f666d61be604eb0b45fcba38d6286e9 | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3/ |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
[root@openstack01 ~]# openstack project list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 22dc52a1ad1846beadb471cf4f41327a | admin |
+----------------------------------+-------+
[root@openstack01 ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 4129a331c0a442018622efc169c07bc6 | admin |
+----------------------------------+-------+
7. Create general Keystone entities
(1). Create a Keystone domain named example
[root@openstack01 ~]# openstack domain create --description "An Example Domain" example
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | An Example Domain                |
| enabled     | True                             |
| id          | 0f9da48649694e439d0750efa0f2cdf8 |
| name        | example                          |
| tags        | []                               |
+-------------+----------------------------------+
(2). Create a project named service for the system environment
[root@openstack01 ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | f0120ebcdcb24e38b5ac0302c56a4ef1 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
(3). Create the myproject project with its own user and role
[root@openstack01 ~]# openstack project create --domain default --description "Demo Project" myproject
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 1c216f6865ea4ca9a5297cab12d0cdfc |
| is_domain   | False                            |
| name        | myproject                        |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
(4). Create the myuser user in the default domain
[root@openstack01 ~]# openstack user create --domain default --password-prompt myuser
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 8af2b494dbe645ed9f417da1748e7c9e |
| name                | myuser                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
(5). Create the myrole role
[root@openstack01 ~]# openstack role create myrole
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | f5c8d791cc014b0f83add811e9a2eb04 |
| name      | myrole                           |
+-----------+----------------------------------+
(6). Grant the myrole role to the myuser user on the myproject project
[root@openstack01 ~]# openstack role add --project myproject --user myuser myrole
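`openstack role add` prints nothing on success; to see the assignment you just made, you can list it (an optional verification step):
[root@openstack01 ~]# openstack role assignment list --user myuser --project myproject --names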
8. Verify that Keystone works
(1). Unset the temporary environment variables
[root@openstack01 ~]# unset OS_AUTH_URL OS_PASSWORD
[root@openstack01 ~]# env |grep OS_
OS_USER_DOMAIN_NAME=Default
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=Default
(2). Request an authentication token as the admin user
[root@openstack01 ~]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
Password:
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                  |
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2021-11-18T14:50:09+0000                                                                                                                                               |
| id         | gAAAAABhlloRG4w9zxu16i-A7OXeq4UStxz7Uj83RtOoTUS66_OWL9sopLlGuIEuc_poVtvHcYQOSF3kmBSxL5Y8uQBG6EjLkuQJTBOWD7q9sSBuYngqujyJcWGqfVRVwb8U4BV0LC9aowfOVoLGMS67v5VNB-mLJAfD5UWE12Bl0imzDDZhyhs |
| project_id | 22dc52a1ad1846beadb471cf4f41327a                                                                                                                                       |
| user_id    | 4129a331c0a442018622efc169c07bc6                                                                                                                                       |
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(3). Request an authentication token as the regular user
[root@openstack01 ~]# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name myproject --os-username myuser token issue
Password:
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                  |
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2021-11-18T15:02:11+0000                                                                                                                                               |
| id         | gAAAAABhllzj67qvjhdoai3S9ckucEpnM59iZ-5YmS5bAxcsYIn4EOcgN1hdoHwioFjrcshcobXuYZorh79CDtBbBH-yFeKPe436-z47khIwVYIHuyZCkRICXyOANH3iCVTOqmdvDXHQr1bTAZsAwBEEu5GQQgcxRVv8JclEGjXZdUE23LYAbeU |
| project_id | b24f6271a02a4915b071a8af104cb321                                                                                                                                       |
| user_id    | 1912bf1a2c074889a45f4bd2193387cd                                                                                                                                       |
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
9. Create OpenStack client environment scripts
(1). Create the environment script for the admin user
[root@openstack01 ~]# mkdir -p /server/tools
[root@openstack01 ~]# cd /server/tools
[root@openstack01 tools]# vim keystone-admin-pass.sh    # new file with the following content
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@openstack01 tools]# env |grep OS_
OS_USER_DOMAIN_NAME=Default
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=Default
(2). Create the client environment script for the regular user myuser
[root@openstack01 tools]# vim keystone-myuser-pass.sh    # new file
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
(3). Test the environment script
[root@openstack01 tools]# source keystone-admin-pass.sh
(4). Request an authentication token
[root@openstack01 tools]# openstack token issue
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                  |
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2021-11-18T15:08:44+0000                                                                                                                                               |
| id         | gAAAAABhll5sJL_H1YZ4-EF1kY-YHfqGtLxQKemfHo1DBErarVdoVMNsmAOembNYGfJCeihiRCHS-O1oMOo8i8vXtTecRcMPr1bpojAhPDE_tw8V09IRR4D4LFqFMcAketw2hv3KGTpjqGSG2W8CY-rg1FSf53PXxKXRcXqaPeaE4xap6rBB2mU |
| project_id | 5461c6241b4743ef82b48c62efe275c0                                                                                                                                       |
| user_id    | f86be32740d94614860933ec034adf0b                                                                                                                                       |
+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Output like the above means Keystone is installed and working.
V. Installing the Glance Image Service (Controller Node)
1. Install the Glance image service on the controller
(1). Create the glance database
[root@openstack01 tools]# cd
[root@openstack01 ~]# mysql -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
2. Create the glance user in Keystone
(1). Create the glance user in Keystone
[root@openstack01 ~]# cd /server/tools
[root@openstack01 tools]# source keystone-admin-pass.sh
[root@openstack01 tools]# openstack user create --domain default --password=glance glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | dd93119899a44130b0ea8bc1aa51bb79 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@openstack01 tools]# openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 1912bf1a2c074889a45f4bd2193387cd | myuser |
| dd93119899a44130b0ea8bc1aa51bb79 | glance |
| f86be32740d94614860933ec034adf0b | admin  |
+----------------------------------+--------+
(2). Grant the glance user the admin role on the service project
[root@openstack01 tools]# openstack role add --project service --user glance admin
(3). Create the Glance image service entity
[root@openstack01 tools]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 2034ba4ebefa45f5862e63762826267f |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
[root@openstack01 tools]# openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 2034ba4ebefa45f5862e63762826267f | glance   | image    |
| 517bf50f40b94aadb635b3595cca3ac3 | keystone | identity |
+----------------------------------+----------+----------+
(4). Create the image service API endpoints
[root@openstack01 tools]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6a0d05d369dd430d9f044f66f1125230 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2034ba4ebefa45f5862e63762826267f |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7dcf92c2b3474b1fad4459f2596c796c |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2034ba4ebefa45f5862e63762826267f |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2f991fdda3c6475c8478649956eb3bac |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 2034ba4ebefa45f5862e63762826267f |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                        |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
| 0ab772fbfded46998eae4f3c84310837 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3/ |
| 26b3cb41a2f64fefb5873976505723a2 | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3/ |
| 2f991fdda3c6475c8478649956eb3bac | RegionOne | glance       | image        | True    | admin     | http://controller:9292     |
| 6a0d05d369dd430d9f044f66f1125230 | RegionOne | glance       | image        | True    | public    | http://controller:9292     |
| 7dcf92c2b3474b1fad4459f2596c796c | RegionOne | glance       | image        | True    | internal  | http://controller:9292     |
| 8b704463162648ea985832fd45ac91ff | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/ |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------+
At this point Glance is registered in Keystone and can be installed.
3. Install the Glance software
(1). Check the Python version
[root@openstack01 tools]# python --version
Python 2.7.5
Note: the current Python 3.5 builds have an SSL-related bug, so Python 2.x is recommended.
(2). Install the Glance packages
[root@openstack01 tools]# yum install openstack-glance python-glance python-glanceclient -y
(3). Run the following commands to quickly configure glance-api.conf
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:glance@controller/glance
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glance
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
(4). Run the following commands to quickly configure glance-registry.conf
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:glance@controller/glance
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name Default
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name Default
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glance
[root@openstack01 tools]# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
Check the effective configuration:
[root@openstack01 tools]# grep '^[a-z]' /etc/glance/glance-api.conf
connection = mysql+pymysql://glance:glance@controller/glance
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
flavor = keystone
[root@openstack01 tools]# grep '^[a-z]' /etc/glance/glance-registry.conf
connection = mysql+pymysql://glance:glance@controller/glance
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
flavor = keystone
At this point Glance is configured; its services still need to be started.
4. Sync the Glance database
(1). Initialize and sync the database for the Glance image service
[root@openstack01 tools]# su -s /bin/sh -c "glance-manage db_sync" glance
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1352: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> liberty, liberty initial
INFO  [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO  [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
INFO  [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01
INFO  [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01
INFO  [alembic.runtime.migration] Running upgrade queens_expand01 -> rocky_expand01, add os_hidden column to images table
INFO  [alembic.runtime.migration] Running upgrade rocky_expand01 -> rocky_expand02, add os_hash_algo and os_hash_value columns to images table
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: rocky_expand02, current revision(s): rocky_expand02
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Database migration is up to date. No migration needed.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images
INFO  [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables
INFO  [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01
INFO  [alembic.runtime.migration] Running upgrade queens_contract01 -> rocky_contract01
INFO  [alembic.runtime.migration] Running upgrade rocky_contract01 -> rocky_contract02
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: rocky_contract02, current revision(s): rocky_contract02
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.
(2). Test the connection after the sync completes
[root@openstack01 tools]# mysql -h192.168.51.138 -uglance -pglance -e "use glance;show tables;"
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| alembic_version                  |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| metadef_tags                     |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+
5. Start the Glance image services
[root@openstack01 tools]# systemctl start openstack-glance-api.service openstack-glance-registry.service
[root@openstack01 tools]# systemctl status openstack-glance-api.service openstack-glance-registry.service
● openstack-glance-api.service - OpenStack Image Service (code-named Glance) API server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; disabled; vendor preset: disabled)
   Active: active (running) since 四 2021-11-18 22:39:02 CST; 5s ago
● openstack-glance-registry.service - OpenStack Image Service (code-named Glance) Registry server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-registry.service; disabled; vendor preset: disabled)
   Active: active (running) since 四 2021-11-18 22:39:02 CST; 5s ago
[root@openstack01 tools]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.
[root@openstack01 tools]# systemctl list-unit-files |grep openstack-glance*
openstack-glance-api.service                  enabled
openstack-glance-registry.service             enabled
openstack-glance-scrubber.service             disabled
6. Verify that Glance works
(1). Download a test image
[root@openstack01 tools]# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
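Optionally verify the download before uploading; the md5 should match the checksum Glance reports after the upload (f8ab98ff5e73ebab884d80c9dc9c7290 in the output below):
[root@openstack01 tools]# md5sum cirros-0.3.5-x86_64-disk.img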
(2). Load the admin credentials
[root@openstack01 tools]# source keystone-admin-pass.sh
(3). Upload the image to Glance
[root@openstack01 tools]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2021-11-18T14:44:11Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/86fd8541-c758-4c6e-a543-48f871b48c3b/file |
| id               | 86fd8541-c758-4c6e-a543-48f871b48c3b                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 5461c6241b4743ef82b48c62efe275c0                     |
| properties       | os_hash_algo='sha512', os_hash_value='f0fd1b50420dce4ca382ccfbb528eef3a38bbeff00b54e95e3876b9bafe7ed2d6f919ca35d9046d437c6d2d8698b1174a335fbd66035bb3edc525d2cdb187232', os_hidden='False' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2021-11-18T14:44:11Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
(4). List the images
[root@openstack01 tools]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 86fd8541-c758-4c6e-a543-48f871b48c3b | cirros | active |
+--------------------------------------+--------+--------+
At this point the Glance image service is installed and running.
VI. Installing the Nova Compute Service (Controller Node)
1. Install the Nova compute service on the controller node
(1). Create the Nova databases
[root@openstack01 ~]# mysql -u root -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE DATABASE placement;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| nova               |
| nova_api           |
| nova_cell0         |
| performance_schema |
| placement          |
+--------------------+
9 rows in set (0.00 sec)
MariaDB [(none)]> select user,host from mysql.user;
+-----------+-----------+
| user      | host      |
+-----------+-----------+
| glance    | %         |
| keystone  | %         |
| nova      | %         |
| placement | %         |
| root      | 127.0.0.1 |
| root      | ::1       |
| glance    | localhost |
| keystone  | localhost |
| nova      | localhost |
| placement | localhost |
| root      | localhost |
+-----------+-----------+
11 rows in set (0.00 sec)
MariaDB [(none)]> exit
Bye
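A quick optional check that the grants work as intended is to log in as each new user and list the visible databases:
[root@openstack01 ~]# mysql -unova -pnova -e "show databases;"
[root@openstack01 ~]# mysql -uplacement -pplacement -e "show databases;"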
2. Register the Nova service in Keystone
(1). Create the nova user in Keystone
[root@openstack01 ~]# cd /server/tools
[root@openstack01 tools]# source keystone-admin-pass.sh
[root@openstack01 tools]# openstack user create --domain default --password=nova nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | e154782ba6504b25acc95e2b716e31b2 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@openstack01 tools]# openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 1912bf1a2c074889a45f4bd2193387cd | myuser |
| dd93119899a44130b0ea8bc1aa51bb79 | glance |
| e154782ba6504b25acc95e2b716e31b2 | nova   |
| f86be32740d94614860933ec034adf0b | admin  |
+----------------------------------+--------+
(2). Grant the nova user the admin role on the service project
[root@openstack01 tools]# openstack role add --project service --user nova admin
(3). Create the Nova compute service entity
[root@openstack01 tools]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 568332c6ec2543bab55dad29446b4e73 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@openstack01 tools]# openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 2034ba4ebefa45f5862e63762826267f | glance   | image    |
| 517bf50f40b94aadb635b3595cca3ac3 | keystone | identity |
| 568332c6ec2543bab55dad29446b4e73 | nova     | compute  |
+----------------------------------+----------+----------+
(4). Create the compute service API endpoints
[root@openstack01 tools]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 802fe1e5d15a403aa3317eef55ddfd1d |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 568332c6ec2543bab55dad29446b4e73 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 38c9310128d8492b98b6b6718a91ab5c |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 568332c6ec2543bab55dad29446b4e73 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a3b9161a9ade41c0bd1c25859ff291eb |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 568332c6ec2543bab55dad29446b4e73 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                         |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| 0ab772fbfded46998eae4f3c84310837 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3/  |
| 26b3cb41a2f64fefb5873976505723a2 | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3/  |
| 2f991fdda3c6475c8478649956eb3bac | RegionOne | glance       | image        | True    | admin     | http://controller:9292      |
| 38c9310128d8492b98b6b6718a91ab5c | RegionOne | nova         | compute      | True    | internal  | http://controller:8774/v2.1 |
| 6a0d05d369dd430d9f044f66f1125230 | RegionOne | glance       | image        | True    | public    | http://controller:9292      |
| 7dcf92c2b3474b1fad4459f2596c796c | RegionOne | glance       | image        | True    | internal  | http://controller:9292      |
| 802fe1e5d15a403aa3317eef55ddfd1d | RegionOne | nova         | compute      | True    | public    | http://controller:8774/v2.1 |
| 8b704463162648ea985832fd45ac91ff | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/  |
| a3b9161a9ade41c0bd1c25859ff291eb | RegionOne | nova         | compute      | True    | admin     | http://controller:8774/v2.1 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
(5). This release of Nova adds the Placement service.
Create and register its service credentials in the same way.
[root@openstack01 tools]# openstack user create --domain default --password=placement placement
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 51942708c7454755a6be5bea9e81c3b0 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@openstack01 tools]# openstack role add --project service --user placement admin
[root@openstack01 tools]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | e4bd96f879fe48df9d066e83eecfd9fe |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+
Create the API endpoints for the Placement service:
[root@openstack01 tools]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | d628d13bf4e8437f922d8e99dadb5700 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e4bd96f879fe48df9d066e83eecfd9fe |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 620a6b673a724f09bfd4234575374bd3 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e4bd96f879fe48df9d066e83eecfd9fe |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4d6500b9939a4e7da9e5db58fee25532 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e4bd96f879fe48df9d066e83eecfd9fe |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                         |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| 0ab772fbfded46998eae4f3c84310837 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3/  |
| 26b3cb41a2f64fefb5873976505723a2 | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3/  |
| 2f991fdda3c6475c8478649956eb3bac | RegionOne | glance       | image        | True    | admin     | http://controller:9292      |
| 38c9310128d8492b98b6b6718a91ab5c | RegionOne | nova         | compute      | True    | internal  | http://controller:8774/v2.1 |
| 4d6500b9939a4e7da9e5db58fee25532 | RegionOne | placement    | placement    | True    | admin     | http://controller:8778      |
| 620a6b673a724f09bfd4234575374bd3 | RegionOne | placement    | placement    | True    | internal  | http://controller:8778      |
| 6a0d05d369dd430d9f044f66f1125230 | RegionOne | glance       | image        | True    | public    | http://controller:9292      |
| 7dcf92c2b3474b1fad4459f2596c796c | RegionOne | glance       | image        | True    | internal  | http://controller:9292      |
| 802fe1e5d15a403aa3317eef55ddfd1d | RegionOne | nova         | compute      | True    | public    | http://controller:8774/v2.1 |
| 8b704463162648ea985832fd45ac91ff | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/  |
| a3b9161a9ade41c0bd1c25859ff291eb | RegionOne | nova         | compute      | True    | admin     | http://controller:8774/v2.1 |
| d628d13bf4e8437f922d8e99dadb5700 | RegionOne | placement    | placement    | True    | public    | http://controller:8778      |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
3. Install the Nova services on the controller node
(1). Install the Nova packages
[root@openstack01 tools]# yum install openstack-nova-api openstack-nova-conductor \
> openstack-nova-console openstack-nova-novncproxy \
> openstack-nova-scheduler openstack-nova-placement-api -y
(2). Quickly set the Nova configuration
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.51.138
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:openstack@controller
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:nova@controller/nova_api
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:nova@controller/nova
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:placement@controller/placement
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf vnc enabled true
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf placement project_name service
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf placement auth_type password
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf placement username placement
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf placement password placement
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300
Check the effective nova configuration:
-
[root@openstack01 tools]# egrep -v "^#|^$" /etc/nova/nova.conf [DEFAULT] enabled_apis = osapi_compute,metadata my_ip = 192.168.51.138 use_neutron = true firewall_driver = nova.virt.firewall.NoopFirewallDriver transport_url = rabbit://openstack:openstack@controller [api] auth_strategy = keystone [api_database] connection = mysql+pymysql://nova:nova@controller/nova_api [barbican] [cache] [cells] [cinder] [compute] [conductor] [console] [consoleauth] [cors] [database] connection = mysql+pymysql://nova:nova@controller/nova [devices] [ephemeral_storage_encryption] [filter_scheduler] [glance] api_servers = http://controller:9292 [guestfs] [healthcheck] [hyperv] [ironic] [key_manager] [keystone] [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = nova password = nova [libvirt] [matchmaker_redis] [metrics] [mks] [neutron] [notifications] [osapi_v21] [oslo_concurrency] lock_path = /var/lib/nova/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [pci] [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = placement [placement_database] connection = mysql+pymysql://placement:placement@controller/placement [powervm] [profiler] [quota] [rdp] [remote_debug] [scheduler] discover_hosts_in_cells_interval = 300 [serial_console] [service_user] [spice] [upgrade_levels] [vault] [vendordata_dynamic_auth] [vmware] [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip [workarounds] [wsgi] [xenserver] [xvp] [zvm]
(3). Edit the placement API virtual-host configuration (due to a packaging bug, the shipped file lacks the access rules Apache 2.4 needs for /usr/bin, so the placement API would otherwise return 403)
-
[root@openstack01 tools]# vim /etc/httpd/conf.d/00-nova-placement-api.conf
Listen 8778
<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>
Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
After editing, restart the httpd service:
-
[root@openstack01 tools]# systemctl restart httpd
[root@openstack01 tools]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 11:36:14 CST; 3s ago
4. Sync the nova data (note the sync order)
(1). Initialize the nova-api and placement databases
[root@openstack01 tools]# su -s /bin/sh -c "nova-manage api_db sync" nova
Verify the databases:
[root@openstack01 tools]# mysql -h192.168.51.138 -unova -pnova -e "use nova_api;show tables;"
+------------------------------+
| Tables_in_nova_api |
+------------------------------+
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| build_requests |
| cell_mappings |
| consumers |
| flavor_extra_specs |
| flavor_projects |
| flavors |
| host_mappings |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_mappings |
| inventories |
| key_pairs |
| migrate_version |
| placement_aggregates |
| project_user_quotas |
| projects |
| quota_classes |
| quota_usages |
| quotas |
| request_specs |
| reservations |
| resource_classes |
| resource_provider_aggregates |
| resource_provider_traits |
| resource_providers |
| traits |
| users |
+------------------------------+
[root@openstack01 tools]# mysql -h192.168.51.138 -uplacement -pplacement -e "use placement;show tables;"
+------------------------------+
| Tables_in_placement |
+------------------------------+
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| build_requests |
| cell_mappings |
| consumers |
| flavor_extra_specs |
| flavor_projects |
| flavors |
| host_mappings |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_mappings |
| inventories |
| key_pairs |
| migrate_version |
| placement_aggregates |
| project_user_quotas |
| projects |
| quota_classes |
| quota_usages |
| quotas |
| request_specs |
| reservations |
| resource_classes |
| resource_provider_aggregates |
| resource_provider_traits |
| resource_providers |
| traits |
| users |
+------------------------------+
(2). Initialize the nova-cell0 and nova databases
Register the cell0 database:
[root@openstack01 tools]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
[root@openstack01 tools]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
1156474d-5b45-4703-a590-29925505c6af
Initialize the nova database:
[root@openstack01 tools]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
Verify the databases:
-
[root@openstack01 tools]# mysql -h192.168.51.138 -unova -pnova -e "use nova_cell0;show tables;" +--------------------------------------------+ | Tables_in_nova_cell0 | +--------------------------------------------+ | agent_builds | | aggregate_hosts | | aggregate_metadata | | aggregates | | allocations | | block_device_mapping | | bw_usage_cache | | cells | | certificates | | compute_nodes | | console_auth_tokens | | console_pools | | consoles | | dns_domains | | fixed_ips | | floating_ips | | instance_actions | | instance_actions_events | | instance_extra | | instance_faults | | instance_group_member | | instance_group_policy | | instance_groups | | instance_id_mappings | | instance_info_caches | | instance_metadata | | instance_system_metadata | | instance_type_extra_specs | | instance_type_projects | | instance_types | | instances | | inventories | | key_pairs | | migrate_version | | migrations | | networks | | pci_devices | | project_user_quotas | | provider_fw_rules | | quota_classes | | quota_usages | | quotas | | reservations | | resource_provider_aggregates | | resource_providers | | s3_images | | security_group_default_rules | | security_group_instance_association | | security_group_rules | | security_groups | | services | | shadow_agent_builds | | shadow_aggregate_hosts | | shadow_aggregate_metadata | | shadow_aggregates | | shadow_block_device_mapping | | shadow_bw_usage_cache | | shadow_cells | | shadow_certificates | | shadow_compute_nodes | | shadow_console_pools | | shadow_consoles | | shadow_dns_domains | | shadow_fixed_ips | | shadow_floating_ips | | shadow_instance_actions | | shadow_instance_actions_events | | shadow_instance_extra | | shadow_instance_faults | | shadow_instance_group_member | | shadow_instance_group_policy | | shadow_instance_groups | | shadow_instance_id_mappings | | shadow_instance_info_caches | | shadow_instance_metadata | | shadow_instance_system_metadata | | shadow_instance_type_extra_specs | | shadow_instance_type_projects | | shadow_instance_types | | shadow_instances | | shadow_key_pairs | | shadow_migrate_version | | shadow_migrations | | shadow_networks | | shadow_pci_devices | | shadow_project_user_quotas | | shadow_provider_fw_rules | | shadow_quota_classes | | shadow_quota_usages | | shadow_quotas | | shadow_reservations | | shadow_s3_images | | shadow_security_group_default_rules | | shadow_security_group_instance_association | | shadow_security_group_rules | | shadow_security_groups | | shadow_services | | shadow_snapshot_id_mappings | | shadow_snapshots | | shadow_task_log | | shadow_virtual_interfaces | | shadow_volume_id_mappings | | shadow_volume_usage_cache | | snapshot_id_mappings | | snapshots | | tags | | task_log | | virtual_interfaces | | volume_id_mappings | | volume_usage_cache | +--------------------------------------------+ [root@openstack01 tools]# mysql -h192.168.51.138 -unova -pnova -e "use nova;show tables;" +--------------------------------------------+ | Tables_in_nova | +--------------------------------------------+ | agent_builds | | aggregate_hosts | | aggregate_metadata | | aggregates | | allocations | | block_device_mapping | | bw_usage_cache | | cells | | certificates | | compute_nodes | | console_auth_tokens | | console_pools | | consoles | | dns_domains | | fixed_ips | | floating_ips | | instance_actions | | instance_actions_events | | instance_extra | | instance_faults | | instance_group_member | | instance_group_policy | | instance_groups | | instance_id_mappings | | instance_info_caches | | instance_metadata 
| | instance_system_metadata | | instance_type_extra_specs | | instance_type_projects | | instance_types | | instances | | inventories | | key_pairs | | migrate_version | | migrations | | networks | | pci_devices | | project_user_quotas | | provider_fw_rules | | quota_classes | | quota_usages | | quotas | | reservations | | resource_provider_aggregates | | resource_providers | | s3_images | | security_group_default_rules | | security_group_instance_association | | security_group_rules | | security_groups | | services | | shadow_agent_builds | | shadow_aggregate_hosts | | shadow_aggregate_metadata | | shadow_aggregates | | shadow_block_device_mapping | | shadow_bw_usage_cache | | shadow_cells | | shadow_certificates | | shadow_compute_nodes | | shadow_console_pools | | shadow_consoles | | shadow_dns_domains | | shadow_fixed_ips | | shadow_floating_ips | | shadow_instance_actions | | shadow_instance_actions_events | | shadow_instance_extra | | shadow_instance_faults | | shadow_instance_group_member | | shadow_instance_group_policy | | shadow_instance_groups | | shadow_instance_id_mappings | | shadow_instance_info_caches | | shadow_instance_metadata | | shadow_instance_system_metadata | | shadow_instance_type_extra_specs | | shadow_instance_type_projects | | shadow_instance_types | | shadow_instances | | shadow_key_pairs | | shadow_migrate_version | | shadow_migrations | | shadow_networks | | shadow_pci_devices | | shadow_project_user_quotas | | shadow_provider_fw_rules | | shadow_quota_classes | | shadow_quota_usages | | shadow_quotas | | shadow_reservations | | shadow_s3_images | | shadow_security_group_default_rules | | shadow_security_group_instance_association | | shadow_security_group_rules | | shadow_security_groups | | shadow_services | | shadow_snapshot_id_mappings | | shadow_snapshots | | shadow_task_log | | shadow_virtual_interfaces | | shadow_volume_id_mappings | | shadow_volume_usage_cache | | snapshot_id_mappings | | snapshots | | tags | | task_log | | virtual_interfaces | | volume_id_mappings | | volume_usage_cache | +--------------------------------------------+
(3). Verify that cell0 and cell1 are registered successfully
-
[root@openstack01 tools]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova +-------+--------------------------------------+------------------------------- | 名称 | UUID | Transport URL | 数据库连接 | Disabled | +-------+--------------------------------------+------------------------------- | cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 | False | | cell1 | 1156474d-5b45-4703-a590-29925505c6af | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova | False | +-------+--------------------------------------+------------------------------
5. Start the nova services
(1). Start the nova services and enable them at boot
-
[root@openstack01 tools]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service \
> openstack-nova-scheduler.service openstack-nova-conductor.service \
> openstack-nova-novncproxy.service
[root@openstack01 tools]# systemctl status openstack-nova-api.service openstack-nova-consoleauth.service \
> openstack-nova-scheduler.service openstack-nova-conductor.service \
> openstack-nova-novncproxy.service
[root@openstack01 tools]# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service \
> openstack-nova-scheduler.service openstack-nova-conductor.service \
> openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@openstack01 tools]# systemctl list-unit-files |grep openstack-nova* |grep enabled
openstack-nova-api.service enabled
openstack-nova-conductor.service enabled
openstack-nova-consoleauth.service enabled
openstack-nova-novncproxy.service enabled
openstack-nova-scheduler.service enabled
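A failed unit is easy to miss in the combined status output above; a small loop that prints one state per service makes the check readable at a glance (a sketch, using only systemctl):

# Print active/inactive/failed state, one line per nova control service.
for svc in openstack-nova-api openstack-nova-consoleauth \
           openstack-nova-scheduler openstack-nova-conductor \
           openstack-nova-novncproxy; do
  printf '%-35s %s\n' "$svc" "$(systemctl is-active $svc.service)"
done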
At this point, installation of the nova compute service on the control node is complete.
V. Install a nova compute node
1. Configure name resolution
(1). Set the hostname
-
[root@localhost ~]# hostname openstack02.zuiyoujie.com
[root@localhost ~]# echo "openstack02.zuiyoujie.com" > /etc/hostname
[root@localhost ~]# cat /etc/hostname
openstack02.zuiyoujie.com
(2). Configure hostname resolution
-
[root@localhost ~]# vi /etc/hosts
192.168.51.138 openstack01.zuiyoujie.com controller
192.168.51.139 openstack02.zuiyoujie.com compute02 block02 object02
2. Disable the firewall and selinux
(1). Stop firewalld/iptables
-
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
(2). Disable selinux
-
[root@localhost ~]# setenforce 0
[root@localhost ~]# getenforce
Permissive
[root@localhost ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
[root@localhost ~]# grep SELINUX=disabled /etc/sysconfig/selinux
SELINUX=disabled
3. Configure time synchronization
(1). Configure the time synchronization service on the compute node
Install the time synchronization package:
-
[root@localhost ~]# yum install chrony -y
(2). Edit the configuration file and confirm it contains the following (the compute node syncs against the controller):
-
[root@localhost ~]# vi /etc/chrony.conf
server 192.168.51.138 iburst
(3). Restart the chronyd service and enable it at boot
-
[root@localhost ~]# systemctl restart chronyd.service
[root@localhost ~]# systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since 五 2021-11-19 13:37:01 CST; 3s ago
[root@localhost ~]# systemctl enable chronyd.service
[root@localhost ~]# systemctl list-unit-files |grep chronyd.service
chronyd.service enabled
(4). Set the time zone and perform the initial synchronization
-
[root@localhost ~]# timedatectl set-timezone Asia/Shanghai
[root@localhost ~]# chronyc sources
210 Number of sources = 5
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* openstack01.zuiyoujie.com     3   6    37     1   +411ns[ +473us] +/-   30ms
^? de-user.deepinid.deepin.>     0   6     0     -     +0ns[   +0ns] +/-    0ns
^- 139.199.214.202               2   6    17    64  +4741us[+5005us] +/-   37ms
^? time.cloudflare.com           0   7     0     -     +0ns[   +0ns] +/-    0ns
^? ntp8.flashdance.cx            2   6   107    58    +32ms[  +32ms] +/-  189ms
[root@localhost ~]# timedatectl status
      Local time: 五 2021-11-19 13:38:18 CST
  Universal time: 五 2021-11-19 05:38:18 UTC
        RTC time: 五 2021-11-19 05:38:18
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
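Besides chronyc sources, chronyc tracking summarizes the selected reference, stratum, and measured offset on one screen; on this node the reference should be the controller, 192.168.51.138:

# One-screen summary of the clock discipline against the controller.
chronyc tracking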
(5). Reboot the virtual machine so the configuration takes effect
-
[root@localhost ~]# reboot
4. Configure the yum repositories
(1). Configure the Aliyun base and epel repositories
-
[root@localhost ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@localhost ~]# mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
(2). Install the openstack-rocky repository
-
[root@localhost ~]# yum install centos-release-openstack-rocky -y
[root@localhost ~]# yum clean all
[root@localhost ~]# yum makecache
(3). Update the packages
-
[root@localhost ~]# yum update -y
(4). Install the openstack client software
-
[root@localhost ~]# yum install python-openstackclient openstack-selinux -y
5. Install the nova compute node packages
(1). Install the nova packages on the compute node
-
[root@localhost ~]# mkdir -p /server/tools
[root@localhost ~]# cd /server/tools/
[root@localhost tools]# yum install openstack-nova-compute python-openstackclient openstack-utils -y
(2). Quick-edit the configuration file (/etc/nova/nova.conf)
-
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.51.139
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:openstack@controller
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf vnc enabled True
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf placement project_name service
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf placement auth_type password
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf placement username placement
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf placement password placement
Check the effective configuration:
-
[root@localhost tools]# egrep -v "^#|^$" /etc/nova/nova.conf [DEFAULT] my_ip = 192.168.51.139 use_neutron = True firewall_driver = nova.virt.firewall.NoopFirewallDriver enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:openstack@controller [api] auth_strategy = keystone [api_database] [barbican] [cache] [cells] [cinder] [compute] [conductor] [console] [consoleauth] [cors] [database] [devices] [ephemeral_storage_encryption] [filter_scheduler] [glance] api_servers = http://controller:9292 [guestfs] [healthcheck] [hyperv] [ironic] [key_manager] [keystone] [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = nova password = nova [libvirt] [matchmaker_redis] [metrics] [mks] [neutron] [notifications] [osapi_v21] [oslo_concurrency] lock_path = /var/lib/nova/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [pci] [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = placement [placement_database] [powervm] [profiler] [quota] [rdp] [remote_debug] [scheduler] [serial_console] [service_user] [spice] [upgrade_levels] [vault] [vendordata_dynamic_auth] [vmware] [vnc] enabled = True server_listen = 0.0.0.0 server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html [workarounds] [wsgi] [xenserver] [xvp] [zvm]
(3). Configure virtual machine hardware acceleration
First determine whether the compute node supports hardware acceleration for virtual machines:
-
[root@localhost tools]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
If it returns 0, the compute node does not support hardware acceleration, and libvirt must be configured to run virtual machines with QEMU instead of KVM:
-
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
[root@localhost tools]# egrep -v "^#|^$" /etc/nova/nova.conf|grep 'virt_type'
virt_type = qemu
If it returns any other value, the compute node supports hardware acceleration and KVM (the default) needs no extra configuration; to set it explicitly:
-
[root@localhost tools]# openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
[root@localhost tools]# egrep -v "^#|^$" /etc/nova/nova.conf|grep 'virt_type'
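libvirt also ships a validation helper that probes KVM support more thoroughly than the cpuinfo grep; a sketch (virt-host-validate comes with the libvirt packages that nova-compute pulls in):

# FAIL lines in the QEMU section indicate missing KVM support, matching
# a 0 result from the egrep check above.
virt-host-validate qemu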
(4). Start the nova-related services and enable them at boot
-
[root@localhost tools]# systemctl start libvirtd.service openstack-nova-compute.service
[root@localhost tools]# systemctl status libvirtd.service openstack-nova-compute.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since 五 2021-11-19 14:46:49 CST; 19min ago
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 15:06:29 CST; 1s ago
[root@localhost tools]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@localhost tools]# systemctl list-unit-files |grep libvirtd.service
libvirtd.service enabled
[root@localhost tools]# systemctl list-unit-files |grep openstack-nova-compute.service
openstack-nova-compute.service enabled
(5). Add the compute node to the cell database (run on the control node)
-
[root@openstack01 tools]# source keystone-admin-pass.sh
[root@openstack01 tools]# openstack compute service list --service nova-compute
+----+--------------+---------------------------+------+---------+-------+----------------------------+
| ID | Binary       | Host                      | Zone | Status  | State | Updated At                 |
+----+--------------+---------------------------+------+---------+-------+----------------------------+
| 13 | nova-compute | openstack02.zuiyoujie.com | nova | enabled | up    | 2021-11-19T07:08:41.000000 |
+----+--------------+---------------------------+------+---------+-------+----------------------------+
Manually add the new compute node to the openstack cluster (because discover_hosts_in_cells_interval = 300 was set earlier, the scheduler also discovers new hosts automatically every 300 seconds):
-
[root@openstack01 tools]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova Found 2 cell mappings. Skipping cell0 since it does not contain hosts. Getting computes from cell 'cell1': 1156474d-5b45-4703-a590-29925505c6af Found 0 unmapped computes in cell: 1156474d-5b45-4703-a590-29925505c6af
At this point the compute node installation is complete; next, verify it by checking the state of the nova nodes.
6. Verify on the control node
(1). Source the admin environment variable script
-
[root@openstack01 tools]# source keystone-admin-pass.sh
(2). List the installed nova service components
-
[root@openstack01 tools]# openstack compute service list
+----+------------------+---------------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                      | Zone     | Status  | State | Updated At                 |
+----+------------------+---------------------------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | openstack01.zuiyoujie.com | internal | enabled | up    | 2021-11-19T07:14:40.000000 |
| 2  | nova-scheduler   | openstack01.zuiyoujie.com | internal | enabled | up    | 2021-11-19T07:14:40.000000 |
| 6  | nova-conductor   | openstack01.zuiyoujie.com | internal | enabled | up    | 2021-11-19T07:14:36.000000 |
| 13 | nova-compute     | openstack02.zuiyoujie.com | nova     | enabled | up    | 2021-11-19T07:14:41.000000 |
+----+------------------+---------------------------+----------+---------+-------+----------------------------+
(3). List the API endpoints in the identity service to verify them
-
[root@openstack01 tools]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| glance    | image     | RegionOne                               |
|           |           |   admin: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292       |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292     |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   public: http://controller:5000/v3/   |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:5000/v3/    |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/ |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   admin: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778     |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778       |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+
(4). List existing images in the image service to check its connectivity
-
[root@openstack01 tools]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 86fd8541-c758-4c6e-a543-48f871b48c3b | cirros | active |
+--------------------------------------+--------+--------+
(5). Check the status of the nova components
-
[root@openstack01 tools]# nova-status upgrade check
+-------------------------------+
| 升级检查结果                  |
+-------------------------------+
| 检查: Cells v2                |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: Placement API           |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: Resource Providers      |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: Ironic Flavor Migration |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: API Service Version     |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: Request Spec Migration  |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: Console Auths           |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
VI. Install the Neutron network service (control node)
1. Host network configuration and testing
(1). Control node configuration
-
[root@openstack01 tools]# cd
[root@openstack01 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.51.138 openstack01.zuiyoujie.com controller
192.168.51.139 openstack02.zuiyoujie.com compute02 block02 object02
(2). Compute node configuration
-
[root@localhost ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.51.138 openstack01.zuiyoujie.com controller
192.168.51.139 openstack02.zuiyoujie.com compute02 block02 object02
(3). Check each node's connectivity to the control node and to the public network
Control node:
-
[root@openstack01 ~]# ping -c 4 www.baidu.com
[root@openstack01 ~]# ping -c 4 compute02
[root@openstack01 ~]# ping -c 4 block02
Compute node:
-
[root@localhost ~]# ping -c 4 www.baidu.com
[root@localhost ~]# ping -c 4 controller
2. Register the neutron services in the keystone database
(1). Create the neutron database and grant it appropriate access
-
[root@openstack01 ~]# mysql -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.16 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
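Grants like these are easy to get wrong, so it is worth confirming that the neutron account can actually reach its database before going further, in the same style as the earlier nova checks:

# Should print an empty table list rather than an access-denied error.
mysql -h192.168.51.138 -uneutron -pneutron -e "use neutron; show tables;"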
(2). Create the neutron user in keystone
-
[root@openstack01 ~]# cd /server/tools
[root@openstack01 tools]# source keystone-admin-pass.sh
[root@openstack01 tools]# openstack user create --domain default --password=neutron neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 7f6c9cef605f4b39b14771638b295a30 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@openstack01 tools]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 1912bf1a2c074889a45f4bd2193387cd | myuser    |
| 51942708c7454755a6be5bea9e81c3b0 | placement |
| 7f6c9cef605f4b39b14771638b295a30 | neutron   |
| dd93119899a44130b0ea8bc1aa51bb79 | glance    |
| e154782ba6504b25acc95e2b716e31b2 | nova      |
| f86be32740d94614860933ec034adf0b | admin     |
+----------------------------------+-----------+
(3). Add the neutron user to the service project and grant it the admin role
-
[root@openstack01 tools]# openstack role add --project service --user neutron admin
(4). Create the neutron service entity
-
[root@openstack01 tools]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 7676ab0484664d9a97d32b4d76899fdc |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@openstack01 tools]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 2034ba4ebefa45f5862e63762826267f | glance    | image     |
| 517bf50f40b94aadb635b3595cca3ac3 | keystone  | identity  |
| 568332c6ec2543bab55dad29446b4e73 | nova      | compute   |
| 7676ab0484664d9a97d32b4d76899fdc | neutron   | network   |
| e4bd96f879fe48df9d066e83eecfd9fe | placement | placement |
+----------------------------------+-----------+-----------+
(5). Create the API endpoints for the neutron network service
-
[root@openstack01 tools]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3aa5f612a36d46978cbd49aec6be85c3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 7676ab0484664d9a97d32b4d76899fdc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | e48502d745bc48c398eec4d885a5e85b |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 7676ab0484664d9a97d32b4d76899fdc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5f56b415b160403cb805825bc1652de1 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 7676ab0484664d9a97d32b4d76899fdc |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                         |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| 0ab772fbfded46998eae4f3c84310837 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3/  |
| 26b3cb41a2f64fefb5873976505723a2 | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3/  |
| 2f991fdda3c6475c8478649956eb3bac | RegionOne | glance       | image        | True    | admin     | http://controller:9292      |
| 38c9310128d8492b98b6b6718a91ab5c | RegionOne | nova         | compute      | True    | internal  | http://controller:8774/v2.1 |
| 3aa5f612a36d46978cbd49aec6be85c3 | RegionOne | neutron      | network      | True    | public    | http://controller:9696      |
| 4d6500b9939a4e7da9e5db58fee25532 | RegionOne | placement    | placement    | True    | admin     | http://controller:8778      |
| 5f56b415b160403cb805825bc1652de1 | RegionOne | neutron      | network      | True    | admin     | http://controller:9696      |
| 620a6b673a724f09bfd4234575374bd3 | RegionOne | placement    | placement    | True    | internal  | http://controller:8778      |
| 6a0d05d369dd430d9f044f66f1125230 | RegionOne | glance       | image        | True    | public    | http://controller:9292      |
| 7dcf92c2b3474b1fad4459f2596c796c | RegionOne | glance       | image        | True    | internal  | http://controller:9292      |
| 802fe1e5d15a403aa3317eef55ddfd1d | RegionOne | nova         | compute      | True    | public    | http://controller:8774/v2.1 |
| 8b704463162648ea985832fd45ac91ff | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3/  |
| a3b9161a9ade41c0bd1c25859ff291eb | RegionOne | nova         | compute      | True    | admin     | http://controller:8774/v2.1 |
| d628d13bf4e8437f922d8e99dadb5700 | RegionOne | placement    | placement    | True    | public    | http://controller:8778      |
| e48502d745bc48c398eec4d885a5e85b | RegionOne | neutron      | network      | True    | internal  | http://controller:9696      |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
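As the catalog grows, openstack endpoint list can be filtered by service; to see only the three neutron endpoints just created:

# List only the network-service endpoints (public/internal/admin).
openstack endpoint list --service network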
3. Install the neutron network components on the control node
(1). Install the neutron packages
-
[root@openstack01 tools]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
(2). Quick-configure /etc/neutron/neutron.conf
-
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:neutron@controller/neutron
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:openstack@controller
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf nova auth_type password
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf nova project_name service
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf nova username nova
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf nova password nova
[root@openstack01 tools]# openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
Check the effective configuration (an empty service_plugins means provider networks only, with no routing services):
-
[root@openstack01 tools]# egrep -v '(^$|^#)' /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:neutron@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
(3). Quick-configure /etc/neutron/plugins/ml2/ml2_conf.ini
-
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
Check the effective configuration:
-
[root@openstack01 tools]# egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = True
(4). Quick-configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini (the provider mapping must name this node's actual NIC, which is ens33 on the VMs in this guide)
-
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
[root@openstack01 tools]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Check the effective configuration:
-
[root@openstack01 tools]# egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens33
[network_log]
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False
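The interface name on the right-hand side of physical_interface_mappings must exist on the host, or neutron-linuxbridge-agent will fail to start. To confirm the NIC name before wiring it into the config:

# Print the interface names known to the kernel (expect ens33 here).
ip -o link show | awk -F': ' '{print $2}'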
(5). Quick-configure /etc/neutron/dhcp_agent.ini
-
[root@openstack01 tools]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
[root@openstack01 tools]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
[root@openstack01 tools]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
Check the effective configuration:
-
[root@openstack01 tools]# egrep -v '(^$|^#)' /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[agent]
[ovs]
(6). Quick-configure /etc/neutron/metadata_agent.ini
-
[root@openstack01 tools]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
[root@openstack01 tools]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret neutron
Check the effective configuration:
-
[root@openstack01 tools]# egrep -v '(^$|^#)' /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = neutron
[agent]
[cache]
(7). Configure the compute service to use the network service (metadata_proxy_shared_secret here must match the value set in metadata_agent.ini)
-
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron auth_type password
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron project_name service
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron username neutron
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron password neutron
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret neutron
Check the effective configuration:
-
[root@openstack01 tools]# egrep -v '(^$|^#)' /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.51.138
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:openstack@controller
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:nova@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = neutron
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[placement_database]
connection = mysql+pymysql://placement:placement@controller/placement
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
(8). Initialize the network plugin link
-
[root@openstack01 tools]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
(9). Sync the database
-
[root@openstack01 tools]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ > --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. 正在对 neutron 运行 upgrade... INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. INFO [alembic.runtime.migration] Running upgrade -> kilo INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225 INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151 INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773 INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592 INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7 INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79 INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051 INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136 INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59 INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25 INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9 INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4 INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664 INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5 INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821 INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4 INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81 INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6 INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532 INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73 INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502 INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048 INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4 INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99 INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016 INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3 INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d INFO 
[alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297 INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39 INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050 INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9 INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53 INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70 INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90 INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4 INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426 INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524 INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37 INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4 INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70 INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192 INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9 INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6 INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c INFO [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0 INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62 INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353 INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586 INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d 确定
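To confirm the schema actually landed at the head revision, neutron-db-manage can report the database's current alembic revision (a sketch, using the same config files as the upgrade):

# Print the alembic revision the neutron database is currently at.
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron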
(10). Restart the nova-api service
[root@openstack01 tools]# systemctl restart openstack-nova-api.service
(11). Start the neutron services and enable them at boot
-
[root@openstack01 tools]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@openstack01 tools]# systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
● neutron-server.service - OpenStack Neutron Server
   Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 16:03:25 CST; 1s ago
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 16:03:22 CST; 5s ago
● neutron-dhcp-agent.service - OpenStack Neutron DHCP Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-dhcp-agent.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 16:03:21 CST; 5s ago
● neutron-metadata-agent.service - OpenStack Neutron Metadata Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-metadata-agent.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 16:03:21 CST; 5s ago
[root@openstack01 tools]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
[root@openstack01 tools]# systemctl list-unit-files |grep neutron* |grep enabled
neutron-dhcp-agent.service enabled
neutron-linuxbridge-agent.service enabled
neutron-metadata-agent.service enabled
neutron-server.service enabled
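Once nova-api has been restarted and the four neutron units are running, the agents register themselves with neutron-server; listing them from the admin environment is the quickest cross-check (agents can take a few seconds to report in):

# Expect a Linux bridge agent, DHCP agent and metadata agent, all alive.
openstack network agent list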
4. Install the neutron network components on the compute node
(1). Install the neutron components
-
[root@localhost ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
(2). Quick-configure /etc/neutron/neutron.conf
-
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:openstack@controller
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
[root@localhost ~]# openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
Check the effective configuration:
-
[root@localhost ~]# egrep -v '(^$|^#)' /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
[agent]
[cors]
[database]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
(3). Quickly configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini
-
[root@localhost ~]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
[root@localhost ~]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
[root@localhost ~]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
[root@localhost ~]# openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Check the effective configuration
-
[root@localhost ~]# egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens33
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = false
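The official install guide also recommends verifying that the bridge netfilter sysctls are enabled on the compute node, since the Linux bridge agent's security groups depend on them; a quick check (assuming the br_netfilter kernel module is available):
-
# Load the bridge netfilter module so the sysctls below exist
modprobe br_netfilter
# Both values should be 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables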
(4). Configure the nova compute service to work with the neutron network service
-
[root@localhost ~]# openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
[root@localhost ~]# openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
[root@localhost ~]# openstack-config --set /etc/nova/nova.conf neutron auth_type password
[root@localhost ~]# openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
[root@localhost ~]# openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
[root@localhost ~]# openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
[root@localhost ~]# openstack-config --set /etc/nova/nova.conf neutron project_name service
[root@localhost ~]# openstack-config --set /etc/nova/nova.conf neutron username neutron
[root@localhost ~]# openstack-config --set /etc/nova/nova.conf neutron password neutron
Check the effective configuration
-
[root@localhost ~]# egrep -v '(^$|^#)' /etc/nova/nova.conf [DEFAULT] my_ip = 192.168.51.139 use_neutron = True firewall_driver = nova.virt.firewall.NoopFirewallDriver enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:openstack@controller [api] auth_strategy = keystone [api_database] [barbican] [cache] [cells] [cinder] [compute] [conductor] [console] [consoleauth] [cors] [database] [devices] [ephemeral_storage_encryption] [filter_scheduler] [glance] api_servers = http://controller:9292 [guestfs] [healthcheck] [hyperv] [ironic] [key_manager] [keystone] [keystone_authtoken] auth_url = http://controller:5000/v3 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = nova password = nova [libvirt] virt_type = qemu [matchmaker_redis] [metrics] [mks] [neutron] url = http://controller:9696 auth_url = http://controller:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = neutron [notifications] [osapi_v21] [oslo_concurrency] lock_path = /var/lib/nova/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [pci] [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://controller:5000/v3 username = placement password = placement [placement_database] [powervm] [profiler] [quota] [rdp] [remote_debug] [scheduler] [serial_console] [service_user] [spice] [upgrade_levels] [vault] [vendordata_dynamic_auth] [vmware] [vnc] enabled = True server_listen = 0.0.0.0 server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html [workarounds] [wsgi] [xenserver] [xvp] [zvm]
(5). Restart the nova-compute service
-
[root@localhost ~]# systemctl restart openstack-nova-compute.service
[root@localhost ~]# systemctl status openstack-nova-compute.service
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 16:13:40 CST; 6s ago
(6). Start the neutron network agent and enable it at boot
-
[root@localhost ~]# systemctl restart neutron-linuxbridge-agent.service
[root@localhost ~]# systemctl status neutron-linuxbridge-agent.service
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; disabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 16:14:46 CST; 3s ago
[root@localhost ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@localhost ~]# systemctl list-unit-files |grep neutron* |grep enabled
neutron-linuxbridge-agent.service             enabled
5. On the controller node, verify that the neutron service was installed successfully
(1). Load the admin credentials
-
[root@openstack01 tools]# source keystone-admin-pass.sh
(2). List the loaded network extensions
-
[root@openstack01 tools]# openstack extension list --network
(3). List the network agents
-
[root@openstack01 tools]# openstack network agent list
The agent list should show the metadata, DHCP, and Linux bridge agents on the controller plus a Linux bridge agent on the compute node, all alive. The neutron service installation is complete.
VII. Install the Horizon service (dashboard, on the controller node)
1. Install the dashboard web console
(1). Install the dashboard package
-
[root@openstack01 tools]# yum install openstack-dashboard -y
(2). Edit the configuration file /etc/openstack-dashboard/local_settings
Make sure the file contains the following settings
-
[root@openstack01 tools]# vim /etc/openstack-dashboard/local_settings
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
}
TIME_ZONE = "Asia/Shanghai"
The complete file can be downloaded from this link (Baidu Netdisk):
Link: https://pan.baidu.com/s/1ZvO9v5yh7ooL5cifFhHk9w
Extraction code: 0egh
(3). Edit /etc/httpd/conf.d/openstack-dashboard.conf
-
[root@openstack01 tools]# vim /etc/httpd/conf.d/openstack-dashboard.conf
# Add the following line to the file
WSGIApplicationGroup %{GLOBAL}
(4). Restart the web server and the session storage service
-
[root@openstack01 tools]# systemctl restart httpd.service memcached.service
[root@openstack01 tools]# systemctl status httpd.service memcached.service
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/httpd.service.d
           └─openstack-dashboard.conf
   Active: active (running) since 五 2021-11-19 16:51:56 CST; 9s ago
● memcached.service - memcached daemon
   Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 16:51:36 CST; 30s ago
(5). Verify that the dashboard works
Open the address below in a browser. Log in with the user admin and the password 123456 created earlier in Keystone, and use default for the domain.
http://192.168.51.138/dashboard
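If the page does not load, a minimal reachability check from the controller itself can rule out browser-side problems (this assumes the default dashboard WEBROOT of /dashboard):
-
# Print only the HTTP status line of the dashboard response
curl -sI http://192.168.51.138/dashboard | head -n 1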
The login page looks like this:
After logging in, a summary page like the one below is displayed.
A fresh installation is empty by default; resources are added in later steps.
VIII. Launch a virtual machine instance
1. Create the provider network
(1). On the controller node, create the provider network
-
[root@openstack01 ~]# cd /server/tools/ [root@openstack01 tools]# source keystone-admin-pass.sh [root@openstack01 tools]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | | | created_at | 2021-11-19T09:16:52Z | | description | | | dns_domain | None | | id | 5469cf87-b24b-45c7-8469-b6d96ecd6400 | | ipv4_address_scope | None | | ipv6_address_scope | None | | is_default | None | | is_vlan_transparent | None | | mtu | 1500 | | name | provider | | port_security_enabled | True | | project_id | 5461c6241b4743ef82b48c62efe275c0 | | provider:network_type | flat | | provider:physical_network | provider | | provider:segmentation_id | None | | qos_policy_id | None | | revision_number | 1 | | router:external | External | | segments | None | | shared | True | | status | ACTIVE | | subnets | | | tags | | | updated_at | 2021-11-19T09:16:52Z | +---------------------------+--------------------------------------+ [root@openstack01 tools]# openstack network list +--------------------------------------+----------+---------+ | ID | Name | Subnets | +--------------------------------------+----------+---------+ | 5469cf87-b24b-45c7-8469-b6d96ecd6400 | provider | | +--------------------------------------+----------+---------+
(2). Check the network configuration
Confirm the following option: the physical network name provider on the left-hand side of physical_interface_mappings must match the --provider-physical-network provider (with --provider-network-type flat) used in the command above, and the interface on the right-hand side must be the node's actual NIC (ens33 elsewhere in this guide, eno16777736 in the output below).
-
[root@openstack01 tools]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[linux_bridge]
physical_interface_mappings = provider:eno16777736
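If you are unsure whether the mapping lives in ml2_conf.ini or linuxbridge_agent.ini on your hosts, a quick search settles it:
-
# Show every file under the ML2 plugin directory that sets the mapping
grep -rn physical_interface_mappings /etc/neutron/plugins/ml2/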
(3). Create the provider subnets
-
[root@openstack01 tools]# openstack subnet create --network provider --no-dhcp --allocation-pool start=192.168.1.210,end=192.168.1.220 --dns-nameserver 4.4.4.4 --gateway 192.168.1.1 --subnet-range 192.168.1.0/24 provider-subnet01 +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 192.168.1.210-192.168.1.220 | | cidr | 192.168.1.0/24 | | created_at | 2021-11-19T09:24:57Z | | description | | | dns_nameservers | 4.4.4.4 | | enable_dhcp | False | | gateway_ip | 192.168.1.1 | | host_routes | | | id | b154bd2c-6094-4b3c-84bb-2d54ae5d8e6e | | ip_version | 4 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | provider-subnet01 | | network_id | 5469cf87-b24b-45c7-8469-b6d96ecd6400 | | project_id | 5461c6241b4743ef82b48c62efe275c0 | | revision_number | 0 | | segment_id | None | | service_types | | | subnetpool_id | None | | tags | | | updated_at | 2021-11-19T09:24:57Z | +-------------------+--------------------------------------+ [root@openstack01 tools]# openstack subnet create --network provider --dhcp --subnet-range 192.168.2.0/24 provider-subnet02 +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 192.168.2.2-192.168.2.254 | | cidr | 192.168.2.0/24 | | created_at | 2021-11-19T09:25:26Z | | description | | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | 192.168.2.1 | | host_routes | | | id | b0ae0193-2596-4fed-a9b3-45b568e85bb1 | | ip_version | 4 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | provider-subnet02 | | network_id | 5469cf87-b24b-45c7-8469-b6d96ecd6400 | | project_id | 5461c6241b4743ef82b48c62efe275c0 | | revision_number | 0 | | segment_id | None | | service_types | | | subnetpool_id | None | | tags | | | updated_at | 2021-11-19T09:25:26Z | +-------------------+--------------------------------------+ [root@openstack01 tools]# openstack subnet list +--------------------------------------+-------------------+--------------------------------------+----------------+ | ID | Name | Network | Subnet | +--------------------------------------+-------------------+--------------------------------------+----------------+ | b0ae0193-2596-4fed-a9b3-45b568e85bb1 | provider-subnet02 | 5469cf87-b24b-45c7-8469-b6d96ecd6400 | 192.168.2.0/24 | | b154bd2c-6094-4b3c-84bb-2d54ae5d8e6e | provider-subnet01 | 5469cf87-b24b-45c7-8469-b6d96ecd6400 | 192.168.1.0/24 | +--------------------------------------+-------------------+--------------------------------------+----------------+
At this point the provider network is complete and virtual machines can be created on it.
2. Create a key pair on the controller node as the regular user myuser
(1). Load the regular user myuser's credentials
-
[root@openstack01 tools]# source keystone-myuser-pass.sh
(2). Generate a key pair (just press Enter at the prompt; there is no need to type a path)
-
[root@openstack01 tools]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
(3). Add the public key to the OpenStack key system
-
[root@openstack01 tools]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | 52:b3:87:3b:d8:38:81:22:69:86:0b:2c:82:e6:43:3c |
| name        | mykey                                           |
| user_id     | f86be32740d94614860933ec034adf0b                |
+-------------+-------------------------------------------------+
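To confirm the uploaded key really matches the local one, the fingerprints can be compared (the -E md5 flag requires OpenSSH 6.8 or later, which CentOS 7 satisfies):
-
# Print the local public key's MD5 fingerprint for comparison with the table above
ssh-keygen -l -E md5 -f ~/.ssh/id_rsa.pub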
(4). List the available public keys (to verify the key was added)
-
[root@openstack01 tools]# openstack keypair list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 52:b3:87:3b:d8:38:81:22:69:86:0b:2c:82:e6:43:3c |
+-------+-------------------------------------------------+
3. Add security group rules for the example project myproject on the controller node
(1). Load the admin credentials
-
[root@openstack01 tools]# source keystone-admin-pass.sh
(2). Allow ICMP (ping)
-
[root@openstack01 tools]# openstack security group rule create --proto icmp default +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | created_at | 2021-11-19T11:28:28Z | | description | | | direction | ingress | | ether_type | IPv4 | | id | 53d9cc9e-734e-4a8e-8ca6-c5b6c20d636b | | name | None | | port_range_max | None | | port_range_min | None | | project_id | 5461c6241b4743ef82b48c62efe275c0 | | protocol | icmp | | remote_group_id | None | | remote_ip_prefix | 0.0.0.0/0 | | revision_number | 0 | | security_group_id | 4a064a35-d29f-48f2-bbe2-947590727f43 | | updated_at | 2021-11-19T11:28:28Z | +-------------------+--------------------------------------+
(3). Allow secure shell (SSH) access
-
[root@openstack01 tools]# openstack security group rule create --proto tcp --dst-port 22 default +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | created_at | 2021-11-19T11:29:21Z | | description | | | direction | ingress | | ether_type | IPv4 | | id | d5cb9672-bcff-414b-89e0-e6d1dc2923c8 | | name | None | | port_range_max | 22 | | port_range_min | 22 | | project_id | 5461c6241b4743ef82b48c62efe275c0 | | protocol | tcp | | remote_group_id | None | | remote_ip_prefix | 0.0.0.0/0 | | revision_number | 0 | | security_group_id | 4a064a35-d29f-48f2-bbe2-947590727f43 | | updated_at | 2021-11-19T11:29:21Z | +-------------------+--------------------------------------+
(4). List the security groups and the related rules
-
[root@openstack01 tools]# openstack security group list
[root@openstack01 tools]# openstack security group rule list
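The rule listing can also be restricted to a single group; for example, to show only the two rules just added to default:
-
openstack security group rule list default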
4. Create VM instances on the provider network as a regular user (controller node)
(1). Controller: create flavors as the admin user
List the available flavors
-
[root@openstack01 tools]# source keystone-admin-pass.sh
[root@openstack01 tools]# openstack flavor list
Create custom flavors as the admin user
-
[root@openstack01 tools]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano +----------------------------+---------+ | Field | Value | +----------------------------+---------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 1 | | id | 0 | | name | m1.nano | | os-flavor-access:is_public | True | | properties | | | ram | 64 | | rxtx_factor | 1.0 | | swap | | | vcpus | 1 | +----------------------------+---------+ [root@openstack01 tools]# openstack flavor create --id 1 --vcpus 1 --ram 1024 --disk 50 m1.tiny +----------------------------+---------+ | Field | Value | +----------------------------+---------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 50 | | id | 1 | | name | m1.tiny | | os-flavor-access:is_public | True | | properties | | | ram | 1024 | | rxtx_factor | 1.0 | | swap | | | vcpus | 1 | +----------------------------+---------+ [root@openstack01 tools]# openstack flavor create --id 2 --vcpus 1 --ram 2048 --disk 500 m1.small +----------------------------+----------+ | Field | Value | +----------------------------+----------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 500 | | id | 2 | | name | m1.small | | os-flavor-access:is_public | True | | properties | | | ram | 2048 | | rxtx_factor | 1.0 | | swap | | | vcpus | 1 | +----------------------------+----------+ [root@openstack01 tools]# openstack flavor create --id 3 --vcpus 2 --ram 4096 --disk 500 m1.medium +----------------------------+-----------+ | Field | Value | +----------------------------+-----------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 500 | | id | 3 | | name | m1.medium | | os-flavor-access:is_public | True | | properties | | | ram | 4096 | | rxtx_factor | 1.0 | | swap | | | vcpus | 2 | +----------------------------+-----------+ [root@openstack01 tools]# openstack flavor create --id 4 --vcpus 4 --ram 8192 --disk 500 m1.large +----------------------------+----------+ | Field | Value | +----------------------------+----------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 500 | | id | 4 | | name | m1.large | | os-flavor-access:is_public | True | | properties | | | ram | 8192 | | rxtx_factor | 1.0 | | swap | | | vcpus | 4 | +----------------------------+----------+ [root@openstack01 tools]# openstack flavor create --id 5 --vcpus 8 --ram 16384 --disk 500 m1.xlarge +----------------------------+-----------+ | Field | Value | +----------------------------+-----------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 500 | | id | 5 | | name | m1.xlarge | | os-flavor-access:is_public | True | | properties | | | ram | 16384 | | rxtx_factor | 1.0 | | swap | | | vcpus | 8 | +----------------------------+-----------+ [root@openstack01 tools]# openstack flavor list +----+-----------+-------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | +----+-----------+-------+------+-----------+-------+-----------+ | 0 | m1.nano | 64 | 1 | 0 | 1 | True | | 1 | m1.tiny | 1024 | 50 | 0 | 1 | True | | 2 | m1.small | 2048 | 500 | 0 | 1 | True | | 3 | m1.medium | 4096 | 500 | 0 | 2 | True | | 4 | m1.large | 8192 | 500 | 0 | 4 | True | | 5 | m1.xlarge | 16384 | 500 | 0 | 8 | True | +----+-----------+-------+------+-----------+-------+-----------+
The following commonly used commands are listed here for reference
-
## List the available flavors
openstack flavor list
## List the available images
openstack image list
# List the available networks
openstack network list
openstack subnet list
## List the available public keys (verify the key was added)
openstack keypair list
## List the available security groups
openstack security group list
openstack security group rule list
(2). Controller: create VM instances as a regular user
-
[root@openstack01 tools]# openstack server create --flavor m1.nano --image cirros --nic net-id=provider --security-group default --key-name mykey cirros-01
[root@openstack01 tools]# openstack network list
+--------------------------------------+----------+----------------------------------------------------------------------------+
| ID                                   | Name     | Subnets                                                                    |
+--------------------------------------+----------+----------------------------------------------------------------------------+
| 934e5a00-a2ba-4a46-91d0-6821fff5bca6 | provider | 1d10ed8b-7f7b-4781-acd2-3073886e91a3, 44c72a82-5cc3-4eb3-b08d-37096fce2eed |
+--------------------------------------+----------+----------------------------------------------------------------------------+
# The ID here is used as net-id= in the next command
[root@openstack01 tools]# openstack server create --flavor m1.nano --image cirros --nic net-id=934e5a00-a2ba-4a46-91d0-6821fff5bca6 --security-group default --key-name mykey cirros-02
[root@openstack01 tools]# openstack server create --flavor m1.nano --image cirros --security-group default --key-name mykey cirros-03
Check the instance status
-
[root@openstack01 tools]# openstack server list
+--------------------------------------+-----------+--------+------------------------+--------+---------+
| ID                                   | Name      | Status | Networks               | Image  | Flavor  |
+--------------------------------------+-----------+--------+------------------------+--------+---------+
| e54fa767-a56c-4501-8cbb-732b83dd702a | cirros-03 | ACTIVE | provider=192.168.1.219 | cirros | m1.nano |
| e91ad056-8f16-439f-9605-044a4233ba2e | cirros-02 | ACTIVE | provider=192.168.1.211 | cirros | m1.nano |
| 9b4335c2-bb8b-4a0a-bc31-a67c6dd5199a | cirros-01 | ACTIVE | provider=192.168.1.214 | cirros | m1.nano |
| 1549cb7e-8337-4c40-bf93-4d0e02b3dfe2 | cirros-01 | ACTIVE | provider=192.168.1.210 | cirros | m1.nano |
+--------------------------------------+-----------+--------+------------------------+--------+---------+
Note: if an instance's status shows ERROR, delete it first and then create it again.
Example:
-
[root@openstack01 tools]# openstack server delete <instance-ID>
(3). Display the instance's noVNC address (VNC console)
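The command that produces this address is not shown above; a typical invocation, using the cirros-01 instance created earlier, is:
-
# Prints a URL of the form http://controller:6080/vnc_auto.html?token=...
openstack console url show cirros-01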
The resulting URL can be opened directly in a browser to manage the instance: the host is 192.168.51.138 and the noVNC port is 6080, so the access address is http://192.168.51.138:6080 (log in with the user admin and password 123456). The console looks like this:
It can also be reached through http://192.168.51.138/dashboard; the page looks like this:
At this point, launching a virtual machine instance has succeeded.
IX. Install the Cinder storage service (controller node)
1. Install the cinder storage service on the controller node
(1). Create the cinder database
-
[root@openstack01 tools]# cd [root@openstack01 ~]# mysql -u root -p123456 Welcome to the MariaDB monitor. Commands end with ; or \g. MariaDB [(none)]> CREATE DATABASE cinder; Query OK, 1 row affected (0.10 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder'; Query OK, 0 rows affected (0.17 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> flush privileges; Query OK, 0 rows affected (0.18 sec) MariaDB [(none)]> show databases; +--------------------+ | Database | +--------------------+ | cinder | | glance | | information_schema | | keystone | | mysql | | neutron | | nova | | nova_api | | nova_cell0 | | performance_schema | | placement | +--------------------+ 11 rows in set (0.08 sec) MariaDB [(none)]> select user,host from mysql.user; +-----------+-----------+ | user | host | +-----------+-----------+ | cinder | % | | glance | % | | keystone | % | | neutron | % | | nova | % | | placement | % | | root | 127.0.0.1 | | root | ::1 | | cinder | localhost | | glance | localhost | | keystone | localhost | | neutron | localhost | | nova | localhost | | placement | localhost | | root | localhost | +-----------+-----------+ 15 rows in set (0.01 sec) MariaDB [(none)]> exit Bye
(2). Register the cinder service in keystone
Create the cinder user in keystone
-
[root@openstack01 ~]# cd /server/tools [root@openstack01 tools]# source keystone-admin-pass.sh [root@openstack01 tools]# openstack user create --domain default --password=cinder cinder +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | bdeda5884952454f80d11633d0b3cf3a | | name | cinder | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ [root@openstack01 tools]# openstack user list +----------------------------------+-----------+ | ID | Name | +----------------------------------+-----------+ | 1912bf1a2c074889a45f4bd2193387cd | myuser | | 51942708c7454755a6be5bea9e81c3b0 | placement | | 7f6c9cef605f4b39b14771638b295a30 | neutron | | bdeda5884952454f80d11633d0b3cf3a | cinder | | dd93119899a44130b0ea8bc1aa51bb79 | glance | | e154782ba6504b25acc95e2b716e31b2 | nova | | f86be32740d94614860933ec034adf0b | admin | +----------------------------------+-----------+
In keystone, grant the cinder user the admin role and add it to the service project; the following command produces no output
-
[root@openstack01 tools]# openstack role add --project service --user cinder admin
Create the cinder service entities
-
[root@openstack01 tools]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Block Storage | | enabled | True | | id | 292d35747f484727a7b2d22e03db2167 | | name | cinderv2 | | type | volumev2 | +-------------+----------------------------------+ [root@openstack01 tools]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3 +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Block Storage | | enabled | True | | id | ccdb9fbed1194811a50934da5826099a | | name | cinderv3 | | type | volumev3 | +-------------+----------------------------------+ [root@openstack01 tools]# openstack service list +----------------------------------+-----------+-----------+ | ID | Name | Type | +----------------------------------+-----------+-----------+ | 2034ba4ebefa45f5862e63762826267f | glance | image | | 292d35747f484727a7b2d22e03db2167 | cinderv2 | volumev2 | | 517bf50f40b94aadb635b3595cca3ac3 | keystone | identity | | 568332c6ec2543bab55dad29446b4e73 | nova | compute | | 7676ab0484664d9a97d32b4d76899fdc | neutron | network | | ccdb9fbed1194811a50934da5826099a | cinderv3 | volumev3 | | e4bd96f879fe48df9d066e83eecfd9fe | placement | placement | +----------------------------------+-----------+-----------+
Create the API endpoints for the cinder service
-
[root@openstack01 tools]#openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
[root@openstack01 tools]#openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
[root@openstack01 tools]#openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
[root@openstack01 tools]#openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
[root@openstack01 tools]#openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
[root@openstack01 tools]#openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
[root@openstack01 tools]#openstack endpoint list
(3). Install the cinder packages
[root@openstack01 tools]# yum install openstack-cinder -y
(4). Quickly configure cinder
-
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:cinder@controller/cinder
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:openstack@controller
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cinder
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.51.138
[root@openstack01 tools]# openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/nova/tmp
Check the effective cinder configuration
-
[root@openstack01 tools]# egrep -v "^#|^$" /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:openstack@controller auth_strategy = keystone my_ip = 192.168.51.138 [backend] [backend_defaults] [barbican] [brcd_fabric_example] [cisco_fabric_example] [coordination] [cors] [database] connection = mysql+pymysql://cinder:cinder@controller/cinder [fc-zone-manager] [healthcheck] [key_manager] [keystone_authtoken] auth_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = cinder [matchmaker_redis] [nova] [oslo_concurrency] lock_path = /var/lib/nova/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [oslo_reports] [oslo_versionedobjects] [profiler] [sample_remote_file_source] [service_user] [ssl] [vault] [root@openstack01 tools]# grep '^[a-z]' /etc/cinder/cinder.conf transport_url = rabbit://openstack:openstack@controller auth_strategy = keystone my_ip = 192.168.51.138 connection = mysql+pymysql://cinder:cinder@controller/cinder auth_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = cinder lock_path = /var/lib/nova/tmp
(5). Sync the cinder database
-
[root@openstack01 tools]# su -s /bin/sh -c "cinder-manage db sync" cinder Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT". [root@openstack01 tools]# mysql -h192.168.51.138 -ucinder -pcinder -e "use cinder;show tables;" +----------------------------+ | Tables_in_cinder | +----------------------------+ | attachment_specs | | backup_metadata | | backups | | cgsnapshots | | clusters | | consistencygroups | | driver_initiator_data | | encryption | | group_snapshots | | group_type_projects | | group_type_specs | | group_types | | group_volume_type_mapping | | groups | | image_volume_cache_entries | | messages | | migrate_version | | quality_of_service_specs | | quota_classes | | quota_usages | | quotas | | reservations | | services | | snapshot_metadata | | snapshots | | transfers | | volume_admin_metadata | | volume_attachment | | volume_glance_metadata | | volume_metadata | | volume_type_extra_specs | | volume_type_projects | | volume_types | | volumes | | workers | +----------------------------+
(6). Edit the nova configuration file
-
[root@openstack01 tools]# openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
[root@openstack01 tools]# grep '^[a-z]' /etc/nova/nova.conf |grep os_region_name
os_region_name = RegionOne
(7). Restart the nova-api service
-
[root@openstack01 tools]# systemctl restart openstack-nova-api.service
[root@openstack01 tools]# systemctl status openstack-nova-api.service
● openstack-nova-api.service - OpenStack Nova API Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2021-11-19 22:24:29 CST; 3s ago
(8). Start the cinder storage services and enable them at boot
-
[root@openstack01 tools]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@openstack01 tools]# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
[root@openstack01 tools]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@openstack01 tools]# systemctl list-unit-files |grep openstack-cinder |grep enabled
openstack-cinder-api.service                  enabled
openstack-cinder-scheduler.service            enabled
At this point the controller-side cinder service is installed; a new Volumes entry appears in the project menu of the dashboard.
2. Install the cinder storage service on the storage node
The storage node is best deployed on a dedicated server (ideally a physical machine); for testing it can also be co-located with the controller or compute node (here it is configured on the compute node).
(1). Install the LVM packages
[root@localhost ~]# yum install lvm2 device-mapper-persistent-data -y
(2). Start the LVM metadata service and enable it at boot
-
[root@localhost ~]# systemctl start lvm2-lvmetad.service
[root@localhost ~]# systemctl status lvm2-lvmetad.service
● lvm2-lvmetad.service - LVM2 metadata daemon
   Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static; vendor preset: enabled)
   Active: active (running) since 五 2021-11-19 14:46:20 CST; 7h ago
[root@localhost ~]# systemctl enable lvm2-lvmetad.service
[root@localhost ~]# systemctl list-unit-files |grep lvm2-lvmetad |grep enabled
lvm2-lvmetad.socket                           enabled
(3). Create the LVM physical volume and volume group
Check the disk status
-
[root@localhost ~]# fdisk -l
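Besides fdisk, lsblk gives a compact overview; the newly added disk should show up as an empty, unpartitioned device (assumed to be /dev/sdb in the steps below):
-
lsblk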
Create the LVM physical volume /dev/sdb (a new disk must first be added to the VM)
*How to add a new disk to the VM:
1). Shut down the VM and click Edit virtual machine settings.
2). Select Hard Disk, click Add, choose the desired size, and finish.
3). Start the VM again. Done.
-
[root@openstack02 ~]# pvcreate /dev/sdb Physical volume "/dev/sdb" successfully created.
Create the LVM volume group cinder-volumes; the block storage service will create its logical volumes inside this group
-
[root@openstack02 ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
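A quick verification that the physical volume and the volume group exist (pvs and vgs are standard LVM reporting tools):
-
pvs
vgs cinder-volumes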
(4). Configure an LVM filter to prevent system problems
By default only OpenStack instances should access the block-storage volume group, but the underlying operating system also manages these devices and may try to associate the logical volumes with itself.
#By default the LVM scanning tool scans the whole /dev directory for block devices that contain LVM volumes. If other projects use LVM on devices such as sda or sdc, the scanner may cache those volumes too, which can prevent the underlying OS or other services from reaching their own volume groups and cause assorted problems. LVM therefore has to be configured so the scanner only examines /dev/sdb, the device holding the cinder-volumes group. The disks here are plain hand-partitioned devices, so the problem does not arise in this setup, but the configuration is demonstrated below.
-
vim /etc/lvm/lvm.conf
devices {
    filter = [ "a/sdb/", "r/.*/"]
}
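For reference, if the storage node's operating system disk also uses LVM (commonly /dev/sda), that disk must be accepted by the filter as well or the system may fail to boot; a sketch assuming an sda OS disk:
-
devices {
    # a = accept, r = reject; everything not explicitly accepted is rejected
    filter = [ "a/sda/", "a/sdb/", "r/.*/"]
}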
(5). Install and configure the cinder components on the storage node
[root@openstack02 ~]# yum install openstack-cinder targetcli python-keystone -y
(6). Quickly configure cinder on the storage node (my_ip must be set to this node's own address, 192.168.51.139 in the node plan)
-
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:cinder@controller/cinder
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:openstack@controller
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cinder
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.5.139
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
[root@openstack02 ~]# openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
Check the effective cinder configuration
-
[root@openstack02 ~]# egrep -v "^#|^$" /etc/cinder/cinder.conf [DEFAULT] transport_url = rabbit://openstack:openstack@controller auth_strategy = keystone my_ip = 192.168.5.139 enabled_backends = lvm glance_api_servers = http://controller:9292 [backend] [backend_defaults] [barbican] [brcd_fabric_example] [cisco_fabric_example] [coordination] [cors] [database] connection = mysql+pymysql://cinder:cinder@controller/cinder [fc-zone-manager] [healthcheck] [key_manager] [keystone_authtoken] www_authenticate_uri = http://controller:5000 auth_url = http://controller:5000 memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = cinder [matchmaker_redis] [nova] [oslo_concurrency] lock_path = /var/lib/cinder/tmp [oslo_messaging_amqp] [oslo_messaging_kafka] [oslo_messaging_notifications] [oslo_messaging_rabbit] [oslo_messaging_zmq] [oslo_middleware] [oslo_policy] [oslo_reports] [oslo_versionedobjects] [profiler] [sample_remote_file_source] [service_user] [ssl] [vault] [lvm] volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_group = cinder-volumes iscsi_protocol = iscsi iscsi_helper = lioadm
The effective configuration can also be checked in another way
-
[root@openstack02 ~]# grep '^[a-z]' /etc/cinder/cinder.conf
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.5.139
enabled_backends = lvm
glance_api_servers = http://controller:9292
connection = mysql+pymysql://cinder:cinder@controller/cinder
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
lock_path = /var/lib/cinder/tmp
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
(7). Start the cinder service on the storage node and enable it at boot
-
[root@openstack02 ~]# systemctl start openstack-cinder-volume.service target.service
[root@openstack02 ~]# systemctl status openstack-cinder-volume.service target.service
● openstack-cinder-volume.service - OpenStack Cinder Volume Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; disabled; vendor preset: disabled)
   Active: active (running) since 六 2021-11-20 20:27:37 CST; 5s ago
● target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; disabled; vendor preset: disabled)
   Active: active (exited) since 六 2021-11-20 20:27:38 CST; 4s ago
[root@openstack02 ~]# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@openstack02 ~]# systemctl list-unit-files |grep openstack-cinder |grep enabled
openstack-cinder-volume.service               enabled
[root@openstack02 ~]# systemctl list-unit-files |grep target.service |grep enabled
target.service
At this point the cinder installation on the storage node is complete.
3. Verify from the controller node
(1). Load the admin environment variables
-
[root@openstack01 ~]# cd /server/tools/
[root@openstack01 tools]# source keystone-admin-pass.sh
(2). Check the volume service list
-
[root@openstack01 tools]# openstack volume service list
+------------------+-------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                          | Zone | Status  | State | Updated At                 |
+------------------+-------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | openstack01.zuiyoujie.com     | nova | enabled | up    | 2021-11-20T12:30:49.000000 |
| cinder-volume    | openstack02.zuiyoujie.com@lvm | nova | enabled | up    | 2021-11-20T12:30:48.000000 |
+------------------+-------------------------------+------+---------+-------+----------------------------+
If the output above is returned, the cinder nodes were installed successfully.
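As an optional end-to-end check (the volume name here is purely illustrative), a small test volume can be created and removed:
-
openstack volume create --size 1 test-vol
openstack volume list      # the new volume's status should become 'available'
openstack volume delete test-vol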
With that, the OpenStack installation is complete.