CentOS 7 OpenStack Train (T) Deployment

Source

This walkthrough follows the official OpenStack Train installation guide.

Environment preparation

Hostname     IP                                               Disk   CPU   Memory
controller   NIC 1 ens33: 192.168.31.10; NIC 2 ens34: no IP   100G   2     8G
compute1     NIC 1 ens33: 192.168.31.11; NIC 2 ens34: no IP   100G   2     8G

Host network configuration

controller

vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=87b84e38-10cb-4882-929f-c174eccc8fbd
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.31.10
PREFIX=24
GATEWAY=192.168.31.2
DNS1=114.114.114.114
DNS2=8.8.8.8
vim /etc/sysconfig/network-scripts/ifcfg-ens34
DEVICE=ens34
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
hostnamectl set-hostname controller
vim /etc/hosts
192.168.31.10   controller
192.168.31.11   compute1
systemctl stop firewalld
systemctl disable firewalld
vim /etc/selinux/config
SELINUX=disabled
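
The SELINUX=disabled setting only takes effect after a reboot. To drop SELinux to permissive mode immediately without rebooting (a common shortcut, not part of the original steps), run the following; getenforce confirms the current mode:

setenforce 0
getenforce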

compute1

vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=87b84e38-10cb-4882-929f-c174eccc8fbd
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.31.11
PREFIX=24
GATEWAY=192.168.31.2
DNS1=114.114.114.114
DNS2=8.8.8.8
vim /etc/sysconfig/network-scripts/ifcfg-ens34
DEVICE=ens34
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
hostnamectl set-hostname compute1
vim /etc/hosts
192.168.31.10   controller
192.168.31.11   compute1
systemctl stop firewalld
systemctl disable firewalld
vim /etc/selinux/config
SELINUX=disabled

Verification

controller

ping -c 4 www.baidu.com
ping -c 4 compute1

compute1

ping -c 4 www.baidu.com
ping -c 4 controller

NTP

controller

yum install chrony
vim /etc/chrony.conf
allow   192.168.31.0/24

Leave the default server/pool entries in place so the controller synchronizes with an upstream NTP source; pointing the controller at its own address would leave it unsynchronized.
systemctl start chronyd.service
systemctl enable chronyd.service

compute1

yum install chrony
vim /etc/chrony.conf
server  192.168.31.10   iburst

Comment out all other server entries in the file.

systemctl start chronyd.service
systemctl enable chronyd.service

Verification

chronyc sources

OpenStack packages

All nodes

yum install centos-release-openstack-train

The RDO release RPM below enables the same Train repository and is mainly meant for RHEL-family systems without the CentOS extras repo; installing either one of the two is enough.

yum install https://rdoproject.org/repos/rdo-release.rpm
yum upgrade
yum install python-openstackclient
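
A quick sanity check that the client installed correctly (the exact version string will vary with the Train packages):

openstack --version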

SQL database

controller

yum install mariadb mariadb-server python2-PyMySQL
vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.31.10

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
systemctl start mariadb.service
systemctl enable mariadb.service
mysql_secure_installation
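
Using the root password chosen during mysql_secure_installation, a quick login confirms the service is reachable:

mysql -u root -p -e "SHOW DATABASES;"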

Message queue

controller

yum install rabbitmq-server
systemctl start rabbitmq-server.service
systemctl enable rabbitmq-server.service
rabbitmqctl add_user openstack 123456
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
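
To confirm the openstack user and its permissions on the default vhost were created as intended:

rabbitmqctl list_users
rabbitmqctl list_permissions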

Memcached

controller

yum install memcached python-memcached
vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.31.10,::1"
systemctl start memcached.service
systemctl enable memcached.service
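
A quick check that memcached is listening on the management address (memcached-tool ships with the memcached package):

memcached-tool 192.168.31.10:11211 stats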

Etcd

controller

yum install etcd
vim /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.31.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.31.10:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.31.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.31.10:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.31.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
systemctl start etcd
systemctl enable etcd
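
To verify etcd is serving on the advertised client URL, list the cluster members with the etcdctl client that ships with the package:

etcdctl --endpoints=http://192.168.31.10:2379 member list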

Identity service

controller

Install and configure

mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY '123456';
exit
yum install openstack-keystone httpd mod_wsgi
vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
provider = fernet
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
vim /etc/httpd/conf/httpd.conf
ServerName controller
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl start httpd.service
systemctl enable httpd.service
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

Create a domain, projects, users, and roles

openstack domain create --description "An Example Domain" example
openstack project create --domain default \
  --description "Service Project" service
openstack project create --domain default \
  --description "Demo Project" myproject
openstack user create --domain default \
  --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
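
The role assignment can be confirmed before moving on (--names prints names instead of IDs):

openstack role assignment list --user myuser --project myproject --names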

Verify operation

unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue

Create OpenStack client environment scripts

vim admin-openrc.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
vim demo-openrc.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
chmod u+x admin-openrc.sh
chmod u+x demo-openrc.sh

Image service

controller

mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY '123456';
exit
source admin-openrc.sh
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance \
  --description "OpenStack Image" image
openstack endpoint create --region RegionOne \
  image public http://controller:9292
openstack endpoint create --region RegionOne \
  image internal http://controller:9292
openstack endpoint create --region RegionOne \
  image admin http://controller:9292
yum install openstack-glance
vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:123456@controller/glance
[keystone_authtoken]
www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl start openstack-glance-api.service
systemctl enable openstack-glance-api.service

Verify operation

source admin-openrc.sh
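
The image-create call below assumes the CirrOS test image is already at /root. If it is not, download it first (URL as used in the official Train guide; requires Internet access, and wget must be installed):

wget -P /root http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
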
glance image-create --name "cirros" \
  --file /root/cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public
glance image-list

Placement service

controller

mysql -u root -p
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
  IDENTIFIED BY '123456';
exit
source admin-openrc.sh
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement \
  --description "Placement API" placement
openstack endpoint create --region RegionOne \
  placement public http://controller:8778
openstack endpoint create --region RegionOne \
  placement internal http://controller:8778
openstack endpoint create --region RegionOne \
  placement admin http://controller:8778
yum install openstack-placement-api
vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:123456@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 123456
su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd

Verify operation

source admin-openrc.sh
placement-status upgrade check
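
As an additional check beyond the original steps, the Placement API's version document can be fetched directly; version discovery does not require authentication:

curl http://controller:8778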

Compute service

controller

mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';
exit
source admin-openrc.sh
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova \
  --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller:5672/
my_ip = 192.168.31.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api
[database]
connection = mysql+pymysql://nova:123456@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
systemctl start \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
systemctl enable \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
vim /etc/httpd/conf.d/00-placement-api.conf
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
systemctl restart httpd

compute1

yum install openstack-nova-compute
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.31.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.31.10:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456
[libvirt]
virt_type = qemu
systemctl start libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
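
virt_type = qemu above is the right choice when the compute node itself is a virtual machine without nested hardware virtualization. The check from the official guide: a result of 0 means stay with qemu, 1 or more means kvm can be used instead:

egrep -c '(vmx|svm)' /proc/cpuinfo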

Add the compute node to the cell database

controller

source admin-openrc.sh
openstack compute service list --service nova-compute
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Verify operation

controller

source admin-openrc.sh
openstack compute service list
openstack catalog list
openstack image list
nova-status upgrade check

Networking service

controller

mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY '123456';
exit
source admin-openrc.sh
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron \
  --description "OpenStack Networking" network
openstack endpoint create --region RegionOne \
  network public http://controller:9696
openstack endpoint create --region RegionOne \
  network internal http://controller:9696
openstack endpoint create --region RegionOne \
  network admin http://controller:9696
yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
vim /etc/neutron/neutron.conf
[experimental]
linuxbridge = true
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]


[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]


[linux_bridge]
physical_interface_mappings = provider:ens34

[vxlan]
enable_vxlan = true
local_ip = 192.168.31.10
l2_population = true


[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
modprobe br_netfilter
sysctl -p
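
modprobe only loads br_netfilter for the current boot. To make sure the module (and with it the two bridge sysctl keys) comes back after a reboot, register it with systemd-modules-load; the same applies on compute1:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
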
vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456
vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-l3-agent.service
systemctl enable neutron-l3-agent.service

compute1

yum install openstack-neutron-linuxbridge ebtables ipset
vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[experimental]
linuxbridge = true
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]


[linux_bridge]
physical_interface_mappings = provider:ens34

[vxlan]
enable_vxlan = true
local_ip = 192.168.31.11
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
modprobe br_netfilter
sysctl -p
vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
systemctl restart openstack-nova-compute.service
systemctl start neutron-linuxbridge-agent.service
systemctl enable neutron-linuxbridge-agent.service

Verify operation

openstack network agent list

Dashboard

controller

Install and configure

yum install openstack-dashboard
vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': True,
    'enable_ha_router': False,
    'enable_ipv6': True,
    # TODO(amotoki): Drop OPENSTACK_NEUTRON_NETWORK completely from here.
    # enable_quotas has the different default value here.
    'enable_quotas': True,
    'enable_rbac_policy': True,
    'enable_router': True,

    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],

}
WEBROOT = "/dashboard"
vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
systemctl restart httpd.service memcached.service
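
With httpd restarted, the dashboard answers at the WEBROOT configured above. A quick reachability check (then log in from a browser at http://192.168.31.10/dashboard with domain Default, user admin, password 123456):

curl -I http://192.168.31.10/dashboard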
<think>好的,我需要帮助用户在CentOS 7上安装OpenStack的最新稳定版。根据用户提供的引用内容,我需要先整理现有的信息,并补充可能缺失的步骤。 首先,引用[3]提到CentOS下安装OpenStack的硬件要求,比如内存至少8GB,建议16GB,CPU四核,硬盘80GB以上,网络桥接模式。这些是安装前的必要条件,必须确保用户的环境符合。 引用[2]和[4]都涉及CentOS7安装OpenStack的教程,但用户需要的是最新稳定版。然而,引用[1]指出,从Ussuri版本开始,OpenStack不再支持Python2,而CentOS8默认使用Python3。但用户希望继续在CentOS7上安装,这可能存在兼容性问题,因为CentOS7默认的Python版本是2.7,而最新OpenStack版本可能需要Python3。不过,根据引用[4],用户手动搭建了T版(即Train版本),可能说明在CentOS7上仍可行,但需要处理Python环境。 接下来,我需要确定如何在CentOS7上安装最新稳定版。但根据OpenStack的官方文档,最新的版本可能已经不再支持CentOS7,需要确认。例如,Victoria版本可能需要CentOS8。但用户引用的资料中有在CentOS7上安装T版(Train)的例子,所以可能用户需要安装较旧的稳定版,或者找到适配CentOS7的方法。 因此,安装步骤可能包括: 1. 环境准备:硬件、网络配置,安装必要依赖。 2. 配置仓库:可能使用RDO或Packstack。 3. 安装Packstack工具。 4. 生成应答文件,调整配置(如网络设置)。 5. 运行安装命令。 6. 验证安装。 需要提醒用户CentOS7可能无法安装最新版本,建议升级到CentOS8,但若坚持使用CentOS7,可能需要选择较旧的OpenStack版本,如Train或Ussuri,并处理Python依赖问题。同时,引用[1]提到Packstack在CentOS8.2成功安装Victoria,但CentOS7可能需要不同步骤。 可能的问题:如何在CentOS7上处理Python3依赖?可能需要安装Python3.6及以上,并配置虚拟环境。此外,网络配置部分需要指导Neutron的设置,如引用[4]提到的网络服务。 最后,生成相关问题需要考虑用户可能遇到的后续问题,如版本兼容性、网络配置、故障排除等。</think>### 在 CentOS 7 上安装 OpenStack 最新稳定版指南 虽然 OpenStack 最新版本(如 *Victoria* 或 *Wallaby*)官方推荐使用 CentOS 8 或更高版本[^1],但通过手动配置仍可在 CentOS 7 上安装较新的稳定版(如 *Train* 或 *Ussuri*)。以下是具体步骤: --- #### **1. 环境准备** - **硬件要求**:内存 ≥8GB(建议 16GB)、CPU 四核、硬盘 ≥80GB、网络桥接模式[^3]。 - **系统配置**: ```bash # 关闭防火墙和 SELinux systemctl stop firewalld systemctl disable firewalld setenforce 0 sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # 安装依赖 yum install -y centos-release-openstack-train # 选择 Train 版本仓库 yum upgrade -y ``` --- #### **2. 安装 Packstack 工具** Packstack 是快速部署 OpenStack 的自动化工具。 ```bash yum install -y openstack-packstack ``` --- #### **3. 生成应答文件并配置** 生成默认应答文件并修改关键参数: ```bash packstack --gen-answer-file=answer.txt ``` 编辑 `answer.txt`: - 设置网络接口(如 `CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0`)[^4] - 配置管理员密码(`CONFIG_KEYSTONE_ADMIN_PW`) - 启用必要服务(如 Neutron、Nova 等) --- #### **4. 执行安装** ```bash packstack --answer-file=answer.txt ``` 安装完成后,会输出控制台 URL 和登录凭证。 --- #### **5. 验证安装** 访问 `http://<服务器IP>/dashboard`,使用 `admin` 和设置的密码登录 Horizon 控制台[^4]。 --- #### **注意事项** - **Python 兼容性**:CentOS 7 默认使用 Python 2.7,需手动安装 Python 3.6+ 并配置虚拟环境以支持新版本 OpenStack- **版本限制**:CentOS 7 上建议安装 *Train*(T 版)或 *Ussuri*(U 版),更高版本可能需 CentOS 8+。 ---
评论
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包

打赏作者

想看一次满天星

你的鼓励将是我创作的最大动力

¥1 ¥2 ¥4 ¥6 ¥10 ¥20
扫码支付:¥1
获取中
扫码支付

您的余额不足,请更换扫码支付或充值

打赏作者

实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值