http://blog.youkuaiyun.com/xuensong520/article/details/8897710
This article covers a multi-node OpenStack deployment in multihost mode. This mode avoids the situation where, as nodes are added, traffic grows beyond what a single network node can handle. Of course, if you just run a few VMs on a single host for experiments, or you are a small startup virtualizing five or ten machines to give VMs to an internal dev/test team, the plain multi-node model is enough: one control node, one network node, and several compute nodes. In Essex, nova-network's multihost mode nicely spreads the network node's load, and Quantum has similar functionality. This article is largely based on a blog post by a user named Geek, with some adjustments; it has been verified in my environment and is recorded here for reference only.
Environment requirements:
Management network: 172.16.0.0/16
Data network: 10.10.10.0/24
External network: 192.168.80.0/24
I use three machines here; you can scale out by adding more compute nodes.
Node Role | NICs
Control Node | eth0 (192.168.80.21), eth1 (172.16.0.51)
Compute1 Node | eth0 (192.168.80.22), eth1 (172.16.0.52), eth2 (10.10.10.52)
Compute2 Node | eth0 (192.168.80.23), eth1 (172.16.0.53), eth2 (10.10.10.53)
Control Node
Network Configuration
- cat /etc/network/interfaces
- # This file describes the network interfaces available on your system
- # and how to activate them. For more information, see interfaces(5).
- # The loopback network interface
- auto lo
- iface lo inet loopback
- # The primary network interface
- auto eth0
- iface eth0 inet static
- address 192.168.80.21
- netmask 255.255.255.0
- gateway 192.168.80.1
- dns-nameservers 8.8.8.8
- auto eth1
- iface eth1 inet static
- address 172.16.0.51
- netmask 255.255.0.0
Add the repository
Add the Grizzly repository and upgrade the system:
- cat > /etc/apt/sources.list.d/grizzly.list << _ESXU_
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
- _ESXU_
- apt-get install ubuntu-cloud-keyring
- apt-get update
- apt-get upgrade
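Before continuing, you can optionally confirm that the Ubuntu Cloud Archive is now the preferred source; apt-cache policy is a standard apt command, and the exact output depends on your mirrors:
- apt-cache policy keystone | head   # the candidate version should come from precise-updates/grizzly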
MySQL & RabbitMQ
- Install MySQL:
- apt-get install mysql-server python-mysqldb
Use sed to change the bind address in /etc/mysql/my.cnf from localhost (127.0.0.1) to 0.0.0.0.
Also disable MySQL hostname resolution (skip-name-resolve), which prevents connection errors and slow remote connections.
Then restart the MySQL service:
- sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
- sed -i '44 i skip-name-resolve' /etc/mysql/my.cnf
- /etc/init.d/mysql restart
Install RabbitMQ:
- apt-get install rabbitmq-server
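A quick sanity check, assuming net-tools is installed, is to confirm that mysqld now listens on all interfaces instead of only on loopback:
- netstat -lntp | grep 3306   # should show 0.0.0.0:3306, not 127.0.0.1:3306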
NTP
- Install the NTP service
Configure NTP so that the compute nodes synchronize with the control node:
- apt-get install ntp
- sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
- service ntp restart
- Enable IP forwarding:
- vim /etc/sysctl.conf
- net.ipv4.ip_forward=1
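The sysctl.conf setting only takes effect after a reload or reboot; sysctl -p re-reads the file, and the proc entry lets you verify it:
- sysctl -p
- cat /proc/sys/net/ipv4/ip_forward   # should print 1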
Keystone
- Install Keystone:
- apt-get install keystone
Create the keystone database in MySQL and grant privileges:
- mysql -uroot -p
- create database keystone;
- grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
- quit;
Modify the /etc/keystone/keystone.conf configuration file:
- admin_token = www.longgeek.com
- debug = True
- verbose = True
- [sql]
- connection = mysql://keystone:keystone@172.16.0.51/keystone # must be placed under the [sql] section
- [signing]
- token_format = UUID
Start keystone, then sync the database:
- /etc/init.d/keystone restart
- keystone-manage db_sync
Import data with a script:
Use the script to create the users, roles, tenants, services, and endpoints. Download the script:
- wget http://192.168.80.8/ubuntu/keystone.sh
Modify the script contents:
- ADMIN_PASSWORD=${ADMIN_PASSWORD:-password}
- SERVICE_PASSWORD=${SERVICE_PASSWORD:-password}
- export SERVICE_TOKEN="admin"
- export SERVICE_ENDPOINT="http://172.16.0.51:35357/v2.0"
- SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
- KEYSTONE_REGION=RegionOne
- # If you need to expose services on a second IP, uncomment KEYSTONE_WLAN_IP and
- # SWIFT_WLAN_IP; likewise, in a multi-node architecture with a Swift service,
- # set the corresponding IP addresses in the variables below
- KEYSTONE_IP="172.16.0.51"
- #KEYSTONE_WLAN_IP="172.16.0.51"
- SWIFT_IP="172.16.0.51"
- #SWIFT_WLAN_IP="172.16.0.51"
- COMPUTE_IP=$KEYSTONE_IP
- EC2_IP=$KEYSTONE_IP
- GLANCE_IP=$KEYSTONE_IP
- VOLUME_IP=$KEYSTONE_IP
- QUANTUM_IP=$KEYSTONE_IP
Changing the admin password here is enough; you can also adjust the IP addresses to match your environment.
Run the script:
- sh keystone.sh
Set environment variables matching the values used in keystone.sh:
- cat > /etc/profile << _ESXU_
- export OS_TENANT_NAME=admin # if this is set to service, the other services will fail to authenticate
- export OS_USERNAME=admin
- export OS_PASSWORD=password
- export OS_AUTH_URL=http://172.16.0.51:5000/v2.0/
- export OS_REGION_NAME=RegionOne
- export SERVICE_TOKEN=admin
- export SERVICE_ENDPOINT=http://172.16.0.51:35357/v2.0/
- _ESXU_
- # source /etc/profile # make the environment variables take effect
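With the variables loaded, you can check that Keystone answers and that the script created the expected accounts; user-list and endpoint-list are standard commands in the Grizzly-era keystone client:
- keystone user-list       # should list admin plus the service users (glance, nova, quantum, cinder)
- keystone endpoint-list   # should show one endpoint per service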
Glance
- Install Glance:
- apt-get install glance
- Create a glance database and grant privileges:
- mysql -uroot -p
- create database glance;
- grant all on glance.* to 'glance'@'%' identified by 'glance';
- Update the /etc/glance/glance-api.conf file:
- verbose = True
- debug = True
- sql_connection = mysql://glance:glance@172.16.0.51/glance
- workers = 4
- registry_host = 172.16.0.51
- notifier_strategy = rabbit
- rabbit_host = 172.16.0.51
- rabbit_userid = guest
- rabbit_password = guest
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = glance
- admin_password = password
- [paste_deploy]
- config_file = /etc/glance/glance-api-paste.ini
- flavor = keystone
- Update the /etc/glance/glance-registry.conf file:
- verbose = True
- debug = True
- sql_connection = mysql://glance:glance@172.16.0.51/glance
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = glance
- admin_password = password
- [paste_deploy]
- config_file = /etc/glance/glance-registry-paste.ini
- flavor = keystone
Start the glance-api and glance-registry services and sync the database:
- /etc/init.d/glance-api restart
- /etc/init.d/glance-registry restart
- glance-manage version_control 0
- glance-manage db_sync
Test the Glance installation by uploading an image. Download the CirrOS image and upload it:
- wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
- glance image-create --name='cirros' --public --container-format=ovf --disk-format=qcow2 < ./cirros-0.3.0-x86_64-disk.img
- List the uploaded images:
- glance image-list
Quantum
- Install the Quantum server and OpenVSwitch plugin packages:
- apt-get install quantum-server quantum-plugin-openvswitch
- Create the quantum database and grant the user access:
- mysql -uroot -p
- create database quantum;
- grant all on quantum.* to 'quantum'@'%' identified by 'quantum';
- quit;
Edit the /etc/quantum/quantum.conf file:
- [DEFAULT]
- debug = True
- verbose = True
- state_path = /var/lib/quantum
- lock_path = $state_path/lock
- bind_host = 0.0.0.0
- bind_port = 9696
- core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
- api_paste_config = /etc/quantum/api-paste.ini
- control_exchange = quantum
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- rabbit_port = 5672
- rabbit_userid = guest
- notification_driver = quantum.openstack.common.notifier.rpc_notifier
- default_notification_level = INFO
- notification_topics = notifications
- [QUOTAS]
- [DEFAULT_SERVICETYPE]
- [AGENT]
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- signing_dir = /var/lib/quantum/keystone-signing
- Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:
- [DATABASE]
- sql_connection = mysql://quantum:quantum@172.16.0.51/quantum
- reconnect_interval = 2
- [OVS]
- tenant_network_type = gre
- enable_tunneling = True
- tunnel_id_ranges = 1:1000
- [AGENT]
- polling_interval = 2
- [SECURITYGROUP]
Start the quantum service:
- /etc/init.d/quantum-server restart
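No agents are running yet, but you can already verify that quantum-server comes up and authenticates against Keystone; net-list is part of the Grizzly quantum client:
- quantum net-list   # an empty list (rather than an error) means the server and its auth are working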
Nova
- Install the Nova packages:
- apt-get install nova-api nova-cert novnc nova-conductor nova-consoleauth nova-scheduler nova-novncproxy
Create the nova database and grant the nova user access to it:
- mysql -uroot -p
- create database nova;
- grant all on nova.* to 'nova'@'%' identified by 'nova';
- quit;
Modify the authtoken section in /etc/nova/api-paste.ini:
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
- # Workaround for https://bugs.launchpad.net/nova/+bug/1154809
- auth_version = v2.0
Modify /etc/nova/nova.conf so it looks similar to this:
- [DEFAULT]
- # LOGS/STATE
- debug = False
- verbose = True
- logdir = /var/log/nova
- state_path = /var/lib/nova
- lock_path = /var/lock/nova
- rootwrap_config = /etc/nova/rootwrap.conf
- dhcpbridge = /usr/bin/nova-dhcpbridge
- # SCHEDULER
- compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
- ## VOLUMES
- volume_api_class = nova.volume.cinder.API
- # DATABASE
- sql_connection = mysql://nova:nova@172.16.0.51/nova
- # COMPUTE
- libvirt_type = kvm
- compute_driver = libvirt.LibvirtDriver
- instance_name_template = instance-%08x
- api_paste_config = /etc/nova/api-paste.ini
- # COMPUTE/APIS: if you have separate configs for separate services
- # this flag is required for both nova-api and nova-compute
- allow_resize_to_same_host = True
- # APIS
- osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
- ec2_dmz_host = 172.16.0.51
- s3_host = 172.16.0.51
- metadata_host = 172.16.0.51
- metadata_listen = 0.0.0.0
- # RABBITMQ
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- # GLANCE
- image_service = nova.image.glance.GlanceImageService
- glance_api_servers = 172.16.0.51:9292
- # NETWORK
- network_api_class = nova.network.quantumv2.api.API
- quantum_url = http://172.16.0.51:9696
- quantum_auth_strategy = keystone
- quantum_admin_tenant_name = service
- quantum_admin_username = quantum
- quantum_admin_password = password
- quantum_admin_auth_url = http://172.16.0.51:35357/v2.0
- service_quantum_metadata_proxy = True
- libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
- linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
- firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
- # NOVNC CONSOLE
- novncproxy_base_url = http://192.168.80.21:6080/vnc_auto.html
- # Change vncserver_proxyclient_address and vncserver_listen to match each compute host
- vncserver_proxyclient_address = 192.168.80.21
- vncserver_listen = 0.0.0.0
- # AUTHENTICATION
- auth_strategy = keystone
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
Sync the database and start the nova services:
- nova-manage db sync
- cd /etc/init.d/; for i in $( ls nova-* ); do sudo /etc/init.d/$i restart; done
Check that the nova services show a smiley face:
- # nova-manage service list
- Binary Host Zone Status State Updated_At
- nova-cert control internal enabled :-) 2013-05-07 07:09:56
- nova-conductor control internal enabled :-) 2013-05-07 07:09:55
- nova-consoleauth control internal enabled :-) 2013-05-07 07:09:56
- nova-scheduler control internal enabled :-) 2013-05-07 07:09:56
Horizon
- Install horizon:
- apt-get install openstack-dashboard memcached
Point the dashboard and memcached at the management IP:
- sed -i 's/127.0.0.1/172.16.0.51/g' /etc/openstack-dashboard/local_settings.py
- sed -i 's/127.0.0.1/172.16.0.51/g' /etc/memcached.conf
If you don't like the Ubuntu theme, you can disable it and use the default interface:
- vim /etc/openstack-dashboard/local_settings.py
- DEBUG = True
- # Enable the Ubuntu theme if it is present.
- #try:
- # from ubuntu_theme import *
- #except ImportError:
- # pass
Reload apache2 and memcached:
- /etc/init.d/apache2 restart
- /etc/init.d/memcached restart
You can now log in through a browser at http://192.168.80.21/horizon with admin:password.
All Compute Nodes
Network Configuration
Every compute node is installed the same way; just substitute the IP addresses:
- # cat /etc/network/interfaces
- # This file describes the network interfaces available on your system
- # and how to activate them. For more information, see interfaces(5).
- # The loopback network interface
- auto lo
- iface lo inet loopback
- # The primary network interface
- auto eth0
- iface eth0 inet manual
- up ifconfig $IFACE 0.0.0.0 up
- up ip link set $IFACE promisc on
- down ip link set $IFACE promisc off
- down ifconfig $IFACE down
- auto br-ex
- iface br-ex inet static
- address 192.168.80.22
- netmask 255.255.255.0
- gateway 192.168.80.1
- dns-nameservers 8.8.8.8
- #address 192.168.80.22
- #netmask 255.255.255.0
- #gateway 192.168.80.1
- #dns-nameservers 8.8.8.8
- auto eth1
- iface eth1 inet static
- address 172.16.0.52
- netmask 255.255.0.0
- auto eth2
- iface eth2 inet static
- address 10.10.10.52
- netmask 255.255.255.0
Add the repository
- Add the Grizzly repository and upgrade the system:
- cat > /etc/apt/sources.list.d/grizzly.list << _GEEK_
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
- _GEEK_
- apt-get update
- apt-get upgrade
- apt-get install ubuntu-cloud-keyring
- Set up NTP (syncing to the control node) and enable IP forwarding:
- # apt-get install ntp
- # sed -i 's/server ntp.ubuntu.com/server 172.16.0.51/g' /etc/ntp.conf
- # service ntp restart
- # vim /etc/sysctl.conf
- net.ipv4.ip_forward=1
- # sysctl -p
OpenVSwitch
- Install openVSwitch:
The packages must be installed in the following order:
- apt-get install openvswitch-datapath-source
- module-assistant auto-install openvswitch-datapath
- apt-get install openvswitch-switch openvswitch-brcompat
Configure ovs-brcompatd to start:
- sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
- echo 'brcompat' >> /etc/modules
Start openvswitch-switch:
- /etc/init.d/openvswitch-switch restart
- * ovs-brcompatd is not running # brcompatd did not start; try starting it again.
- * ovs-vswitchd is not running
- * ovsdb-server is not running
- * Inserting openvswitch module
- * /etc/openvswitch/conf.db does not exist
- * Creating empty database /etc/openvswitch/conf.db
- * Starting ovsdb-server
- * Configuring Open vSwitch system IDs
- * Starting ovs-vswitchd
- * Enabling gre with iptables
Restart again until ovs-brcompatd, ovs-vswitchd, and ovsdb-server are all running:
- # /etc/init.d/openvswitch-switch restart
- # lsmod | grep brcompat
- brcompat 13512 0
- openvswitch 84038 7 brcompat
If it still will not start, use the following command:
- /etc/init.d/openvswitch-switch force-reload-kmod
Create the bridges:
- ovs-vsctl add-br br-int # br-int is the integration bridge for the VMs
- ovs-vsctl add-br br-ex # br-ex is used to reach the VMs from the external network
- ovs-vsctl add-port br-ex eth0 # bridge br-ex onto eth0
- Restarting the network may produce:
- /etc/init.d/networking restart
- RTNETLINK answers: File exists
- Failed to bring up br-ex.
br-ex may end up with an IP address but no gateway or DNS; configure those by hand, or reboot the machine. After a reboot everything is normal.
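If you would rather fix it by hand than reboot, re-adding the address and default route with iproute2 looks like this (a sketch using this guide's Compute1 addresses):
- ip addr add 192.168.80.22/24 dev br-ex
- ip link set br-ex up
- ip route add default via 192.168.80.1
- echo 'nameserver 8.8.8.8' > /etc/resolv.conf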
- Check the bridges:
- ovs-vsctl list-br
- ovs-vsctl show
- Install the Quantum openvswitch agent, metadata agent, l3 agent, and dhcp agent:
- apt-get install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent
- Edit the /etc/quantum/quantum.conf file:
- [DEFAULT]
- debug = True
- verbose = True
- state_path = /var/lib/quantum
- lock_path = $state_path/lock
- bind_host = 0.0.0.0
- bind_port = 9696
- core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
- api_paste_config = /etc/quantum/api-paste.ini
- control_exchange = quantum
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- rabbit_port = 5672
- rabbit_userid = guest
- notification_driver = quantum.openstack.common.notifier.rpc_notifier
- default_notification_level = INFO
- notification_topics = notifications
- [QUOTAS]
- [DEFAULT_SERVICETYPE]
- [AGENT]
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- signing_dir = /var/lib/quantum/keystone-signing
Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:
- [DATABASE]
- sql_connection = mysql://quantum:quantum@172.16.0.51/quantum
- reconnect_interval = 2
- [OVS]
- enable_tunneling = True
- tenant_network_type = gre
- tunnel_id_ranges = 1:1000
- local_ip = 10.10.10.52
- integration_bridge = br-int
- tunnel_bridge = br-tun
- [AGENT]
- polling_interval = 2
- [SECURITYGROUP]
Edit /etc/quantum/l3_agent.ini:
- [DEFAULT]
- debug = True
- verbose = True
- use_namespaces = True
- external_network_bridge = br-ex
- signing_dir = /var/cache/quantum
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- auth_url = http://172.16.0.51:35357/v2.0
- l3_agent_manager = quantum.agent.l3_agent.L3NATAgentWithStateReport
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
- enable_multi_host = True
Edit /etc/quantum/dhcp_agent.ini:
- [DEFAULT]
- debug = True
- verbose = True
- use_namespaces = True
- signing_dir = /var/cache/quantum
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- auth_url = http://172.16.0.51:35357/v2.0
- dhcp_agent_manager = quantum.agent.dhcp_agent.DhcpAgentWithStateReport
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- state_path = /var/lib/quantum
- interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
- dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
- enable_multi_host = True
- enable_isolated_metadata = False
Edit /etc/quantum/metadata_agent.ini:
- [DEFAULT]
- debug = True
- auth_url = http://172.16.0.51:35357/v2.0
- auth_region = RegionOne
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- state_path = /var/lib/quantum
- nova_metadata_ip = 172.16.0.51
- nova_metadata_port = 8775
Start all quantum services:
- service quantum-plugin-openvswitch-agent restart
- service quantum-dhcp-agent restart
- service quantum-l3-agent restart
- service quantum-metadata-agent restart
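Back on the control node, you can confirm the agents registered themselves; in Grizzly, quantum agent-list marks live agents with a smiley in the alive column:
- quantum agent-list   # expect an Open vSwitch, L3, DHCP, and metadata agent per compute host, all :-)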
Cinder
- Install the packages Cinder needs:
- apt-get install cinder-api cinder-common cinder-scheduler cinder-volume python-cinderclient iscsitarget open-iscsi iscsitarget-dkms
Configure iscsi and start the services:
- sed -i 's/false/true/g' /etc/default/iscsitarget
- /etc/init.d/iscsitarget restart
- /etc/init.d/open-iscsi restart
Create the cinder database and grant the user access:
- mysql -uroot -p
- create database cinder;
- grant all on cinder.* to 'cinder'@'%' identified by 'cinder';
- quit;
Modify /etc/cinder/cinder.conf:
- cat /etc/cinder/cinder.conf
- [DEFAULT]
- # LOG/STATE
- verbose = True
- debug = False
- iscsi_helper = ietadm
- auth_strategy = keystone
- volume_group = cinder-volumes
- volume_name_template = volume-%s
- state_path = /var/lib/cinder
- volumes_dir = /var/lib/cinder/volumes
- rootwrap_config = /etc/cinder/rootwrap.conf
- api_paste_config = /etc/cinder/api-paste.ini
- # RPC
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- rpc_backend = cinder.openstack.common.rpc.impl_kombu
- # DATABASE
- sql_connection = mysql://cinder:cinder@172.16.0.51/cinder
- # API
- osapi_volume_extension = cinder.api.contrib.standard_extensions
Modify the [filter:authtoken] section at the end of /etc/cinder/api-paste.ini:
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- service_protocol = http
- service_host = 172.16.0.51
- service_port = 5000
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = cinder
- admin_password = password
- signing_dir = /var/lib/cinder
- Create a volume group named cinder-volumes:
Create an ordinary partition. I use sdb here, with a single primary partition spanning the whole disk:
- # fdisk /dev/sdb
- n
- p
- 1
- Enter
- Enter
- t
- 8e
- w
- # partx -a /dev/sdb
- # pvcreate /dev/sdb1
- # vgcreate cinder-volumes /dev/sdb1
- # vgs
- VG #PV #LV #SN Attr VSize VFree
- cinder-volumes 1 7 0 wz--n- 1.64t 75.50g
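If you prefer a non-interactive equivalent of the fdisk keystrokes above, parted can script the same layout (assuming /dev/sdb is empty and expendable):
- parted -s /dev/sdb mklabel msdos
- parted -s /dev/sdb mkpart primary 0% 100%
- parted -s /dev/sdb set 1 lvm on
- partx -a /dev/sdb
- pvcreate /dev/sdb1 && vgcreate cinder-volumes /dev/sdb1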
- Sync the database and restart the services:
- cinder-manage db sync
- /etc/init.d/cinder-api restart
- /etc/init.d/cinder-scheduler restart
- /etc/init.d/cinder-volume restart
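To exercise the whole volume path, you can create and delete a small test volume from any machine with the credentials loaded; create/list/delete are standard Grizzly cinder client commands, and the volume name here is an arbitrary example:
- cinder create --display-name test-vol 1   # a 1 GB test volume
- cinder list                               # status should go from creating to available
- lvs cinder-volumes                        # a volume-<uuid> LV should appear
- cinder delete test-vol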
Finally, we need to run:
- /etc/init.d/iscsitarget stop
Nova
- Install nova-compute:
- apt-get install nova-compute
Modify the authtoken section in /etc/nova/api-paste.ini:
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
- # Workaround for https://bugs.launchpad.net/nova/+bug/1154809
- auth_version = v2.0
Modify /etc/nova/nova.conf so it looks similar to this:
- cat /etc/nova/nova.conf
- [DEFAULT]
- dhcpbridge_flagfile=/etc/nova/nova.conf
- dhcpbridge=/usr/bin/nova-dhcpbridge
- logdir=/var/log/nova
- state_path=/var/lib/nova
- lock_path=/var/lock/nova
- force_dhcp_release=True
- iscsi_helper=tgtadm
- #iscsi_helper = ietadm
- libvirt_use_virtio_for_bridges=True
- connection_type=libvirt
- root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
- verbose=True
- ec2_private_dns_show_ip=True
- #api_paste_config=/etc/nova/api-paste.ini
- volumes_path=/var/lib/nova/volumes
- enabled_apis=ec2,osapi_compute,metadata
- # SCHEDULER
- compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
- ## VOLUMES
- volume_api_class = nova.volume.cinder.API
- osapi_volume_listen_port=5900
- iscsi_ip_prefix=192.168.80
- iscsi_ip_address=192.168.80.22
- # DATABASE
- sql_connection = mysql://nova:nova@172.16.0.51/nova
- # COMPUTE
- libvirt_type = kvm
- compute_driver = libvirt.LibvirtDriver
- instance_name_template = instance-%08x
- api_paste_config = /etc/nova/api-paste.ini
- # COMPUTE/APIS: if you have separate configs for separate services
- # this flag is required for both nova-api and nova-compute
- allow_resize_to_same_host = True
- # APIS
- osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
- ec2_dmz_host = 172.16.0.51
- s3_host = 172.16.0.51
- metadata_host=172.16.0.51
- metadata_listen=0.0.0.0
- # RABBITMQ
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- # GLANCE
- image_service = nova.image.glance.GlanceImageService
- glance_api_servers = 172.16.0.51:9292
- # NETWORK
- network_api_class = nova.network.quantumv2.api.API
- quantum_url = http://172.16.0.51:9696
- quantum_auth_strategy = keystone
- quantum_admin_tenant_name = service
- quantum_admin_username = quantum
- quantum_admin_password = password
- quantum_admin_auth_url = http://172.16.0.51:35357/v2.0
- service_quantum_metadata_proxy = True
- libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
- linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
- firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
- # NOVNC CONSOLE
- novncproxy_base_url = http://192.168.80.21:6080/vnc_auto.html
- # Change vncserver_proxyclient_address and vncserver_listen to match each compute host
- vncserver_proxyclient_address = 192.168.80.22
- vncserver_listen = 192.168.80.22
- # AUTHENTICATION
- auth_strategy = keystone
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
Start the nova-compute service:
- service nova-compute restart
Check the nova services' smiley faces:
The compute nodes have joined:
- # nova-manage service list
- Binary Host Zone Status State Updated_At
- nova-cert control internal enabled :-) 2013-05-07 07:09:56
- nova-conductor control internal enabled :-) 2013-05-07 07:09:55
- nova-consoleauth control internal enabled :-) 2013-05-07 07:09:56
- nova-scheduler control internal enabled :-) 2013-05-07 07:09:56
- nova-compute node-02 nova enabled :-) 2013-05-07 07:10:03
- nova-compute node-03 nova enabled :-) 2013-05-07 07:10:03
At this point the compute nodes have joined successfully, and we can try launching an instance to verify everything works. I will cover Horizon usage in detail in a later article; the network creation part in particular is a little involved and deserves some care.
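Since network creation is the tricky part, here is a minimal sketch of what it looks like with the Grizzly quantum CLI, using this guide's external subnet; the names ext_net, demo_net, and demo_router are arbitrary examples:
- quantum net-create ext_net --router:external=True
- quantum subnet-create ext_net 192.168.80.0/24 --name ext_subnet --allocation-pool start=192.168.80.100,end=192.168.80.199 --gateway 192.168.80.1 --disable-dhcp
- quantum net-create demo_net
- quantum subnet-create demo_net 10.0.0.0/24 --name demo_subnet
- quantum router-create demo_router
- quantum router-interface-add demo_router demo_subnet
- quantum router-gateway-set demo_router ext_net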
本文主要介绍multihost模式下的openstack多节点部署,这种模式可以避免随着节点的增多,流量膨胀一个网络节点无法满足需求的情况。当然如果只是自己搞一台host,在上面虚拟几台VM做实验,或者小型创业公司,通过在五台十台机器上的虚拟化,创建一些VM给公司内部开发测试团队使用,那么使用多节点模式即可,即一个控制节点、一个网络节点、多个计算节点。在Essxi中nova-network的mutihost可以很好分担网络节点的负载,同样在Quantum中也有类似的功能,本文主要参考一位网友Geek的blog所写,做了适当修改,在我的环境里验证没有问题,这里记录下来也仅供大家参考!
环境需求:
管理网络: 172.16.0.0/16
业务网络: 10.10.10.0/24
外部网络: 192.168.80.0/24
我这里使用的是三台机器,你也可以横向扩展,增加计算节点的数量。
Node Role: | NICs |
Control Node: | eth0 (192.168.80.21), eth1 (172.16.0.51) |
Compute1 Node: | eth0(192.168.80.22),eth1(172.16.0.52),eth2(10.10.10.52) |
Compute2 Node: | eth0(192.168.80.23),eth1(172.16.0.53),eth2(10.10.10.53) |
控制节点
网络设置
- cat /etc/network/interfaces
- # This file describes the network interfaces available on your system
- # and how to activate them. For more information, see interfaces(5).
- # The loopback network interface
- auto lo
- iface lo inet loopback
- # The primary network interface
- auto eth0
- iface eth0 inet static
- address 192.168.80.21
- netmask 255.255.255.0
- gateway 192.168.80.1
- dns-nameservers 8.8.8.8
- auto eth1
- iface eth1 inet static
- address 172.16.0.51
- netmask 255.255.0.0
添加源
添加 Grizzly 源,并升级系统
- cat > /etc/apt/sources.list.d/grizzly.list << _ESXU_
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
- _ESXU_
- apt-get install ubuntu-cloud-keyring
- apt-get update
- apt-get upgrade
MySQL & RabbitMQ
- 安装 MySQL:
- apt-get install mysql-server python-mysqldb
- 使用sed编辑 /etc/mysql/my.cnf文件的更改绑定地址(0.0.0.0)从本地主机(127.0.0.1)
禁止 mysql 做域名解析,防止连接 mysql出现错误和远程连接 mysql慢的现象。
然后重新启动mysql服务.
安装 RabbitMQ:
- sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
- sed -i '44 i skip-name-resolve' /etc/mysql/my.cnf
- /etc/init.d/mysql restart
apt-get install rabbitmq-server
NTP
- 安装 NTP 服务
配置 NTP服务器计算节点控制器节点之间的同步:
- apt-get install ntp
- sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
- service ntp restart
- 开启路由转发
- vim /etc/sysctl.conf
- net.ipv4.ip_forward=1
Keystone
- 安装 Keystone
- apt-get install keystone
在 mysql 里创建 keystone 数据库并授权:
- mysql -uroot -p
- create database keystone;
- grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
- quit;
修改 /etc/keystone/keystone.conf 配置文件:
- admin_token = www.longgeek.com
- debug = True
- verbose = True
- [sql]
- connection = mysql://keystone:keystone@172.16.0.51/keystone #必须写到 [sql] 下面
- [signing]
- token_format = UUID
启动 keystone 然后同步数据库
- /etc/init.d/keystone restart
- keystone-manage db_sync
用脚本导入数据:
用脚本来创建 user、role、tenant、service、endpoint,下载脚本:
- wget http://192.168.80.8/ubuntu/keystone.sh
修改脚本内容:
- ADMIN_PASSWORD=${ADMIN_PASSWORD:-password}
- SERVICE_PASSWORD=${SERVICE_PASSWORD:-password}
- export SERVICE_TOKEN="admin"
- export SERVICE_ENDPOINT="http://172.16.0.51:35357/v2.0"
- SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
- KEYSTONE_REGION=RegionOne
- # If you need to provide the service, please to open keystone_wlan_ip and swift_wlan_ip
- # of course you are a multi-node architecture, and swift service
- # corresponding ip address set the following variables
- KEYSTONE_IP="172.16.0.51"
- #KEYSTONE_WLAN_IP="172.16.0.51"
- SWIFT_IP="172.16.0.51"
- #SWIFT_WLAN_IP="172.16.0.51"
- COMPUTE_IP=$KEYSTONE_IP
- EC2_IP=$KEYSTONE_IP
- GLANCE_IP=$KEYSTONE_IP
- VOLUME_IP=$KEYSTONE_IP
- QUANTUM_IP=$KEYSTONE_IP
在这里更改你的管理员密码即可,IP地址也可根据自己环境更改下!
执行脚本:
设置环境变量:
- sh keystone.sh
这里变量对于 keystone.sh 里的设置:
- cat > /etc/profile << _ESXU_
- export OS_TENANT_NAME=admin #这里如果设置为 service 其它服务会无法验证.
- export OS_USERNAME=admin
- export OS_PASSWORD=password
- export OS_AUTH_URL=http://172.16.0.51:5000/v2.0/
- export OS_REGION_NAME=RegionOne
- export SERVICE_TOKEN=admin
- export SERVICE_ENDPOINT=http://172.16.0.51:35357/v2.0/
- _ESXU_
- # source /root/profile #使环境变量生效
Glance
- 安装 Glance
- apt-get install glance
- 创建一个 glance 数据库并授权:
- mysql -uroot -p
- create database glance;
- grant all on glance.* to 'glance'@'%' identified by 'glance';
- 更新 /etc/glance/glance-api.conf 文件:
- verbose = True
- debug = True
- sql_connection = mysql://glance:glance@172.16.0.51/glance
- workers = 4
- registry_host = 172.16.0.51
- notifier_strategy = rabbit
- rabbit_host = 172.16.0.51
- rabbit_userid = guest
- rabbit_password = guest
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = glance
- admin_password = password
- [paste_deploy]
- config_file = /etc/glance/glance-api-paste.ini
- flavor = keystone
- 更新 /etc/glance/glance-registry.conf 文件:
- verbose = True
- debug = True
- sql_connection = mysql://glance:glance@172.16.0.51/glance
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = glance
- admin_password = password
- [paste_deploy]
- config_file = /etc/glance/glance-registry-paste.ini
- flavor = keystone
启动 glance-api 和 glance-registry 服务并同步到数据库:
- /etc/init.d/glance-api restart
- /etc/init.d/glance-registry restart
- glance-manage version_control 0
- glance-manage db_sync
测试 glance 的安装,上传一个镜像。下载 Cirros 镜像并上传:
- wget https://launchpad.net/cirros/trunk/0.3.0/+download/
- cirros-0.3.0-x86_64-disk.img
- glance image-create --name='cirros' --public --container-format=ovf --disk-format=qcow2 < ./cirros-0.3.0-x86_64-disk.img
- 查看上传的镜像:
- glance image-list
Quantum
- 安装 Quantum server和 OpenVSwitch包:
- apt-get install quantum-server quantum-plugin-openvswitch
- 创建 quantum 数据库并授权用户访问:
编辑 /etc/quantum/quantum.conf文件
- mysql -uroot -p
- create database quantum;
- grant all on quantum.* to 'quantum'@'%' identified by 'quantum';
- quit;
- [DEFAULT]
- debug = True
- verbose = True
- state_path = /var/lib/quantum
- lock_path = $state_path/lock
- bind_host = 0.0.0.0
- bind_port = 9696
- core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
- api_paste_config = /etc/quantum/api-paste.ini
- control_exchange = quantum
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- rabbit_port = 5672
- rabbit_userid = guest
- notification_driver = quantum.openstack.common.notifier.rpc_notifier
- default_notification_level = INFO
- notification_topics = notifications
- [QUOTAS]
- [DEFAULT_SERVICETYPE]
- [AGENT]
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- signing_dir = /var/lib/quantum/keystone-signing
- 编辑 OVS 插件配置文件 /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:
- [DATABASE]
- sql_connection = mysql://quantum:quantum@172.16.0.51/quantum
- reconnect_interval = 2
- [OVS]
- tenant_network_type = gre
- enable_tunneling = True
- tunnel_id_ranges = 1:1000
- [AGENT]
- polling_interval = 2
- [SECURITYGROUP]
启动 quantum 服务:
- /etc/init.d/quantum-server restart
Nova
- 安装 Nova 相关软件包:
- apt-get install nova-api nova-cert novnc nova-conductor nova-consoleauth nova-scheduler nova-novncproxy
创建 nova 数据库,授权 nova 用户访问它:
- mysql -uroot -p
- create database nova;
- grant all on nova.* to 'nova'@'%' identified by 'nova';
- quit;
在 /etc/nova/api-paste.ini 中修改 autotoken 验证部分:
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
- # Workaround for https://bugs.launchpad.net/nova/+bug/1154809
- auth_version = v2.0
修改 /etc/nova/nova.conf, 类似下面这样:
- [DEFAULT]
- # LOGS/STATE
- debug = False
- verbose = True
- logdir = /var/log/nova
- state_path = /var/lib/nova
- lock_path = /var/lock/nova
- rootwrap_config = /etc/nova/rootwrap.conf
- dhcpbridge = /usr/bin/nova-dhcpbridge
- # SCHEDULER
- compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
- ## VOLUMES
- volume_api_class = nova.volume.cinder.API
- # DATABASE
- sql_connection = mysql://nova:nova@172.16.0.51/nova
- # COMPUTE
- libvirt_type = kvm
- compute_driver = libvirt.LibvirtDriver
- instance_name_template = instance-%08x
- api_paste_config = /etc/nova/api-paste.ini
- # COMPUTE/APIS: if you have separate configs for separate services
- # this flag is required for both nova-api and nova-compute
- allow_resize_to_same_host = True
- # APIS
- osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
- ec2_dmz_host = 172.16.0.51
- s3_host = 172.16.0.51
- metadata_host = 172.16.0.51
- metadata_listen = 0.0.0.0
- # RABBITMQ
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- # GLANCE
- image_service = nova.image.glance.GlanceImageService
- glance_api_servers = 172.16.0.51:9292
- # NETWORK
- network_api_class = nova.network.quantumv2.api.API
- quantum_url = http://172.16.0.51:9696
- quantum_auth_strategy = keystone
- quantum_admin_tenant_name = service
- quantum_admin_username = quantum
- quantum_admin_password = password
- quantum_admin_auth_url = http://172.16.0.51:35357/v2.0
- service_quantum_metadata_proxy = True
- libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
- linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
- firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
- # NOVNC CONSOLE
- novncproxy_base_url = http://192.168.8.51:6080/vnc_auto.html
- # Change vncserver_proxyclient_address and vncserver_listen to match each compute host
- vncserver_proxyclient_address = 192.168.8.51
- vncserver_listen = 0.0.0.0
- # AUTHENTICATION
- auth_strategy = keystone
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
同步数据库,启动 nova 相关服务:
- nova-manage db sync
- cd /etc/init.d/; for i in $( ls nova-* ); do sudo /etc/init.d/$i restart; done
检查 nova 相关服务笑脸
Horizon
- nova-manage service list
- # nova-manage service list
- Binary Host Zone Status State Updated_At
- nova-cert control internal enabled :-) 2013-05-07 07:09:56
- nova-conductor control internal enabled :-) 2013-05-07 07:09:55
- nova-consoleauth control internal enabled :-) 2013-05-07 07:09:56
- nova-scheduler control internal enabled :-) 2013-05-07 07:09:56
- 安装 horizon:
- apt-get install openstack-dashboard memcached
如果你不喜欢 Ubuntu 的主题,可以禁用它,使用默认界面:
重新加载 apache2 和 memcache:
- sed -i 's/127.0.0.1/172.16.0.51/g' /etc/openstack-dashboard/local_settings.py
- sed -i 's/127.0.0.1/172.16.0.51/g' /etc/memcached.conf
- vim /etc/openstack-dashboard/local_settings.py
- DEBUG = True
- # Enable the Ubuntu theme if it is present.
- #try:
- # from ubuntu_theme import *
- #except ImportError:
- # pass
- /etc/init.d/apache2 restart
- /etc/init.d/memcached restart
现在可以通过浏览器 http://192.168.8.51/horizon 使用 admin:password 来登录界面。
所有计算节点
网络设置
所有计算节点安装方法相同,只需要替换 IP 地址即可:
- # cat /etc/network/interfaces
- # This file describes the network interfaces available on your system
- # and how to activate them. For more information, see interfaces(5).
- # The loopback network interface
- auto lo
- iface lo inet loopback
- # The primary network interface
- auto eth0
- iface eth0 inet manual
- up ifconfig $IFACE 0.0.0.0 up
- up ip link set $IFACE promisc on
- down ip link set $IFACE promisc off
- down ifconfig $IFACE down
- auto br-ex
- iface br-ex inet static
- address 192.168.80.22
- netmask 255.255.255.0
- gateway 192.168.80.1
- dns-nameservers 8.8.8.8
- #address 192.168.80.22
- #netmask 255.255.255.0
- #gateway 192.168.80.1
- #dns-nameservers 8.8.8.8
- auto eth1
- iface eth1 inet static
- address 172.16.0.52
- netmask 255.255.0.0
- auto eth2
- iface eth2 inet static
- address 10.10.10.52
- netmask 255.255.255.0
添加源
- 添加 Grizzly 源,并升级系统
- cat > /etc/apt/sources.list.d/grizzly.list << _GEEK_
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
- _GEEK_
- apt-get update
- apt-get upgrade
- apt-get install ubuntu-cloud-keyring
- 设置 ntp 和开启路由转发:
OpenVSwitch
- # apt-get install ntp
- # sed -i 's/server ntp.ubuntu.com/server 172.16.0.51/g' /etc/ntp.conf
- # service ntp restart
- # vim /etc/sysctl.conf
- net.ipv4.ip_forward=1
- # sysctl -p
- 安装 openVSwitch:
必须按照下面安装顺序:
设置 ovs-brcompatd 启动:
- apt-get install openvswitch-datapath-source
- module-assistant auto-install openvswitch-datapath
- apt-get install openvswitch-switch openvswitch-brcompat
启动 openvswitch-switch:
- sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
- echo 'brcompat' >> /etc/modules
- /etc/init.d/openvswitch-switch restart
- * ovs-brcompatd is not running # brcompatd 没有启动,尝试再次启动.
- * ovs-vswitchd is not running
- * ovsdb-server is not running
- * Inserting openvswitch module
- * /etc/openvswitch/conf.db does not exist
- * Creating empty database /etc/openvswitch/conf.db
- * Starting ovsdb-server
- * Configuring Open vSwitch system IDs
- * Starting ovs-vswitchd
- * Enabling gre with iptables
再次启动,直到 ovs-brcompatd、ovs-vswitchd、ovsdb-server等服务都启动:
如果还是启动不了的话,用下面命令:
- # /etc/init.d/openvswitch-switch restart
- # lsmod | grep brcompat
- brcompat 13512 0
- openvswitch 84038 7 brcompat
创建网桥:
- /etc/init.d/openvswitch-switch force-reload-kmod
- ovs-vsctl add-br br-int # br-int 用于 vm 整合
- ovs-vsctl add-br br-ex # br-ex 用于从互联网上访问 vm
- ovs-vsctl add-port br-ex eth0 # br-ex 桥接到 eth0
- 重启网卡可能会出现:
- /etc/init.d/networking restart
- RTNETLINK answers: File exists
- Failed to bring up br-ex.
br-ex 可能有 ip 地址,但没有网关和 DNS,需要手工配置一下,或者重启机器. 重启机器后就正常了
- 查看桥接的网络
- ovs-vsctl list-br
- ovs-vsctl show
- 安装 Quantum openvswitch agent, metadata-agent l3 agent 和 dhcp agent:
- apt-get install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent
- 编辑 /etc/quantum/quantum.conf 文件:
编辑 OVS 插件配置文件 /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:
- [DEFAULT]
- debug = True
- verbose = True
- state_path = /var/lib/quantum
- lock_path = $state_path/lock
- bind_host = 0.0.0.0
- bind_port = 9696
- core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
- api_paste_config = /etc/quantum/api-paste.ini
- control_exchange = quantum
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- rabbit_port = 5672
- rabbit_userid = guest
- notification_driver = quantum.openstack.common.notifier.rpc_notifier
- default_notification_level = INFO
- notification_topics = notifications
- [QUOTAS]
- [DEFAULT_SERVICETYPE]
- [AGENT]
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- signing_dir = /var/lib/quantum/keystone-signing
- [DATABASE]
- sql_connection = mysql://quantum:quantum@172.16.0.51/quantum
- reconnect_interval = 2
- [OVS]
- enable_tunneling = True
- tenant_network_type = gre
- tunnel_id_ranges = 1:1000
- local_ip = 10.10.10.52
- integration_bridge = br-int
- tunnel_bridge = br-tun
- [AGENT]
- polling_interval = 2
- [SECURITYGROUP]
编辑 /etc/quantum/l3_agent.ini:
- [DEFAULT]
- debug = True
- verbose = True
- use_namespaces = True
- external_network_bridge = br-ex
- signing_dir = /var/cache/quantum
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- auth_url = http://172.16.0.51:35357/v2.0
- l3_agent_manager = quantum.agent.l3_agent.L3NATAgentWithStateReport
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
- enable_multi_host = True
编辑 /etc/quantum/dhcp_agent.ini:
- [DEFAULT]
- debug = True
- verbose = True
- use_namespaces = True
- signing_dir = /var/cache/quantum
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- auth_url = http://172.16.0.51:35357/v2.0
- dhcp_agent_manager = quantum.agent.dhcp_agent.DhcpAgentWithStateReport
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- state_path = /var/lib/quantum
- interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
- dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
- enable_multi_host = True
- enable_isolated_metadata = False
编辑 /etc/quantum/metadata_agent.ini:
- [DEFAULT]
- debug = True
- auth_url = http://172.16.0.51:35357/v2.0
- auth_region = RegionOne
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- state_path = /var/lib/quantum
- nova_metadata_ip = 172.16.0.51
- nova_metadata_port = 8775
启动 quantum 所有服务:
- service quantum-plugin-openvswitch-agent restart
- service quantum-dhcp-agent restart
- service quantum-l3-agent restart
- service quantum-metadata-agent restart
- 安装 Cinder 需要的包:
配置 iscsi 并启动服务:
- apt-get install cinder-api cinder-common cinder-scheduler cinder-volume python-cinderclient iscsitarget open-iscsi iscsitarget-dkms
- sed -i 's/false/true/g' /etc/default/iscsitarget
- /etc/init.d/iscsitarget restart
- /etc/init.d/open-iscsi restart
创建 cinder 数据库并授权用户访问:
- mysql -uroot -p
- create database cinder;
- grant all on cinder.* to 'cinder'@'%' identified by 'cinder';
- quit;
修改 /etc/cinder/cinder.conf:
- cat /etc/cinder/cinder.conf
- [DEFAULT]
- # LOG/STATE
- verbose = True
- debug = False
- iscsi_helper = ietadm
- auth_strategy = keystone
- volume_group = cinder-volumes
- volume_name_template = volume-%s
- state_path = /var/lib/cinder
- volumes_dir = /var/lib/cinder/volumes
- rootwrap_config = /etc/cinder/rootwrap.conf
- api_paste_config = /etc/cinder/api-paste.ini
- # RPC
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- rpc_backend = cinder.openstack.common.rpc.impl_kombu
- # DATABASE
- sql_connection = mysql://cinder:cinder@172.16.0.51/cinder
- # API
- osapi_volume_extension = cinder.api.contrib.standard_extensions
修改 /etc/cinder/api-paste.ini 文件末尾 [filter:authtoken] 字段 :
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- service_protocol = http
- service_host = 172.16.0.51
- service_port = 5000
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = cinder
- admin_password = password
- signing_dir = /var/lib/cinder
- 创建一个卷组,命名为 cinder-volumes:
创建一个普通分区,我这里用的sdb,创建了一个主分区,大小为所有空间
- # fdisk /dev/sdb
- n
- p
- 1
- Enter
- Enter
- t
- 8e
- w
- # partx -a /dev/sdb
- # pvcreate /dev/sdb1
- # vgcreate cinder-volumes /dev/sdb1
- # vgs
- VG #PV #LV #SN Attr VSize VFree
- cinder-volumes 1 7 0 wz--n- 1.64t 75.50g
- 同步数据库并重启服务:
- cinder-manage db sync
- /etc/init.d/cinder-api restart
- /etc/init.d/cinder-scheduler restart
- /etc/init.d/cinder-volume restart
最后,我们需要执行:
- /etc/init.d/iscsitarget stop
具体请看这里
Nova
- 安装 nova-compute:
在 /etc/nova/api-paste.ini 中修改 autotoken 验证部分:
- apt-get install nova-compute
修改 /etc/nova/nova.conf,类似下面这样:
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
- # Workaround for https://bugs.launchpad.net/nova/+bug/1154809
- auth_version = v2.0
启动 nova-compute 服务:
- cat /etc/nova/nova.conf
- [DEFAULT]
- dhcpbridge_flagfile=/etc/nova/nova.conf
- dhcpbridge=/usr/bin/nova-dhcpbridge
- logdir=/var/log/nova
- state_path=/var/lib/nova
- lock_path=/var/lock/nova
- force_dhcp_release=True
- iscsi_helper=tgtadm
- #iscsi_helper = ietadm
- libvirt_use_virtio_for_bridges=True
- connection_type=libvirt
- root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
- verbose=True
- ec2_private_dns_show_ip=True
- #api_paste_config=/etc/nova/api-paste.ini
- volumes_path=/var/lib/nova/volumes
- enabled_apis=ec2,osapi_compute,metadata
- # SCHEDULER
- compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
- ## VOLUMES
- volume_api_class = nova.volume.cinder.API
- osapi_volume_listen_port=5900
- iscsi_ip_prefix=192.168.80
- iscsi_ip_address=192.168.80.22
- # DATABASE
- sql_connection = mysql://nova:nova@172.16.0.51/nova
- # COMPUTE
- libvirt_type = kvm
- compute_driver = libvirt.LibvirtDriver
- instance_name_template = instance-%08x
- api_paste_config = /etc/nova/api-paste.ini
- # COMPUTE/APIS: if you have separate configs for separate services
- # this flag is required for both nova-api and nova-compute
- allow_resize_to_same_host = True
- # APIS
- osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
- ec2_dmz_host = 172.16.0.51
- s3_host = 172.16.0.51
- metadata_host=172.16.0.51
- metadata_listen=0.0.0.0
- # RABBITMQ
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- # GLANCE
- image_service = nova.image.glance.GlanceImageService
- glance_api_servers = 172.16.0.51:9292
- # NETWORK
- network_api_class = nova.network.quantumv2.api.API
- quantum_url = http://172.16.0.51:9696
- quantum_auth_strategy = keystone
- quantum_admin_tenant_name = service
- quantum_admin_username = quantum
- quantum_admin_password = password
- quantum_admin_auth_url = http://172.16.0.51:35357/v2.0
- service_quantum_metadata_proxy = True
- libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
- linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
- firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
- # NOVNC CONSOLE
- novncproxy_base_url = http://192.168.80.21:6080/vnc_auto.html
- # Change vncserver_proxyclient_address and vncserver_listen to match each compute host
- vncserver_proxyclient_address = 192.168.80.22
- vncserver_listen = 192.168.80.22
- # AUTHENTICATION
- auth_strategy = keystone
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
- service nova-compute restart
检查 nova 相关服务笑脸:
发现 compute 节点已经加入:
- # nova-manage service list
- Binary Host Zone Status State Updated_At
- nova-cert control internal enabled :-) 2013-05-07 07:09:56
- nova-conductor control internal enabled :-) 2013-05-07 07:09:55
- nova-consoleauth control internal enabled :-) 2013-05-07 07:09:56
- nova-scheduler control internal enabled :-) 2013-05-07 07:09:56
- nova-compute node-02 nova enabled :-) 2013-05-07 07:10:03
- nova-compute node-03 nova enabled :-) 2013-05-07 07:10:03
到这里,计算节点就成功加入了,我们可以尝试新建节点验证是否成功,具体horizon的使用方法,后面再写文章详细介绍,主要就是网络的创建这块稍微有点复杂,需要稍微注意下!
本文主要介绍multihost模式下的openstack多节点部署,这种模式可以避免随着节点的增多,流量膨胀一个网络节点无法满足需求的情况。当然如果只是自己搞一台host,在上面虚拟几台VM做实验,或者小型创业公司,通过在五台十台机器上的虚拟化,创建一些VM给公司内部开发测试团队使用,那么使用多节点模式即可,即一个控制节点、一个网络节点、多个计算节点。在Essxi中nova-network的mutihost可以很好分担网络节点的负载,同样在Quantum中也有类似的功能,本文主要参考一位网友Geek的blog所写,做了适当修改,在我的环境里验证没有问题,这里记录下来也仅供大家参考!
环境需求:
管理网络: 172.16.0.0/16
业务网络: 10.10.10.0/24
外部网络: 192.168.80.0/24
我这里使用的是三台机器,你也可以横向扩展,增加计算节点的数量。
Node Role: | NICs |
Control Node: | eth0 (192.168.80.21), eth1 (172.16.0.51) |
Compute1 Node: | eth0(192.168.80.22),eth1(172.16.0.52),eth2(10.10.10.52) |
Compute2 Node: | eth0(192.168.80.23),eth1(172.16.0.53),eth2(10.10.10.53) |
控制节点
网络设置
- cat /etc/network/interfaces
- # This file describes the network interfaces available on your system
- # and how to activate them. For more information, see interfaces(5).
- # The loopback network interface
- auto lo
- iface lo inet loopback
- # The primary network interface
- auto eth0
- iface eth0 inet static
- address 192.168.80.21
- netmask 255.255.255.0
- gateway 192.168.80.1
- dns-nameservers 8.8.8.8
- auto eth1
- iface eth1 inet static
- address 172.16.0.51
- netmask 255.255.0.0
添加源
添加 Grizzly 源,并升级系统
- cat > /etc/apt/sources.list.d/grizzly.list << _ESXU_
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
- _ESXU_
- apt-get install ubuntu-cloud-keyring
- apt-get update
- apt-get upgrade
MySQL & RabbitMQ
- 安装 MySQL:
- apt-get install mysql-server python-mysqldb
- 使用sed编辑 /etc/mysql/my.cnf文件的更改绑定地址(0.0.0.0)从本地主机(127.0.0.1)
禁止 mysql 做域名解析,防止连接 mysql出现错误和远程连接 mysql慢的现象。
然后重新启动mysql服务.
安装 RabbitMQ:
- sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
- sed -i '44 i skip-name-resolve' /etc/mysql/my.cnf
- /etc/init.d/mysql restart
apt-get install rabbitmq-server
NTP
- 安装 NTP 服务
配置 NTP服务器计算节点控制器节点之间的同步:
- apt-get install ntp
- sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
- service ntp restart
- 开启路由转发
- vim /etc/sysctl.conf
- net.ipv4.ip_forward=1
Keystone
- 安装 Keystone
- apt-get install keystone
在 mysql 里创建 keystone 数据库并授权:
- mysql -uroot -p
- create database keystone;
- grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
- quit;
修改 /etc/keystone/keystone.conf 配置文件:
- admin_token = www.longgeek.com
- debug = True
- verbose = True
- [sql]
- connection = mysql://keystone:keystone@172.16.0.51/keystone #必须写到 [sql] 下面
- [signing]
- token_format = UUID
启动 keystone 然后同步数据库
- /etc/init.d/keystone restart
- keystone-manage db_sync
用脚本导入数据:
用脚本来创建 user、role、tenant、service、endpoint,下载脚本:
- wget http://192.168.80.8/ubuntu/keystone.sh
修改脚本内容:
- ADMIN_PASSWORD=${ADMIN_PASSWORD:-password}
- SERVICE_PASSWORD=${SERVICE_PASSWORD:-password}
- export SERVICE_TOKEN="admin"
- export SERVICE_ENDPOINT="http://172.16.0.51:35357/v2.0"
- SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
- KEYSTONE_REGION=RegionOne
- # If you need to provide the service, please to open keystone_wlan_ip and swift_wlan_ip
- # of course you are a multi-node architecture, and swift service
- # corresponding ip address set the following variables
- KEYSTONE_IP="172.16.0.51"
- #KEYSTONE_WLAN_IP="172.16.0.51"
- SWIFT_IP="172.16.0.51"
- #SWIFT_WLAN_IP="172.16.0.51"
- COMPUTE_IP=$KEYSTONE_IP
- EC2_IP=$KEYSTONE_IP
- GLANCE_IP=$KEYSTONE_IP
- VOLUME_IP=$KEYSTONE_IP
- QUANTUM_IP=$KEYSTONE_IP
在这里更改你的管理员密码即可,IP地址也可根据自己环境更改下!
执行脚本:
设置环境变量:
- sh keystone.sh
这里变量对于 keystone.sh 里的设置:
- cat > /etc/profile << _ESXU_
- export OS_TENANT_NAME=admin #这里如果设置为 service 其它服务会无法验证.
- export OS_USERNAME=admin
- export OS_PASSWORD=password
- export OS_AUTH_URL=http://172.16.0.51:5000/v2.0/
- export OS_REGION_NAME=RegionOne
- export SERVICE_TOKEN=admin
- export SERVICE_ENDPOINT=http://172.16.0.51:35357/v2.0/
- _ESXU_
- # source /root/profile #使环境变量生效
Glance
- 安装 Glance
- apt-get install glance
- 创建一个 glance 数据库并授权:
- mysql -uroot -p
- create database glance;
- grant all on glance.* to 'glance'@'%' identified by 'glance';
- 更新 /etc/glance/glance-api.conf 文件:
- verbose = True
- debug = True
- sql_connection = mysql://glance:glance@172.16.0.51/glance
- workers = 4
- registry_host = 172.16.0.51
- notifier_strategy = rabbit
- rabbit_host = 172.16.0.51
- rabbit_userid = guest
- rabbit_password = guest
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = glance
- admin_password = password
- [paste_deploy]
- config_file = /etc/glance/glance-api-paste.ini
- flavor = keystone
- 更新 /etc/glance/glance-registry.conf 文件:
- verbose = True
- debug = True
- sql_connection = mysql://glance:glance@172.16.0.51/glance
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = glance
- admin_password = password
- [paste_deploy]
- config_file = /etc/glance/glance-registry-paste.ini
- flavor = keystone
启动 glance-api 和 glance-registry 服务并同步到数据库:
- /etc/init.d/glance-api restart
- /etc/init.d/glance-registry restart
- glance-manage version_control 0
- glance-manage db_sync
测试 glance 的安装,上传一个镜像。下载 Cirros 镜像并上传:
- wget https://launchpad.net/cirros/trunk/0.3.0/+download/
- cirros-0.3.0-x86_64-disk.img
- glance image-create --name='cirros' --public --container-format=ovf --disk-format=qcow2 < ./cirros-0.3.0-x86_64-disk.img
- 查看上传的镜像:
- glance image-list
Quantum
- 安装 Quantum server和 OpenVSwitch包:
- apt-get install quantum-server quantum-plugin-openvswitch
- 创建 quantum 数据库并授权用户访问:
编辑 /etc/quantum/quantum.conf文件
- mysql -uroot -p
- create database quantum;
- grant all on quantum.* to 'quantum'@'%' identified by 'quantum';
- quit;
- [DEFAULT]
- debug = True
- verbose = True
- state_path = /var/lib/quantum
- lock_path = $state_path/lock
- bind_host = 0.0.0.0
- bind_port = 9696
- core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
- api_paste_config = /etc/quantum/api-paste.ini
- control_exchange = quantum
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- rabbit_port = 5672
- rabbit_userid = guest
- notification_driver = quantum.openstack.common.notifier.rpc_notifier
- default_notification_level = INFO
- notification_topics = notifications
- [QUOTAS]
- [DEFAULT_SERVICETYPE]
- [AGENT]
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- signing_dir = /var/lib/quantum/keystone-signing
- 编辑 OVS 插件配置文件 /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:
- [DATABASE]
- sql_connection = mysql://quantum:quantum@172.16.0.51/quantum
- reconnect_interval = 2
- [OVS]
- tenant_network_type = gre
- enable_tunneling = True
- tunnel_id_ranges = 1:1000
- [AGENT]
- polling_interval = 2
- [SECURITYGROUP]
启动 quantum 服务:
- /etc/init.d/quantum-server restart
Nova
- 安装 Nova 相关软件包:
- apt-get install nova-api nova-cert novnc nova-conductor nova-consoleauth nova-scheduler nova-novncproxy
创建 nova 数据库,授权 nova 用户访问它:
- mysql -uroot -p
- create database nova;
- grant all on nova.* to 'nova'@'%' identified by 'nova';
- quit;
在 /etc/nova/api-paste.ini 中修改 autotoken 验证部分:
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
- # Workaround for https://bugs.launchpad.net/nova/+bug/1154809
- auth_version = v2.0
修改 /etc/nova/nova.conf, 类似下面这样:
- [DEFAULT]
- # LOGS/STATE
- debug = False
- verbose = True
- logdir = /var/log/nova
- state_path = /var/lib/nova
- lock_path = /var/lock/nova
- rootwrap_config = /etc/nova/rootwrap.conf
- dhcpbridge = /usr/bin/nova-dhcpbridge
- # SCHEDULER
- compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
- ## VOLUMES
- volume_api_class = nova.volume.cinder.API
- # DATABASE
- sql_connection = mysql://nova:nova@172.16.0.51/nova
- # COMPUTE
- libvirt_type = kvm
- compute_driver = libvirt.LibvirtDriver
- instance_name_template = instance-%08x
- api_paste_config = /etc/nova/api-paste.ini
- # COMPUTE/APIS: if you have separate configs for separate services
- # this flag is required for both nova-api and nova-compute
- allow_resize_to_same_host = True
- # APIS
- osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
- ec2_dmz_host = 172.16.0.51
- s3_host = 172.16.0.51
- metadata_host = 172.16.0.51
- metadata_listen = 0.0.0.0
- # RABBITMQ
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- # GLANCE
- image_service = nova.image.glance.GlanceImageService
- glance_api_servers = 172.16.0.51:9292
- # NETWORK
- network_api_class = nova.network.quantumv2.api.API
- quantum_url = http://172.16.0.51:9696
- quantum_auth_strategy = keystone
- quantum_admin_tenant_name = service
- quantum_admin_username = quantum
- quantum_admin_password = password
- quantum_admin_auth_url = http://172.16.0.51:35357/v2.0
- service_quantum_metadata_proxy = True
- libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
- linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
- firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
- # NOVNC CONSOLE
- novncproxy_base_url = http://192.168.8.51:6080/vnc_auto.html
- # Change vncserver_proxyclient_address and vncserver_listen to match each compute host
- vncserver_proxyclient_address = 192.168.8.51
- vncserver_listen = 0.0.0.0
- # AUTHENTICATION
- auth_strategy = keystone
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
同步数据库,启动 nova 相关服务:
- nova-manage db sync
- cd /etc/init.d/; for i in $( ls nova-* ); do sudo /etc/init.d/$i restart; done
检查 nova 相关服务笑脸
Horizon
- nova-manage service list
- # nova-manage service list
- Binary Host Zone Status State Updated_At
- nova-cert control internal enabled :-) 2013-05-07 07:09:56
- nova-conductor control internal enabled :-) 2013-05-07 07:09:55
- nova-consoleauth control internal enabled :-) 2013-05-07 07:09:56
- nova-scheduler control internal enabled :-) 2013-05-07 07:09:56
- 安装 horizon:
- apt-get install openstack-dashboard memcached
如果你不喜欢 Ubuntu 的主题,可以禁用它,使用默认界面:
重新加载 apache2 和 memcache:
- sed -i 's/127.0.0.1/172.16.0.51/g' /etc/openstack-dashboard/local_settings.py
- sed -i 's/127.0.0.1/172.16.0.51/g' /etc/memcached.conf
- vim /etc/openstack-dashboard/local_settings.py
- DEBUG = True
- # Enable the Ubuntu theme if it is present.
- #try:
- # from ubuntu_theme import *
- #except ImportError:
- # pass
- /etc/init.d/apache2 restart
- /etc/init.d/memcached restart
现在可以通过浏览器 http://192.168.8.51/horizon 使用 admin:password 来登录界面。
所有计算节点
网络设置
所有计算节点安装方法相同,只需要替换 IP 地址即可:
- # cat /etc/network/interfaces
- # This file describes the network interfaces available on your system
- # and how to activate them. For more information, see interfaces(5).
- # The loopback network interface
- auto lo
- iface lo inet loopback
- # The primary network interface
- auto eth0
- iface eth0 inet manual
- up ifconfig $IFACE 0.0.0.0 up
- up ip link set $IFACE promisc on
- down ip link set $IFACE promisc off
- down ifconfig $IFACE down
- auto br-ex
- iface br-ex inet static
- address 192.168.80.22
- netmask 255.255.255.0
- gateway 192.168.80.1
- dns-nameservers 8.8.8.8
- auto eth1
- iface eth1 inet static
- address 172.16.0.52
- netmask 255.255.0.0
- auto eth2
- iface eth2 inet static
- address 10.10.10.52
- netmask 255.255.255.0
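Note that eth0 is set to manual and promiscuous mode because it will be attached to br-ex, which is only created in the OpenVSwitch step below, so br-ex cannot come up yet. If you lose external connectivity in the meantime, a stopgap (assuming console access rather than SSH over eth0) is to address eth0 directly:
- ifconfig eth0 192.168.80.22 netmask 255.255.255.0 up
- route add default gw 192.168.80.1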
Add the Repository
- Add the Grizzly repository and upgrade the system:
- cat > /etc/apt/sources.list.d/grizzly.list << _GEEK_
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
- deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
- _GEEK_
- apt-get install ubuntu-cloud-keyring
- apt-get update
- apt-get upgrade
- Configure NTP (syncing to the control node) and enable IP forwarding:
- # apt-get install ntp
- # sed -i 's/server ntp.ubuntu.com/server 172.16.0.51/g' /etc/ntp.conf
- # service ntp restart
- # vim /etc/sysctl.conf
- net.ipv4.ip_forward=1
- # sysctl -p
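A quick sanity check that time sync and forwarding took effect (ntpq ships with the ntp package):
- # ntpq -p                      # 172.16.0.51 should be listed as a peer
- # sysctl net.ipv4.ip_forward   # should print net.ipv4.ip_forward = 1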
OpenVSwitch
- Install openVSwitch; the packages must be installed in the following order:
- apt-get install openvswitch-datapath-source
- module-assistant auto-install openvswitch-datapath
- apt-get install openvswitch-switch openvswitch-brcompat
Enable brcompat at startup, then start openvswitch-switch:
- sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
- echo 'brcompat' >> /etc/modules
- /etc/init.d/openvswitch-switch restart
- * ovs-brcompatd is not running # brcompatd did not start; try starting it again.
- * ovs-vswitchd is not running
- * ovsdb-server is not running
- * Inserting openvswitch module
- * /etc/openvswitch/conf.db does not exist
- * Creating empty database /etc/openvswitch/conf.db
- * Starting ovsdb-server
- * Configuring Open vSwitch system IDs
- * Starting ovs-vswitchd
- * Enabling gre with iptables
Start it again, repeating until ovs-brcompatd, ovs-vswitchd, and ovsdb-server are all running:
- # /etc/init.d/openvswitch-switch restart
- # lsmod | grep brcompat
- brcompat 13512 0
- openvswitch 84038 7 brcompat
If they still will not start, use the following command:
- /etc/init.d/openvswitch-switch force-reload-kmod
Create the bridges:
- ovs-vsctl add-br br-int # br-int is used for VM integration
- ovs-vsctl add-br br-ex # br-ex is used to reach VMs from the external network
- ovs-vsctl add-port br-ex eth0 # bridge br-ex to eth0
- Restarting the network may produce:
- /etc/init.d/networking restart
- RTNETLINK answers: File exists
- Failed to bring up br-ex.
br-ex may end up with an IP address but no gateway or DNS; configure those by hand, or simply reboot the machine. After a reboot everything comes up normally.
- View the bridged networks:
- ovs-vsctl list-br
- ovs-vsctl show
- Install the Quantum openvswitch agent, metadata agent, l3 agent, and dhcp agent:
- apt-get install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent
- Edit the /etc/quantum/quantum.conf file:
- [DEFAULT]
- debug = True
- verbose = True
- state_path = /var/lib/quantum
- lock_path = $state_path/lock
- bind_host = 0.0.0.0
- bind_port = 9696
- core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
- api_paste_config = /etc/quantum/api-paste.ini
- control_exchange = quantum
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- rabbit_port = 5672
- rabbit_userid = guest
- notification_driver = quantum.openstack.common.notifier.rpc_notifier
- default_notification_level = INFO
- notification_topics = notifications
- [QUOTAS]
- [DEFAULT_SERVICETYPE]
- [AGENT]
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- signing_dir = /var/lib/quantum/keystone-signing
Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:
- [DATABASE]
- sql_connection = mysql://quantum:quantum@172.16.0.51/quantum
- reconnect_interval = 2
- [OVS]
- enable_tunneling = True
- tenant_network_type = gre
- tunnel_id_ranges = 1:1000
- local_ip = 10.10.10.52
- integration_bridge = br-int
- tunnel_bridge = br-tun
- [AGENT]
- polling_interval = 2
- [SECURITYGROUP]
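The only per-node value in this plugin file is local_ip, which must be each node's own tunnel (eth2) address; on Compute2, for example, it becomes 10.10.10.53:
- sed -i 's/local_ip = 10.10.10.52/local_ip = 10.10.10.53/' /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini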
Edit /etc/quantum/l3_agent.ini:
- [DEFAULT]
- debug = True
- verbose = True
- use_namespaces = True
- external_network_bridge = br-ex
- signing_dir = /var/cache/quantum
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- auth_url = http://172.16.0.51:35357/v2.0
- l3_agent_manager = quantum.agent.l3_agent.L3NATAgentWithStateReport
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
- enable_multi_host = True
Edit /etc/quantum/dhcp_agent.ini:
- [DEFAULT]
- debug = True
- verbose = True
- use_namespaces = True
- signing_dir = /var/cache/quantum
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- auth_url = http://172.16.0.51:35357/v2.0
- dhcp_agent_manager = quantum.agent.dhcp_agent.DhcpAgentWithStateReport
- root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
- state_path = /var/lib/quantum
- interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
- dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
- enable_multi_host = True
- enable_isolated_metadata = False
Edit /etc/quantum/metadata_agent.ini:
- [DEFAULT]
- debug = True
- auth_url = http://172.16.0.51:35357/v2.0
- auth_region = RegionOne
- admin_tenant_name = service
- admin_user = quantum
- admin_password = password
- state_path = /var/lib/quantum
- nova_metadata_ip = 172.16.0.51
- nova_metadata_port = 8775
Start all quantum services:
- service quantum-plugin-openvswitch-agent restart
- service quantum-dhcp-agent restart
- service quantum-l3-agent restart
- service quantum-metadata-agent restart
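To confirm that the agents on every compute node have registered (this is the point of multi-host: each node runs its own L3 and DHCP agents), list them from the control node; a sketch, assuming the admin credentials created by keystone.sh are exported:
- export OS_USERNAME=admin OS_PASSWORD=password OS_TENANT_NAME=admin
- export OS_AUTH_URL=http://172.16.0.51:5000/v2.0
- quantum agent-list   # each compute node should show its OVS, DHCP, and L3 agents alive (:-))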
Cinder
- Install the packages Cinder needs:
- apt-get install cinder-api cinder-common cinder-scheduler cinder-volume python-cinderclient iscsitarget open-iscsi iscsitarget-dkms
Configure iscsi and start its services:
- sed -i 's/false/true/g' /etc/default/iscsitarget
- /etc/init.d/iscsitarget restart
- /etc/init.d/open-iscsi restart
Create the cinder database on the control node's MySQL and grant the user access:
- mysql -uroot -p
- create database cinder;
- grant all on cinder.* to 'cinder'@'%' identified by 'cinder';
- quit;
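cinder-volume on each compute node must reach this database over the management network, so it is worth confirming the grant works remotely; a quick check from a compute node:
- mysql -h 172.16.0.51 -ucinder -pcinder cinder -e 'select 1;'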
Modify /etc/cinder/cinder.conf:
- cat /etc/cinder/cinder.conf
- [DEFAULT]
- # LOG/STATE
- verbose = True
- debug = False
- iscsi_helper = ietadm
- auth_strategy = keystone
- volume_group = cinder-volumes
- volume_name_template = volume-%s
- state_path = /var/lib/cinder
- volumes_dir = /var/lib/cinder/volumes
- rootwrap_config = /etc/cinder/rootwrap.conf
- api_paste_config = /etc/cinder/api-paste.ini
- # RPC
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- rpc_backend = cinder.openstack.common.rpc.impl_kombu
- # DATABASE
- sql_connection = mysql://cinder:cinder@172.16.0.51/cinder
- # API
- osapi_volume_extension = cinder.api.contrib.standard_extensions
Modify the [filter:authtoken] section at the end of /etc/cinder/api-paste.ini:
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- service_protocol = http
- service_host = 172.16.0.51
- service_port = 5000
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = cinder
- admin_password = password
- signing_dir = /var/lib/cinder
- Create a volume group named cinder-volumes:
Create an ordinary partition. I used sdb here, making one primary partition spanning the whole disk:
- # fdisk /dev/sdb
- n
- p
- 1
- Enter
- Enter
- t
- 8e
- w
- # partx -a /dev/sdb
- # pvcreate /dev/sdb1
- # vgcreate cinder-volumes /dev/sdb1
- # vgs
- VG #PV #LV #SN Attr VSize VFree
- cinder-volumes 1 7 0 wz--n- 1.64t 75.50g
- Sync the database and restart the services:
- cinder-manage db sync
- /etc/init.d/cinder-api restart
- /etc/init.d/cinder-scheduler restart
- /etc/init.d/cinder-volume restart
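Before moving on you can exercise the whole volume path once; a sketch that creates and removes a 1 GB test volume (using the same admin credentials as above):
- cinder create --display-name testvol 1
- cinder list                  # status should go from creating to available
- cinder delete <volume-id>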
Finally, we need to run:
- /etc/init.d/iscsitarget stop
Nova
- Install nova-compute:
- apt-get install nova-compute
In /etc/nova/api-paste.ini, modify the authtoken section:
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
- # Workaround for https://bugs.launchpad.net/nova/+bug/1154809
- auth_version = v2.0
Modify /etc/nova/nova.conf so that it looks like the following:
- cat /etc/nova/nova.conf
- [DEFAULT]
- dhcpbridge_flagfile=/etc/nova/nova.conf
- dhcpbridge=/usr/bin/nova-dhcpbridge
- logdir=/var/log/nova
- state_path=/var/lib/nova
- lock_path=/var/lock/nova
- force_dhcp_release=True
- iscsi_helper=tgtadm
- #iscsi_helper = ietadm
- libvirt_use_virtio_for_bridges=True
- connection_type=libvirt
- root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
- verbose=True
- ec2_private_dns_show_ip=True
- #api_paste_config=/etc/nova/api-paste.ini
- volumes_path=/var/lib/nova/volumes
- enabled_apis=ec2,osapi_compute,metadata
- # SCHEDULER
- compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
- ## VOLUMES
- volume_api_class = nova.volume.cinder.API
- osapi_volume_listen_port=5900
- iscsi_ip_prefix=192.168.80
- iscsi_ip_address=192.168.80.22
- # DATABASE
- sql_connection = mysql://nova:nova@172.16.0.51/nova
- # COMPUTE
- libvirt_type = kvm
- compute_driver = libvirt.LibvirtDriver
- instance_name_template = instance-%08x
- api_paste_config = /etc/nova/api-paste.ini
- # COMPUTE/APIS: if you have separate configs for separate services
- # this flag is required for both nova-api and nova-compute
- allow_resize_to_same_host = True
- # APIS
- osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
- ec2_dmz_host = 172.16.0.51
- s3_host = 172.16.0.51
- metadata_host=172.16.0.51
- metadata_listen=0.0.0.0
- # RABBITMQ
- rabbit_host = 172.16.0.51
- rabbit_password = guest
- # GLANCE
- image_service = nova.image.glance.GlanceImageService
- glance_api_servers = 172.16.0.51:9292
- # NETWORK
- network_api_class = nova.network.quantumv2.api.API
- quantum_url = http://172.16.0.51:9696
- quantum_auth_strategy = keystone
- quantum_admin_tenant_name = service
- quantum_admin_username = quantum
- quantum_admin_password = password
- quantum_admin_auth_url = http://172.16.0.51:35357/v2.0
- service_quantum_metadata_proxy = True
- libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
- linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
- firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
- # NOVNC CONSOLE
- novncproxy_base_url = http://192.168.80.21:6080/vnc_auto.html
- # Change vncserver_proxyclient_address and vncserver_listen to match each compute host
- vncserver_proxyclient_address = 192.168.80.22
- vncserver_listen = 192.168.80.22
- # AUTHENTICATION
- auth_strategy = keystone
- [keystone_authtoken]
- auth_host = 172.16.0.51
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = password
- signing_dir = /tmp/keystone-signing-nova
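As the comment in the file says, vncserver_proxyclient_address and vncserver_listen must match each compute host; on Compute2 a quick substitution would be:
- sed -i 's/192.168.80.22/192.168.80.23/g' /etc/nova/nova.conf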
Start the nova-compute service:
- service nova-compute restart
Check the nova services' smiley faces again, and you will see the compute nodes have joined:
- # nova-manage service list
- Binary Host Zone Status State Updated_At
- nova-cert control internal enabled :-) 2013-05-07 07:09:56
- nova-conductor control internal enabled :-) 2013-05-07 07:09:55
- nova-consoleauth control internal enabled :-) 2013-05-07 07:09:56
- nova-scheduler control internal enabled :-) 2013-05-07 07:09:56
- nova-compute node-02 nova enabled :-) 2013-05-07 07:10:03
- nova-compute node-03 nova enabled :-) 2013-05-07 07:10:03
At this point the compute nodes have joined successfully, and you can try booting an instance to verify everything works. I will cover Horizon usage in detail in a later article; the network-creation part in particular is a bit involved and deserves some care.
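As a quick end-to-end check, here is a minimal sketch of creating a GRE tenant network and booting a test instance from the control node (the image name cirros is an assumption; use whatever you registered in Glance):
- quantum net-create testnet
- quantum subnet-create --name testsub testnet 192.168.1.0/24
- NET_ID=$(quantum net-list | awk '/testnet/ {print $2}')
- nova boot --flavor m1.tiny --image cirros --nic net-id=$NET_ID testvm
- nova list   # the instance should reach ACTIVE with a 192.168.1.x address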