How to Install Mirantis Fuel 5.1 OpenStack with Ceph
Author: @法不荣情 [Original post] http://weibo.com/p/2304189cacdb3d0102v55r
I have only recently started working with OpenStack and am still not familiar with everything. At first I used RDO to quickly deploy a single-node OpenStack, and later did a manual installation, typing commands straight from the install guide; some parts I could not even follow, which was very tedious, let alone deploying a multi-node OpenStack HA environment. Fortunately, Mirantis OpenStack provides the Fuel tool, which can deploy a complete OpenStack quickly. I had previously used Fuel 5.0 on VMware Workstation 10 to deploy an OpenStack HA environment; it worked well and the HA environment was up in short order. Now that version 5.1 is out, and after reading the related documentation, I am deploying an OpenStack HA environment on real physical hardware, using Ceph as unified storage and adding two storage nodes.
Thanks to teacher Luo Yong and others for their excellent documentation, and of course to Mirantis for its contribution. Below are my notes from the deployment process; if there are mistakes, corrections are welcome.
1. About Mirantis
Mirantis is a formidable OpenStack service integrator: among the top five community contributors it is the only company that lives entirely on software and services (the others being Red Hat, HP, IBM and Rackspace). Compared with the other community distributions, Fuel's release cadence is fast, shipping a relatively stable community version roughly every two months.
2. About Fuel
Fuel is a tool designed for end-to-end "one-click deployment" of OpenStack. Its features cover automated PXE-based OS installation, a DHCP service, an orchestration service and Puppet-based configuration management, plus very handy extras such as health checks for key OpenStack services and live log viewing.
Fuel 5.1 is based on the Icehouse release of OpenStack; the node operating systems are CentOS 6.5 and Ubuntu 12.04.4.
Fuel's advantages are as follows:
· Automatic node discovery and pre-deployment verification
· Simple, fast configuration
· Support for multiple operating systems and distributions, and for HA deployment
· An external API for managing and configuring the environment, e.g. dynamically adding compute/storage nodes
· A built-in health check tool
· Neutron support, e.g. GRE and namespaces are included, and each subnet can be mapped to a specific physical NIC
Fuel architecture
Image source: http://www.openstack.cn/p692.html
For deploying OpenStack with Fuel on virtual machines, see the following document; it is very well written and detailed:
http://www.openstack.cn/p692.html
3. Environment Topology
Since this is a test environment, NICs were limited: each server has only two NICs, so only two switches are used, both DELL PowerConnect 5448.
4. Switch Configuration
Create the required VLANs (here VLANs 101 and 102) and enable flow control (flowcontrol) on the switch ports. On all switches carrying the Private, Management and Storage networks, the ports in use must allow the required VLANs, i.e. they are configured in trunk mode with the VLANs permitted. The configuration is as follows (it may differ on other switch models):
switch > enable
switch # configure
switch (config) # vlan database
switch (config) # vlan 101-102
switch (config) # interface range ethernet all
switch (config) # switchport mode trunk
switch (config) # switchport trunk allowed vlan add all
If the switches are not configured this way, Fuel's network verification will fail, because VLAN tagging is in use.
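Before running Fuel's network verification, it can save time to confirm the trunk configuration on the switch itself. A minimal sketch for the PowerConnect CLI (show commands may vary slightly between firmware revisions; g1 is an example port):
switch # show vlan (the list should include VLANs 101 and 102)
switch # show interfaces switchport ethernet g1 (the port should be in trunk mode with both VLANs allowed)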
5. Install the Fuel Master
This is simply an OS installation plus a little configuration. At the installation welcome screen shown below, press "Tab" as prompted to edit the IP settings; you can also change showmenu=no to showmenu=yes and press Enter to reach a detailed configuration menu. Here the defaults are used, so just press Enter and the installation completes in one pass.
The screen after installation finishes looks like the figure below.
It shows the root login password as well as the Fuel web login URL, user name and password. The web login page looks like this:
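The master can also be managed over SSH with the fuel CLI that ships with Fuel 5.1. A quick sanity check, assuming the default admin network address 10.20.0.2 and the root password shown on the console (r00tme by default):
# ssh root@10.20.0.2 (password as shown on the console)
# fuel release (lists the OpenStack releases available for deployment)
# fuel env (lists environments; empty on a fresh install)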
6. Deployment Process
6.1 Create an OpenStack Environment
Log in with user name admin and password admin; you will see the screen below.
Click "New OpenStack environment" to start creating an environment, then click "Forward" to go to the next step.
Enter the environment name and choose the OpenStack release. This is effectively a choice of operating system, since the OpenStack release is fixed at Icehouse. Click "Forward" to continue.
Choose the deployment mode: there are two, "Multi-node with HA" and "Multi-node". HA requires at least three controller nodes; select "Multi-node with HA" and click "Forward".
Since this environment is deployed on physical machines, select KVM; on virtual machines select QEMU, and for a vCenter environment select vCenter. Click "Forward".
Select the GRE network mode and click "Forward".
For the storage backend select "Ceph". Note that this option requires two or more additional nodes as storage nodes. Click "Forward".
Additional services: none are enabled here. Click "Forward".
Click "Create" to finish creating the OpenStack environment.
6.2 Discover Nodes
This test environment uses two NICs per server, although three are preferable, and the NICs must support PXE. Enable the server's virtualization feature in the BIOS and set it to boot from the PXE network.
After PXE boot the node loads the bootstrap image by default; once the "bootstrap login" prompt appears on its console, Fuel web can discover the node.
When Fuel web discovers a node, it shows a notification like the following:
Once nodes are discovered, the next step is to add them: open the newly created OpenStack environment, click "Add nodes" in the top-right corner, tick the "Controller" role and then select the servers for that role. It is best to record the MAC addresses of the servers' NICs beforehand, because at this point there is no other way to tell which server is which. Alternatively, first PXE-boot only the servers intended as controllers (at least three), add them as controller nodes, and only then select the compute and storage node servers. The CLI sketch at the end of this section can also help match nodes to their MAC addresses.
After the nodes are added they look like the figure below, except that their status is "Pending Addition"; the figure shows an already deployed environment.
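If it is hard to tell which discovered server is which, the fuel CLI on the master prints each node's MAC address, which can be matched against the addresses recorded earlier. A sketch; node and environment IDs are examples, and the exact flags can differ between Fuel versions:
# fuel node (lists discovered nodes with id, status, MAC and roles)
# fuel node set --node 1,2,3 --role controller --env 1 (assigns three nodes the controller role in environment 1)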
6.3 Deployment and Configuration
Tick a server to configure its disks and network.
Disk configuration, using the defaults here:
Network configuration, changed as follows:
Next comes the environment-wide network configuration: click "Networks" and configure it as shown.
Finally, verify the network. If the switches were not configured correctly earlier, an error is reported here, and forcing the deployment anyway is likely to fail.
Click "Settings" to configure OpenStack and the storage options; leave everything else at the defaults.
Storage uses Ceph.
Once everything is set, click "Deploy changes" to start the deployment.
When the deployment completes, the web login information is displayed, as shown below.
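Once the deployment finishes, a few commands on a controller node confirm that the services and the Ceph cluster are healthy. A minimal sketch, run on any controller:
# source /root/openrc (loads the OpenStack admin credentials)
# nova service-list (all services should be enabled and up)
# ceph -s (the cluster should report HEALTH_OK)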
References
1. http://community.mellanox.com/docs/DOC-1474
Related references:
- MLNX-OS User Manual - (located at support.mellanox.com )
- Planning Guide — Mirantis OpenStack v5.1 | Documentation
- Reference Architectures — Mirantis OpenStack v5.1 | Documentation
- HowTo Configure 56GbE Link on Mellanox Adapters and Switches
- HowTo upgrade MLNX-OS Software on Mellanox switches
- Mirantis Fuel ISO Download page
- HowTo Configure iSER Block Storage for OpenStack Cloud with Mellanox ConnectX-3 Adapters
- Mellanox CloudX, Mirantis Fuel 5.1 Solution Guide
- Movie - HowTo Install Mirantis Fuel 5.1 OpenStack with Mellanox Adapters
It is also recommended to watch the movie HowTo Install Mirantis Fuel 5.1 OpenStack with Mellanox Adapters: https://www.youtube.com/embed/5Ga28Rp7K_I
Note: Server’s IPMI and the switches management interfaces wiring and configuration are out of scope.
You need to ensure that there is management access (SSH) to Mellanox Ethernet switch SX1036 to perform the configuration.
Setup BOM:
Component | Quantity | Description
---|---|---
Fuel Master server | 1 | DELL PowerEdge R620
Cloud Controllers and Compute servers | 6 | DELL PowerEdge R620
Cloud Storage server | 1 | Supermicro X9DR3-F
Admin (PXE) and Public switch | 1 | 1Gb switch with VLANs configured to support both networks
Cloud Ethernet Switch | 1 | Mellanox SX1036 40/56Gb 36 port Ethernet
Cables | | 16 x 1Gb CAT-6e for Admin (PXE) and Public networks; 7 x 56GbE copper cables up to 2m (MC2207130-XXX)
Note: You can use Mellanox ConnectX-3 PRO EN (MCX313A-BCCT) or Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) adapter cards.
- 2 SSD drives in bays 0-1 configured in RAID-1 (Mirror): The OS will be installed on it.
- 22 SSD drives in bays 3-24 configured in RAID-10: The Cinder volume will be configured on the RAID drive.
Network Physical Setup:
- Connect all nodes to the Admin (PXE) 1GbE switch (preferably through the eth0 interface on board).
It is recommended to write down the MAC address of the Controller and Storage servers to make Cloud installation easier (see the Controller Node section below in the Nodes tab).
- Connect all nodes to the Public 1GbE switch (preferably through the eth1 interface on board).
- Connect port #1 (eth2) of ConnectX-3 Pro to SX1036 Ethernet switch (Private, Management, Storage networks).
Note: Port bonding is not supported when using SR-IOV over the ConnectX-3 adapter family.
Note: Refer to the MLNX-OS User Manual to get familiar with the switch software (located at support.mellanox.com).
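After cabling, it is worth confirming from a node that the ConnectX-3 port actually negotiated 40/56GbE (see also known issue #3 at the end of this document). A sketch, assuming the port is eth2 as wired above and the MLNX_OFED tools are installed:
# ethtool eth2 | grep -i speed (prints the negotiated link speed)
# ibdev2netdev (MLNX_OFED utility mapping adapter ports to net interfaces)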
Network | Subnet/Mask | Gateway | Notes
---|---|---|---
Admin (PXE) | 10.20.0.0/24 | N/A | The network is used to provision and manage Cloud nodes via the Fuel Master. The network is enclosed within a 1Gb switch and has no routing outside. This is the default Fuel network.
Management | 192.168.0.0/24 | N/A | This is the Cloud Management network. The network uses VLAN 2 in SX1036 over 40/56Gb interconnect. This is the default Fuel network.
Storage | 192.168.1.0/24 | N/A | This network is used to provide storage services. The network uses VLAN 3 in SX1036 over 40/56Gb interconnect. This is the default Fuel network.
Public and Neutron L3 | 10.7.208.0/24 | 10.7.208.1 | Public network is used to connect Cloud nodes to an external network. Neutron L3 is used to provide Floating IP for tenant VMs. Both networks are represented by IP ranges within the same subnet with routing to external networks. All Cloud nodes which have a Public IP, plus HA functionality, require an additional Virtual IP. For our example with 7 Cloud nodes we need 8 IPs in the Public network range. Consider a larger range if you are planning to add more servers to the cloud later. In our build we use the IP range 10.7.208.53 >> 10.7.208.76 for both Public and Neutron L3.
- Boot the Fuel Master server from the ISO as a virtual CD (see the Mirantis Fuel ISO Download page in the references above).
- Press the <TAB> key on the very first installation screen which says "Welcome to Fuel Installer" and update the kernel option from showmenu=no to showmenu=yes and hit Enter. It will now install Fuel and reboot the server.
- After the reboot, boot from the local disk. The Fuel menu window will start.
- Network setup:
- Configure eth0 - PXE (Admin) network interface.
Ensure the default Gateway entry is empty for the interface – the network is enclosed within the switch and has no routing outside.
Select Apply.
- Configure eth1 – Public network interface.
The interface is routable to LAN/internet and will be used to access the server.
Configure static IP address, netmask and default gateway on the public network interface.
Select Apply.
- PXE Setup
The PXE network is enclosed within the switch.
Do not make changes – proceed with defaults.
Press Check button to ensure no errors are found.
- Time Sync
Check NTP availability (e.g. ntp.org) via Time Sync tab on the left.
Configure NTP server entries suitable for your infrastructure.
Press Check to verify settings (a manual NTP check is sketched after this step).
- Navigate to Quit Setup and select Save and Quit to proceed with the installation.
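If the Check button reports NTP problems, reachability can be verified manually from the Fuel master console. A minimal sketch using a public pool server as an example:
# ntpdate -q pool.ntp.org (query only; prints the offset without setting the clock)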
- Once the Fuel installation is done, you are provided with Fuel access details both for SSH and HTTP.
Access the Fuel Web UI at http://10.7.208.53:8000, using "admin" for both login and password.
- Open the Fuel UI in a web browser (for example: http://10.7.208.53:8000)
- Log into Fuel using "admin" for both login and password.
- Open a new environment in the Fuel dashboard. A configuration wizard will start.
- Configure the new environment wizard as follows:
- Name and Release
- Name: TEST
- Release: Icehouse on CentOS 6.5 (2014.1.1-5.1)
- Deployment Mode
- Multi-node with HA
- Compute
- KVM
- Network
- Neutron with VLAN segmentation
- Storage Backend
- Cinder: Default
- Glance: Default
- Additional Services
- None
- Finish
- Click Create button
- Name and Release
- When done, a new TEST environment will be created. Click on it and proceed with environment configuration.
- To use high performance block storage, check ISER protocol for volumes (Cinder) in the Storage section.
Note: "Cinder LVM over iSCSI for volumes" should remain checked (default).
- Save the settings.
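To confirm after deployment that iSER is actually available on a node, one assumption-level check (not part of the official guide) is to look for the initiator kernel module:
# lsmod | grep ib_iser (the iSER initiator module should be loaded)
# modprobe ib_iser (loads it manually if absent)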
- Click Add Node.
- Identify the 3 controller nodes. Use the last 4 hex digits of the MAC address of the interface connected to the Admin (PXE) network.
Assign each node the Controller role.
- Click Apply Changes.
- Click Add Node.
- Identify your storage node. Use the last 4 hex digits of the MAC address of the interface connected to the Admin (PXE) network.
In our example this is the only Supermicro server, so identification is easy.
Select this node to be a Storage - Cinder LVM node.
- Click Apply Changes.
- Click Add Node.
- Select all the nodes that are left and assign them the Compute role.
- Click Apply Changes.
You can choose and configure multiple nodes in parallel.
Fuel will not let you proceed with bulk configuration if hardware differences between the selected nodes (such as the number of network ports) are detected.
In this case the Configure Interfaces button will have an error icon (see below).
The example below allows configuring 6 nodes in parallel. The 7th node (Supermicro storage node) will be configured separately.
- In this example, we set the Admin (PXE) network to eth0 and the Public network to eth1.
- The Storage, Private and Management networks should run on the ConnectX-3 adapter's 40/56GbE port.
- Click Back To Node List and perform network configuration for Storage Node
- Select Storage node
- Press Configure Disks button
- Click on sda disk bar, set Cinder allowed space to 0 MB and make Base System occupy the entire drive – press USE ALL ALLOWED SPACE.
- Press Apply.
The base MAC is left untouched.
Save Configuration
You should see the following message: Verification succeeded. Your network is configured correctly. Otherwise, check the log file for troubleshooting.
- Click the Health Test tab.
- Check the Select All checkbox.
- Uncheck Platform services functional tests (image with special packages is required).
- Click Run Tests.
- Fuel server Dashboard user / password: admin / admin
- Fuel server SSH user / password: root / r00tme
- TestVM SSH user / password: cirros / cubswin:)
- To get controller node CLI permissions run: # source /root/openrc
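With the credentials sourced, the cloud can be driven from the controller CLI. A few Icehouse-era commands as examples:
# nova list (running instances)
# glance image-list (the TestVM cirros image should be present)
# neutron net-list (tenant and external networks)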
Prepare Linux VM Image for CloudX:
In order to have network and RoCE support on the VM, MLNX_OFED (2.2-1 or later) should be installed on the VM environment.
MLNX_OFED may be downloaded from http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
(In case of CentOS/RHEL OS, you can use virt-manager to open existing VM image and perform MLNX_OFED installation).
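Inside the guest (or via virt-manager as suggested above), MLNX_OFED installs with its bundled script. A sketch; the tarball name below is an example and depends on the version and distribution actually downloaded:
# tar xzf MLNX_OFED_LINUX-2.2-1.0.1-rhel6.5-x86_64.tgz (example archive name; adjust to your download)
# cd MLNX_OFED_LINUX-2.2-1.0.1-rhel6.5-x86_64
# ./mlnxofedinstall (bundled installer; resolves and installs the driver packages)
# reboot (reloads the drivers with the new stack)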
Issue # | Description | Workaround | Bug in Launchpad
---|---|---|---
1 | The default number of supported virtual functions (VFs), 16, is not sufficient. | To have more vNICs available, contact Mellanox Support. |
2 | Hypervisor crash on instance (VM) termination | Please contact Mellanox Support. |
3 | 56Gb links are discovered by Fuel as 10Gb | No action is required. Actual port speed is 56Gb. After deployment, ports are re-discovered as 56Gb. |