OpenStack Octavia: introduction and manual installation

Octavia installation and configuration

OpenStack Octavia is a backend implementation for OpenStack LBaaS that provides load balancing for virtual machine traffic. In essence it works much like Trove: it calls the Nova and Neutron APIs to boot a virtual machine with HAProxy and Keepalived installed and attaches it to the target network.

Octavia has four controller-side components (api, worker, housekeeping, health-manager) plus the octavia agent that runs inside the amphora VM. The api component needs no further explanation. worker: talks to Nova, Neutron and the other services, schedules the VMs, and passes VM-level instructions down to the octavia agent. housekeeping: looking at octavia/controller/housekeeping/house_keeping.py, it has three jobs, SpareAmphora, DatabaseCleanup and CertRotation, i.e. managing the spare amphora pool, purging expired database records, and rotating certificates. health-manager: checks VM status and talks to the octavia agent inside the VM to update the state of each component. The octavia agent lives inside the VM: downward it executes instructions against the underlying HAProxy, upward it reports status back to the health-manager. See also http://lingxiankong.github.io/blog/2016/03/30/octavia/?utm_source=tuicool&utm_medium=referral which covers this in more detail than I do here.




There is currently no official installation guide, and a quick search suggests nobody has written up concrete installation steps either; the usual recommendation is to install via DevStack. Based on the DevStack installation scripts, I have put together the steps below for installing Octavia by hand. They have been verified to work; corrections are welcome.

I. Installation

1. Create the database

mysql> CREATE DATABASE octavia;
mysql> GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'localhost' IDENTIFIED BY 'OCTAVIA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'%' IDENTIFIED BY 'OCTAVIA_DBPASS';
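To confirm the grants work before moving on, a quick sanity check (assuming the MySQL server runs locally on the controller) is:

mysql -u octavia -pOCTAVIA_DBPASS -h localhost -e "SHOW DATABASES;" | grep octavia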

2. Create the user, role and endpoints

openstack user create --domain default --password-prompt octavia
openstack role add --project service --user octavia admin
openstack endpoint create octavia public http://10.1.65.58:9876/ --region RegionOne
openstack endpoint create octavia admin http://10.1.65.58:9876/ --region RegionOne
openstack endpoint create octavia internal http://10.1.65.58:9876/ --region RegionOne
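Endpoint creation assumes an octavia service entry already exists in Keystone. If it does not, something along these lines creates it and then lists the result (the service type string here is an assumption; it has varied across releases between "octavia" and "load-balancer"):

openstack service create --name octavia --description "Octavia Load Balancing Service" octavia
openstack endpoint list --service octavia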

3. Install the packages

yum install openstack-octavia-api openstack-octavia-worker openstack-octavia-health-manager openstack-octavia-housekeeping python-octavia

4. Import the image (exported from a system built with DevStack)

openstack image create amphora-x64-haproxy --public --container-format=bare --disk-format=qcow2
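The command above does not show the image file or the tag that step 6.6 later relies on; a fuller invocation might look like this (the local file name is an assumption, and the amphora tag matches the amp_image_tag setting used below):

openstack image create amphora-x64-haproxy --public --container-format=bare --disk-format=qcow2 --file amphora-x64-haproxy.qcow2
openstack image set --tag amphora amphora-x64-haproxy
openstack image list | grep amphora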

5. Create the management network and an OVS port on the host, so that octavia-worker, octavia-housekeeping and octavia-health-manager can talk to the amphora instances

 5.1 Create the management network and subnet

openstack network create lb-mgmt-net
openstack subnet create --subnet-range 192.168.0.0/24 --allocation-pool start=192.168.0.2,end=192.168.0.200 --network lb-mgmt-net lb-mgmt-subnet
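The IDs of this network and subnet are needed later (amp_boot_network_list in step 6.6 and the health-manager port in 5.3); they can be captured with:

openstack network show lb-mgmt-net -f value -c id
openstack subnet show lb-mgmt-subnet -f value -c id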


 5.2 Create the security group rules for the management ports

Port 5555 is used on the management network. Since Octavia is still immature, port 22 is opened as well, and the image itself also has port 22 enabled. A small complaint about Trove here: it is an equally immature module, yet it does not open port 22 by default and you have to modify the source code to do so.

openstack security group create lb-mgmt-sec-grp
openstack security group create lb-health-mgr-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp


 5.3 Create a port on the management network for the octavia health_manager on the host to attach to

neutron port-create --name octavia-health-manager-standalone-listen-port --security-group lb-health-mgr-sec-grp --device-owner Octavia:health-mgr --binding:host_id=controller lb-mgmt-net
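The ID and MAC address of this port are what the ovs-vsctl command in 5.4 refers to; they can be read back with:

neutron port-show octavia-health-manager-standalone-listen-port | grep -E ' id | mac_address '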

 5.4 Create the OVS port on the host and connect it to the network created in 5.1

ovs-vsctl --may-exist add-port br-int o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=fa:16:3e:6f:9f:9a -- set Interface o-hm0 external-ids:iface-id=457e4953-b2d6-49ee-908b-2991506602b2

Here iface-id and attached-mac are attributes of the port created in 5.3.

ip link set dev o-hm0 address fa:16:3e:6f:9f:9a

 5.5 Run DHCP on the host for o-hm0 (why not use the traditional dnsmasq?)

dhclient -v o-hm0 -cf /etc/octavia/dhcp/dhclient.conf
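The dhclient.conf referenced here only needs to stop the client from rewriting DNS and routing settings on the host; a minimal example, modelled on what the DevStack Octavia plugin generates (the exact contents are an assumption), is:

# /etc/octavia/dhcp/dhclient.conf
request subnet-mask,broadcast-address,interface-mtu;
do-forward-updates false;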

6. Configuration changes (in /etc/octavia/octavia.conf), much the same as for other OpenStack components

  6.1 Database settings

[database]
connection = mysql+pymysql://octavia:OCTAVIA_DBPASS@controller/octavia

  6.2 Message queue settings

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

6.3 Keystone authentication settings

[keystone_authtoken]
auth_version = 2
admin_password = OCTAVIA_PASS
admin_tenant_name = octavia
admin_user = octavia
auth_uri = http://controller:5000/v2.0

6.4 Set the health_manager listen address; this IP is the address of the port created in 5.3

[health_manager]
bind_port = 5555
bind_ip = 192.168.0.7
controller_ip_port_list = 192.168.0.7:5555
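bind_ip must match the address that dhclient assigned to o-hm0 in step 5.5; it can be confirmed with:

ip addr show o-hm0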

6.5 Keys and certificates used to communicate with the amphora VMs

[haproxy_amphora]
server_ca = /etc/octavia/certs/ca_01.pem
client_cert = /etc/octavia/certs/client.pem
key_path = /etc/octavia/.ssh/octavia_ssh_key
base_path = /var/lib/octavia
base_cert_dir = /var/lib/octavia/certs
connection_max_retries = 1500
connection_retry_interval = 1
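The post does not show how these files were generated. A rough sketch with openssl and ssh-keygen, using file names that match the configuration above (all other values, and whether client.pem must contain the key and certificate concatenated, are assumptions), is:

mkdir -p /etc/octavia/certs /etc/octavia/.ssh && cd /etc/octavia/certs
# self-signed CA used to sign the certificates on both sides
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca_01.key -out ca_01.pem -days 365 -subj "/CN=octavia-ca"
# client certificate presented by the controller to the amphora agent
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr -subj "/CN=octavia-client"
openssl x509 -req -in client.csr -CA ca_01.pem -CAkey ca_01.key -CAcreateserial -out client.crt -days 365
cat client.key client.crt > client.pem
# SSH key referenced by key_path
ssh-keygen -t rsa -N "" -f /etc/octavia/.ssh/octavia_ssh_key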

6.6 Settings used to boot the amphora instances

[controller_worker]
amp_boot_network_list = 826be4f4-a23d-4c5c-bff5-7739936fac76  # ID of the network created in step 5.1
amp_image_tag = amphora  # the tag/metadata defined in step 4
amp_secgroup_list = d949202b-ba09-4003-962f-746ae75809f7  # ID of the security group created in step 5.2
amp_flavor_id = dd49b3d5-4693-4407-a76e-2ca95e00a9ec
amp_image_id = b23dda5f-210f-40e6-9c2c-c40e9daa661a  # ID of the image imported in step 4
amp_ssh_key_name = 155  # name of an existing nova keypair
amp_active_wait_sec = 1
amp_active_retries = 100
network_driver = allowed_address_pairs_driver
compute_driver = compute_nova_driver
amphora_driver = amphora_haproxy_rest_driver
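amp_flavor_id and amp_ssh_key_name must point at an existing flavor and nova keypair. If they do not exist yet, they can be created roughly as follows (the flavor sizing and the key file path are assumptions), and the resulting ID and name written into the options above:

openstack flavor create --vcpus 1 --ram 1024 --disk 2 m1.amphora
openstack keypair create --public-key /etc/octavia/.ssh/octavia_ssh_key.pub 155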

7. Modify the Neutron configuration

  7.1 Edit /etc/neutron/neutron.conf and add the LBaaS service plugin

 

service_plugins = [existing service plugins],neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

7.2 In the [service_providers] section, set Octavia as the LBaaS v2 service provider

 

  service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
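After neutron-server is restarted in step 8, the provider registration can be verified with:

neutron service-provider-list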

8. Start the services

If the agent-based LBaaS v2 service was enabled before, stop it first and clean out the lbaas_loadbalancers and lbaas_loadbalancer_statistics tables in the Neutron database, otherwise errors will occur.
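A rough way to do that cleanup (destructive; these exact statements are an assumption, adapt them to your environment and back up first) is:

mysql> USE neutron;
mysql> DELETE FROM lbaas_loadbalancer_statistics;
mysql> DELETE FROM lbaas_loadbalancers;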

Sync the database

 

octavia-db-manage upgrade head

Restart Neutron

systemctl restart neutron-server

Start Octavia

systemctl restart octavia-housekeeping octavia-worker octavia-api octavia-health-manager

II. Verification

9.1 Create a loadbalancer

[root@controller ~] # neutron lbaas-loadbalancer-create --name test-lb-1 lbtest
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | 5af472bb-2068-4b96-bcb3-bef7ff7abc56 |
| listeners           |                                      |
| name                | test-lb-1                            |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| provider            | octavia                              |
| provisioning_status | PENDING_CREATE                       |
| tenant_id           | 9a4b2de78c2d45cfbf6880dd34877f7b     |
| vip_address         | 192.168.123.10                       |
| vip_port_id         | d163b73c-258a-4e03-90ad-5db31cfe23ac |
| vip_subnet_id       | 74aea53a-014a-4f9c-86f9-805a2a772a27 |
+---------------------+--------------------------------------+

 9.2 Check the VM. Note that the loadbalancer address is a VIP, which is not the same as the VM's own address

[root@controller ~] # openstack server list |grep 82b59e85-29f2-46ce-ae0b-045b7fceb5ca
| 82b59e85-29f2-46ce-ae0b-045b7fceb5ca | amphora-734da57c-e444-4b8e-a706-455230ae0803 | ACTIVE  | lbtest=192.168.123.9; lb-mgmt-net=192.168.0.6        | amphora-x64-haproxy 201610131607    |

 9.3 Create a listener

neutron lbaas-listener-create --name test-lb-tcp --loadbalancer test-lb-1 --protocol TCP --protocol-port 22

 9.4 Set the security group on the VIP port

neutron port-update --security-group default d163b73c-258a-4e03-90ad-5db31cfe23ac

 9.5 Create a pool, boot three VMs, and add them to the pool as members

openstack server create --flavor m1.small --nic net-id=22525640-297e-40eb-bd77-0a9afd861f8c --image "cirros for kvm raw" --min 3 --max 3 test
 
[root@controller ~] # openstack server list |grep test-
| d8dc22d4-e657-4c54-96f9-3a53ca67533d | test-3                                       | ACTIVE  | lbtest=192.168.123.8                                 | cirros for kvm raw                  |
| c7926665-84c5-48a5-9de5-5e15e71baa5d | test-2                                       | ACTIVE  | lbtest=192.168.123.13                                | cirros for kvm raw                  |
| fcf60c23-b799-4d08-a5a7-2b0fc9f1905e | test-1                                       | ACTIVE  | lbtest=192.168.123.11                                | cirros for kvm raw                  |
 
neutron lbaas-pool-create --name test-lb-pool-tcp --lb-algorithm ROUND_ROBIN --listener test-lb-tcp --protocol TCP
  
for i in {8,13,11}
do
neutron lbaas-member-create --subnet lbtest --address 192.168.123.${i} --protocol-port 22 test-lb-pool-tcp
done
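With the members added, the pool membership and the load balancer status can be checked with:

neutron lbaas-member-list test-lb-pool-tcp
neutron lbaas-loadbalancer-show test-lb-1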

 9.6 Verify

[root@controller ~]# >/root/.ssh/known_hosts; ip netns exec qrouter-4718cc34-68cc-47a7-9201-405d1fc09213 ssh cirros@192.168.123.10 "hostname"
The authenticity of host '192.168.123.10 (192.168.123.10)' can't be established.
RSA key fingerprint is 72:c4:11:41:53:51:f2:1b:b5:e6:1b:69:a8:c2:5b:d4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.10' (RSA) to the list of known hosts.
cirros@192.168.123.10's password:
test-3
[root@controller ~]# >/root/.ssh/known_hosts; ip netns exec qrouter-4718cc34-68cc-47a7-9201-405d1fc09213 ssh cirros@192.168.123.10 "hostname"
The authenticity of host '192.168.123.10 (192.168.123.10)' can't be established.
RSA key fingerprint is 3d:88:0f:4a:b1:77:c9:6a:fd:82:4d:31:0c:ca:82:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.10' (RSA) to the list of known hosts.
cirros@192.168.123.10's password:
test-1
[root@controller ~]# >/root/.ssh/known_hosts; ip netns exec qrouter-4718cc34-68cc-47a7-9201-405d1fc09213 ssh cirros@192.168.123.10 "hostname"
The authenticity of host '192.168.123.10 (192.168.123.10)' can't be established.
RSA key fingerprint is 1c:03:f0:f9:92:a7:0f:5d:9d:09:22:14:94:62:e4:c4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.10' (RSA) to the list of known hosts.
cirros@192.168.123.10's password:
test-2

III. Process Analysis

10.1 What the worker does

Create the amphora instance and attach it to the management network:

REQ: curl -g -i -X POST http://controller:8774/v2.1/9a4b2de78c2d45cfbf6880dd34877f7b/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token: {SHA1}0f810ab0fdd5b92489f73a7f0988adfc9da4e517" -d '{"server": {"name": "amphora-4f22d55b-0680-4111-aef6-da98c9ccd1d4", "imageRef": "b23dda5f-210f-40e6-9c2c-c40e9daa661a", "key_name": "155", "flavorRef": "dd49b3d5-4693-4407-a76e-2ca95e00a9ec", "max_count": 1, "min_count": 1, "personality": [{"path": "/etc/octavia/amphora-agent.conf", "contents": ""}, {"path": "/etc/octavia/certs/client_ca.pem", "contents": "="}, {"path": "/etc/octavia/certs/server.pem", "contents": ""}], "networks": [{"uuid": "826be4f4-a23d-4c5c-bff5-7739936fac76"}], "security_groups": [{"name": "d949202b-ba09-4003-962f-746ae75809f7"}], "config_drive": true}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:337

Once the instance's management-network port is detected as ACTIVE, the worker moves on to the next step:

REQ: curl -g -i -X GET http://controller:8774/v2.1/9a4b2de78c2d45cfbf6880dd34877f7b/servers/d3c97360-56b2-4f75-b905-2ef83870a342/os-interface -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token: {SHA1}3f6ccac4cb8b70b06fb5e62b9db2272702d8ec67" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:337
2016-10-17 12:06:30.041 29993 DEBUG novaclient.v2.client [-] RESP: [200] Content-Length: 286 Content-Type: application/json Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-ccc07b37-e942-4a5b-a87a-b0e8d3887ba3 Date: Mon, 17 Oct 2016 04:06:30 GMT Connection: keep-alive
RESP BODY: {"interfaceAttachments": [{"port_state": "ACTIVE", "fixed_ips": [{"subnet_id": "4e3409e5-4e9a-4599-9b2e-f760b2fab380", "ip_address": "192.168.0.11"}], "port_id": "bbf99a69-0fb2-42a6-b7de-b7969bda9d73", "net_id": "826be4f4-a23d-4c5c-bff5-7739936fac76", "mac_addr": "fa:16:3e:01:04:2c"}]}
2016-10-17 12:06:30.078 29993 DEBUG octavia.controller.worker.tasks.amphora_driver_tasks [-] Finalized the amphora. execute /usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py:164

Allocate the VIP port that will serve external traffic:

2016-10-17 12:06:30.226 29993 DEBUG octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' (af8ea5a0-42c8-4d30-9ffa-016668811fc8) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:189
2016-10-17 12:06:30.227 29993 DEBUG octavia.controller.worker.tasks.network_tasks [-] Allocate_vip port_id c7d7b552-83ac-4e0c-84bf-0b9cae661eab, subnet_id 74aea53a-014a-4f9c-86f9-805a2a772a27, ip_address 192.168.123.31 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/network_tasks.py:328

Under that VIP, create an actual port and attach it to the instance:

2016-10-17 12:06:32.662 29993 DEBUG octavia.network.drivers.neutron.allowed_address_pairs [-] Created vip port: 1627d28d-bf54-46eb-9d78-410c5d647bf4 for amphora: 3f6e22a1-e0b0-4098-ba20-daf47cfdae19 _plug_amphora_vip /usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py:97
2016-10-17 12:06:32.663 29993 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X POST http://controller:8774/v2.1/9a4b2de78c2d45cfbf6880dd34877f7b/servers/d3c97360-56b2-4f75-b905-2ef83870a342/os-interface -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token: {SHA1}3f6ccac4cb8b70b06fb5e62b9db2272702d8ec67" -d '{"interfaceAttachment": {"port_id": "1627d28d-bf54-46eb-9d78-410c5d647bf4"}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:337

Create the listener:

2016-10-17 19:01:09.384 29993 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url https://192.168.0.9:9443/0.5/listeners/c3a1867c-b2e5-49a7-819b-7a7d39063dda/reload request /usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:248
2016-10-17 19:01:09.412 29993 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connected to amphora. Response: <Response [202]> request /usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:270
2016-10-17 19:01:09.414 29993 DEBUG octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.ListenersUpdate' (0f588287-a383-4c70-9847-20187dd19f9f) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:178

10.2 octavia agent analysis

It listens on port 9443 so that the worker and health-manager can reach it:

2016-10-17 12:10:41.344 1043 INFO werkzeug [-]  * Running on https://0.0.0.0:9443/ (Press CTRL+C to quit)

The octavia agent seems to have a bug: it does not print debug messages.

11. High-availability test

Changing loadbalancer_topology = SINGLE to ACTIVE_STANDBY in /etc/octavia/octavia.conf enables high-availability mode; dual-active is not supported yet.
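For reference, the option sits in the configuration like this (placing it under [controller_worker] is my assumption based on the Octavia configuration layout):

[controller_worker]
loadbalancer_topology = ACTIVE_STANDBY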

After the loadbalancer is created, two VMs can be seen:

[root@controller octavia]# neutron lbaas-loadbalancer-create --name test-lb1238 lbtest2 

Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | 4e43f3c7-c0f6-44c7-8dab-e2fc8ed16e0f |
| listeners           |                                      |
| name                | test-lb1238                          |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| provider            | octavia                              |
| provisioning_status | PENDING_CREATE                       |
| tenant_id           | 9a4b2de78c2d45cfbf6880dd34877f7b     |
| vip_address         | 192.168.235.14                       |
| vip_port_id         | 42f72c9f-4623-4bf5-ae82-29f8cf588d2d |
| vip_subnet_id       | 52e93565-eab2-4316-a04c-3e554992c993 |
+---------------------+--------------------------------------+
[root@controller ~]# openstack server list | grep 192.168.235
| 736b8b76-2918-49a7-8477-995a168709bd | amphora-5379f109-01fa-429c-860b-c739e0c5ad5e | ACTIVE  | lb-mgmt-net=192.168.0.8; lbtest2=192.168.235.10  | amphora-x64-haproxy 201610131607    |
| bd867667-b8d2-49c5-bb1e-54f0753d33a3 | amphora-23540889-b07e-4c0e-ab9b-df0075fbb9c3 | ACTIVE  | lb-mgmt-net=192.168.0.25; lbtest2=192.168.235.19 | amphora-x64-haproxy 201610131607

Three IPs are visible: the VIP is 192.168.235.14, and the two VMs' own addresses are 192.168.235.10 and 192.168.235.19.

Log in to the VMs to verify. Note that the address used to log in is the management-network IP:

[root@controller ~]# ssh 192.168.0.8 "ps -ef |grep keepalived; cat /var/lib/octavia/vrrp/octavia-keepalived.conf"
root      1868     1  0 04:40 ?        00:00:00 /usr/sbin/keepalived -D -d -f /var/lib/octavia/vrrp/octavia-keepalived.conf
root      1869  1868  0 04:40 ?        00:00:00 /usr/sbin/keepalived -D -d -f /var/lib/octavia/vrrp/octavia-keepalived.conf
root      1870  1868  0 04:40 ?        00:00:00 /usr/sbin/keepalived -D -d -f /var/lib/octavia/vrrp/octavia-keepalived.conf
root      5448  5377  0 05:00 ?        00:00:00 bash -c ps -ef |grep keepalived; cat /var/lib/octavia/vrrp/octavia-keepalived.conf
root      5450  5448  0 05:00 ?        00:00:00 grep keepalived
vrrp_script check_script {
   script /var/lib/octavia/vrrp/check_script.sh
   interval 5
   fall 2
   rise 2
}
vrrp_instance 4e43f3c7c0f644c78dabe2fc8ed16e0f {
  state MASTER
  interface eth1
  virtual_router_id 1
  priority 100
  nopreempt
  garp_master_refresh 5
  garp_master_refresh_repeat 2
  advert_int 1
  authentication {
   auth_type PASS
   auth_pass ee46125
  }
  unicast_src_ip 192.168.235.10
  unicast_peer {
        192.168.235.19
  }
  virtual_ipaddress {
   192.168.235.14
  }
  track_script {
     check_script
  }
}
[root@controller ~] # ssh 192.168.0.8 "ps -ef |grep haproxy; cat  /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/haproxy.cfg"
nobody    2195     1  0 04:43 ?        00:00:00  /usr/sbin/haproxy  -f  /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/haproxy .cfg -L jrwLnRhlvXcPd21JhvXEMStRHh0 -p  /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/836053f0-ea72-46ae-9fae-8b80153ef593 .pid -sf 2154
root      6745  6676  0 05:06 ?        00:00:00  bash  -c  ps  -ef | grep  haproxy;  cat   /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/haproxy .cfg
root      6747  6745  0 05:06 ?        00:00:00  grep  haproxy
# Configuration for test-lb1238
global
     daemon
     user nobody
     group nogroup
     log  /dev/log  local0
     log  /dev/log  local1 notice
     stats socket /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593.sock mode 0666 level user
 
defaults
     log global
     retries 3
     option redispatch
     timeout connect 5000
     timeout client 50000
     timeout server 50000
 
peers 836053f0ea7246ae9fae8b80153ef593_peers
     peer 3OduZJiPzm475Q7IgyshE5oq1Jk 192.168.235.19:1025
     peer jrwLnRhlvXcPd21JhvXEMStRHh0 192.168.235.10:1025
 
 
frontend 836053f0-ea72-46ae-9fae-8b80153ef593
     option tcplog
     bind 192.168.235.14:22
     mode tcp
     default_backend 457d4de5-3213-4969-8f20-1f2d3505ff1e
 
backend 457d4de5-3213-4969-8f20-1f2d3505ff1e
     mode tcp
     balance leastconn
     timeout check 5
     server fa28676f-a762-4a8e-91ab-7a83f071b62b 192.168.235.20:22 weight 1 check inter 5s fall 3 rise 3
     server 1ded44da-cba5-434c-8578-95153656c392 192.168.235.24:22 weight 1 check inter 5s fall 3 rise 3

The other VM shows similar results.

Conclusion: Octavia's high availability is implemented with HAProxy plus Keepalived.

IV. Miscellaneous

1. There is an option in services_lbaas.conf:

[octavia]

request_poll_timeout = 200

This option defines how long Neutron waits after a loadbalancer is created: if Octavia has not reported ACTIVE within this time, Neutron marks the loadbalancer as ERROR. The default is 100, which was not enough for active/standby mode in my environment. The log looks like this:

2016-10-19 09:38:26.392 6256 DEBUG neutron_lbaas.drivers.octavia.driver [req-bee3619a-f9d4-4463-adcd-3cb99826b600 - - - - -] Octavia reports load balancer 2676dac6-c41d-4501-9c41-781a176c6baf has provisioning status of PENDING_CREATE thread_op /usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py:75
2016-10-19 09:38:29.393 6256 DEBUG neutron_lbaas.drivers.octavia.driver [req-bee3619a-f9d4-4463-adcd-3cb99826b600 - - - - -] Timeout has expired for load balancer 2676dac6-c41d-4501-9c41-781a176c6baf to complete an operation.  The last reported status was PENDING_CREATE thread_op /usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py:94

2. A small source-code modification example:

 Push a notification to the alerting system when a Neutron loadbalancer's status changes to ACTIVE or ERROR.

 Edit /usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py:

        if prov_status == 'ACTIVE' or prov_status == 'DELETED':
            kwargs = {'delete': delete}
            if manager.driver.allocates_vip and lb_create:
                kwargs['lb_create'] = lb_create
                # TODO(blogan): drop fk constraint on vip_port_id to ports
                # table because the port can't be removed unless the load
                # balancer has been deleted.  Until then we won't populate the
                # vip_port_id field.
                # entity.vip_port_id = octavia_lb.get('vip').get('port_id')
                entity.vip_address = octavia_lb.get('vip').get('ip_address')
            manager.successful_completion(context, entity, **kwargs)
            if prov_status == 'ACTIVE':
                # requires an "import urllib2" added at the top of driver.py
                urllib2.urlopen('http://********')
                LOG.debug("report status to ******* {0}{1}".format(
                    entity.root_loadbalancer.id, prov_status))
            return
        elif prov_status == 'ERROR':
            manager.failed_completion(context, entity)
            urllib2.urlopen('http://*******')
            LOG.debug("report status to ******* {0}{1}".format(
                entity.root_loadbalancer.id, prov_status))
            return

 



3. Octavia's database is not the same set of tables as Neutron's, yet a lot of the data in the two must match. Make sure the related records stay in sync; otherwise it will cause many problems (speaking from experience).
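A quick way to spot drift is to compare the two databases side by side; the queries below are a sketch (the Octavia table and column names are assumptions, check them against your installation):

mysql> SELECT id, provisioning_status FROM octavia.load_balancer;
mysql> SELECT id, provisioning_status FROM neutron.lbaas_loadbalancers;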


This article was originally posted by superbigsea on the 51CTO blog: http://blog.51cto.com/superbigsea/1862253


