HowTo access metadata from RDO Havana Instance on Fedora 20 & OpenStack Networking concepts

This post describes how to access OpenStack instance metadata in a Fedora 20 / OpenStack Havana (RDO) environment. It explains how the OpenStack Networking components work and how they are configured, and walks through concrete commands that verify the metadata proxy has been set up correctly.


OpenStack Networking concepts

The OpenStack Networking components are deployed on the Controller, Compute, and Network nodes in the following configuration:

Controller node: may host the Neutron server service, which provides the networking API and communicates with and tracks the agents.
        DHCP agent: spawns and controls dnsmasq processes to provide leases to instances. This agent also spawns neutron-ns-metadata-proxy processes as part of the metadata system.
        Metadata agent: provides a metadata proxy to the nova-api metadata service. The neutron-ns-metadata-proxy processes direct the traffic they receive in their namespaces to the proxy.
        OVS plugin agent: controls OVS network bridges and routes between them via patch, tunnel, or tap ports without requiring an external OpenFlow controller.
        L3 agent: performs L3 forwarding and NAT.

Alternatively, a separate box hosts the Neutron server and all of the services mentioned above.

Compute node: has an OVS plugin agent and openstack-nova-compute service.


Namespaces (view also Identifying and Troubleshooting Neutron Namespaces)

For each network you create, the Network node (or Controller node, if combined) will have a unique network namespace (netns) created by the DHCP and Metadata agents. The netns hosts an interface and IP addresses for dnsmasq and the neutron-ns-metadata-proxy. You can view the namespaces with the `ip netns list` command, and can interact with them via the `ip netns exec <namespace> <command>` command.
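As a sketch, the namespace names returned by `ip netns list` can be split by role. The sample below embeds output taken from this article; on a live node you would substitute the real command for the here-string:

```shell
#!/bin/sh
# Sample `ip netns list` output from this article; on a real
# Network/Controller node use:  netns_list=$(ip netns list)
netns_list='qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d
qdhcp-166d9651-d299-47df-a5a1-b368e87b612f'

# qrouter-* namespaces belong to the L3 agent, qdhcp-* to the DHCP agent
routers=$(printf '%s\n' "$netns_list" | grep '^qrouter-')
dhcp=$(printf '%s\n' "$netns_list" | grep '^qdhcp-')

echo "router namespaces: $routers"
echo "dhcp namespaces:   $dhcp"
# A command can then be run inside one of them, e.g.:
#   ip netns exec "$routers" ip addr
```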

As mentioned in Direct_access_to_Nova_metadata, in an environment running Neutron, a request from your instance must traverse a number of steps:

    1. From the instance to a router, 
    2. Through a NAT rule in the router namespace, 
    3. To an instance of the neutron-ns-metadata-proxy, 
    4. To the actual Nova metadata service 


   Reproducing Direct_access_to_Nova_metadata, I was able to get the list of EC2 metadata available, but not their values. However, my major concern was getting the values of the metadata obtained in the post Direct_access_to_Nova_metadata, and also those at the /openstack location. The latter seem to me no less important than those present in the EC2 list; not all of the /openstack metadata are provided by the EC2 list.


The commands run below are intended to verify that the Nova and Neutron setup was performed successfully; otherwise, passing the four steps 1,2,3,4 above will fail and force you to analyse the corresponding log files (view References). It does not matter whether you set up the RDO Havana cloud environment manually or via packstack.
   
Run on Controller Node :- 

[root@dallas1 ~(keystone_admin)]$ ip netns list

qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d
qdhcp-166d9651-d299-47df-a5a1-b368e87b612f

Check the NAT rules in the Cloud controller's router namespace; they should show that traffic to port 80 on 169.254.169.254 is redirected to the host at port 8700:


[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d iptables -L -t nat | grep 169

REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports  8700
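For scripting this check, the redirect port can be pulled out of the NAT rule. The rule text below is the sample shown above; the parsing is a minimal sketch (on a live node, pipe the real `iptables` output instead):

```shell
#!/bin/sh
# NAT rule as printed above by `iptables -L -t nat | grep 169`
nat_rule='REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports  8700'

# The redirect port is the last field of the rule
port=$(printf '%s\n' "$nat_rule" | awk '{print $NF}')
echo "metadata redirect port: $port"
```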


Check routing table inside the router namespace:

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d ip r

 default via 192.168.1.1 dev qg-8fbb6202-3d 
10.0.0.0/24 dev qr-2dd1ba70-34  proto kernel  scope link  src 10.0.0.1 
192.168.1.0/24 dev qg-8fbb6202-3d  proto kernel  scope link  src 192.168.1.100 
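The external gateway used for NAT can likewise be read off the namespace routing table; a sketch using the sample output from above:

```shell
#!/bin/sh
# Routing table as printed above by `ip netns exec qrouter-... ip r`
routes='default via 192.168.1.1 dev qg-8fbb6202-3d
10.0.0.0/24 dev qr-2dd1ba70-34  proto kernel  scope link  src 10.0.0.1
192.168.1.0/24 dev qg-8fbb6202-3d  proto kernel  scope link  src 192.168.1.100'

# The default route points at the external gateway
gw=$(printf '%s\n' "$routes" | awk '/^default/ {print $3}')
echo "external gateway: $gw"
```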

[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d netstat -na
Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN   

Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ ip netns exec qdhcp-166d9651-d299-47df-a5a1-b368e87b612f netstat -na

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 10.0.0.3:53             0.0.0.0:*               LISTEN     
tcp6       0      0 fe80::f816:3eff:feef:53 :::*                    LISTEN     
udp        0      0 10.0.0.3:53             0.0.0.0:*                          
udp        0      0 0.0.0.0:67              0.0.0.0:*                          
udp6       0      0 fe80::f816:3eff:feef:53 :::*                               
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path

[root@dallas1 ~(keystone_admin)]$ iptables-save | grep 8700

-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT


[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep 8700
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2830/python
  

[root@dallas1 ~(keystone_admin)]$ ps -ef | grep 2830
 
nova      2830     1  0 09:41 ?        00:00:57 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2856  2830  0 09:41 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2874  2830  0 09:41 ?        00:00:09 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2875  2830  0 09:41 ?        00:00:01 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
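The chain from listener to process can also be scripted: extract the PID from the netstat line and feed it to `ps -fp` on a real node. The listener line below is the sample from above, and the parsing is a sketch, not a definitive tool:

```shell
#!/bin/sh
# Listener line as printed above by `netstat -lntp | grep 8700`
netstat_line='tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2830/python'

# The last field is PID/program; split on "/" to get the PID
pid=$(printf '%s\n' "$netstat_line" | awk '{split($NF, a, "/"); print a[1]}')
echo "nova-api pid: $pid"
# On a live node:  ps -fp "$pid"
```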


On another cluster


[root@dfw02 ~(keystone_admin)]$ ip netns list
qrouter-86b3008c-297f-4301-9bdc-766b839785f1
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8
qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b
qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 8700
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      2746/python         
[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 2746
nova      2746     1  0 08:57 ?        00:02:31 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2830  2746  0 08:57 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2851  2746  0 08:57 ?        00:00:10 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2858  2746  0 08:57 ?        00:00:02 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
root      9976 11489  0 16:31 pts/3    00:00:00 grep --color=auto 2746



Inside the namespaces, the output looks like this:

[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1  netstat -lntp | grep 8700
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      4946/python         
[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4946
root      4946     1  0 08:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/86b3008c-297f-4301-9bdc-766b839785f1.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=86b3008c-297f-4301-9bdc-766b839785f1 --state_path=/var/lib/neutron --metadata_port=8700 --verbose --log-file=neutron-ns-metadata-proxy-86b3008c-297f-4301-9bdc-766b839785f1.log --log-dir=/var/log/neutron
root     10396 11489  0 16:33 pts/3    00:00:00 grep --color=auto 4946


[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8  netstat -lntp | grep 8700
tcp        0      0 0.0.0.0:8700            0.0.0.0:*               LISTEN      4746/python         
[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4746
root      4746     1  0 08:58 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/bf360d81-79fb-4636-8241-0a843f228fc8.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=bf360d81-79fb-4636-8241-0a843f228fc8 --state_path=/var/lib/neutron --metadata_port=8700 --verbose --log-file=neutron-ns-metadata-proxy-bf360d81-79fb-4636-8241-0a843f228fc8.log --log-dir=/var/log/neutron


 1. At this point you should be able (inside any running Havana instance) to launch a browser ("links" at least, if there is no lightweight X environment) against

      http://169.254.169.254/openstack/latest (not EC2)

The response will be:  meta_data.json password vendor_data.json
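The three documents listed in the response can then be fetched one by one. The loop below only builds the URLs; inside an instance, each URL would be handed to curl or links:

```shell
#!/bin/sh
# Build the URLs of the documents listed at /openstack/latest
base=http://169.254.169.254/openstack/latest
urls=""
for doc in meta_data.json password vendor_data.json; do
  urls="$urls $base/$doc"
done
echo "$urls"
# Inside an instance each one can then be fetched, e.g.:
#   curl "$base/meta_data.json"
```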


     





  


   What is cURL: http://curl.haxx.se/docs/faq.html#What_is_cURL

   Now you should be able to run, on the F20 instance:


[root@vf20rs0404 ~] # curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                                 Dload  Upload   Total   Spent    Left  Speed
    100  1286  100  1286    0     0   1109      0  0:00:01  0:00:01 --:--:--  1127
                . . . . . . . . 
                "uuid": "10142280-44a2-4830-acce-f12f3849cb32",
                "availability_zone": "nova",
                "hostname": "vf20rs0404.novalocal", 
                "launch_index": 0, 
                "public_keys": {"key2": "ssh-rsa . . . . .  Generated by Nova\n"}, 
                "name": "VF20RS0404"

On another instance (in my case Ubuntu 14.04 )

 root@ubuntutrs0407:~#curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json

 Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                             Dload  Upload   Total   Spent    Left  Speed
 100  1292  100  1292    0     0    444      0  0:00:02  0:00:02 --:--:--   446
            {"random_seed": "...", 
            "uuid": "8c79e60c-4f1d-44e5-8446-b42b4d94c4fc", 
            "availability_zone": "nova", 
            "hostname": "ubuntutrs0407.novalocal", 
            "launch_index": 0, 
            "public_keys": {"key2": "ssh-rsa .... Generated by Nova\n"}, 
            "name": "UbuntuTRS0407"}
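Individual fields can be extracted from meta_data.json without extra tooling. The fragment below reuses values from the Ubuntu instance above; the sed expression is a minimal sketch that assumes a flat JSON layout (a real JSON parser is safer):

```shell
#!/bin/sh
# Fragment of the meta_data.json shown above (values from this article)
meta='{"uuid": "8c79e60c-4f1d-44e5-8446-b42b4d94c4fc", "hostname": "ubuntutrs0407.novalocal", "name": "UbuntuTRS0407"}'

# Pull a single string field out with sed (sketch; flat JSON assumed)
host_name=$(printf '%s' "$meta" | sed -n 's/.*"hostname": "\([^"]*\)".*/\1/p')
echo "hostname: $host_name"
```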

Running VMs on Compute node:-

[root@dallas1 ~(keystone_boris)]$ nova list
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name          | Status    | Task State | Power State | Networks                    |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| d0f947b1-ff6a-4ff0-b858-b63a3d07cca3 | UbuntuTRS0405 | SUSPENDED | None       | Shutdown    | int=10.0.0.7, 192.168.1.106 |
| 8c79e60c-4f1d-44e5-8446-b42b4d94c4fc | UbuntuTRS0407 | ACTIVE    | None       | Running     | int=10.0.0.6, 192.168.1.107 |
| 8775924c-dbbd-4fbb-afb8-7e38d9ac7615 | VF20RS037     | SUSPENDED | None       | Shutdown    | int=10.0.0.2, 192.168.1.115 |
| d22a2376-33da-4a0e-a066-d334bd2e511d | VF20RS0402    | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.103 |
| 10142280-44a2-4830-acce-f12f3849cb32 | VF20RS0404    | ACTIVE    | None       | Running     | int=10.0.0.5, 192.168.1.105 |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+

Launching a browser to http://169.254.169.254/openstack/latest/meta_data.json on another two-node Neutron GRE+OVS F20 cluster; the output is sent directly to the browser.


   




2. I have provided some information about the OpenStack metadata API, which is available at /openstack. If you are instead interested in the EC2 metadata API, the browser should be launched against http://169.254.169.254/latest/meta-data/



   This allows you to get any of the displayed parameters, for instance via CLI:
   
ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/instance-id
i-000000a4

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-hostname
ubuntutrs0407.novalocal

ubuntu@ubuntutrs0407:~$ curl  http://169.254.169.254/latest/meta-data/public-ipv4
192.168.1.107

To verify the instance-id, launch virt-manager connected to the Compute node, which shows the same value, "000000a4".
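The EC2-style instance-id is, as far as I can tell, the hexadecimal form of Nova's internal instance number, which is why virt-manager shows "000000a4". Converting it is a one-liner (a sketch under that assumption):

```shell
#!/bin/sh
# EC2-style instance id as returned above by the metadata service
ec2_id='i-000000a4'

# Strip the "i-" prefix and convert the hex part to decimal
hex=${ec2_id#i-}
dec=$(printf '%d\n' "0x$hex")
echo "hex: $hex  decimal: $dec"
```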

  Another option in text mode is the "links" browser:

   $ ssh -l ubuntu -i key2.pem 192.168.1.109

   Inside the Ubuntu 14.04 instance:

   # apt-get -y install links
   # links

    Press ESC to get to the menu:-
    

   
   

References
