Get xVM information from Domain 0


You should now be comfortable with creating and managing guest images, including using different types of images as well as distributing different devices and resources to the guests. In this chapter, we cover some of the more advanced aspects of guest resource management. First we present tools for acquiring information that will be useful for assessing how resources are currently shared among guests. Then we move on to guest memory management, VCPU management including the VCPU scheduler, and finally end with IO schedulers. We include a discussion of Xen guest isolation characteristics, paying particular attention to handling of severe resource consumption in one guest so that other guests remain stable.

Accessing Information about Guests and the Hypervisor

In Chapter 6, "Managing Unprivileged Domains," we introduced xm list, a simple command for getting the status of running guest domains. A number of similar tools are available that provide access to detailed information about guests as well as the hypervisor—xm info, xm dmesg, xm log, xm top, and xm uptime. These tools lend themselves to tasks such as debugging odd behavior, getting a better understanding of what's going on under the hood, auditing the security of the system, and observing the impact of recent administrative actions.

xm info

The xm info utility provides a convenient command for grabbing systemwide and hardware information such as the number of physical CPUs, the hostname, and the version of the hypervisor. Much of this information is not typically available from inside a guest, Domain0 or otherwise, because guests see only virtualized devices, not the physical hardware. This information can be essential for performance analysis. We start with an example of the xm info command as shown in Listing 12.1.

The rows in Listing 12.1 are gathered from three main sources: general system information, hardware specifications, and Xen-specific information. We cover each section in turn.

Listing 12.1. An Example of xm info

[root@dom0]# xm info
host                   : xenserver
release                : 2.6.16.33-xen0
version                : #7 SMP Wed Jan 24 14:29:29 EST 2007
machine                : i686
nr_cpus                : 4
nr_nodes               : 1
sockets_per_node       : 2
cores_per_socket       : 1
threads_per_core       : 2
cpu_mhz                : 2800
hw_caps                : bfebfbff:20000000:00000000:00000180:0000641d:00000000:00000001
total_memory           : 3071
free_memory            : 0
xen_major              : 3
xen_minor              : 0
xen_extra              : .4-1
xen_caps               : xen-3.0-x86_32
xen_pagesize           : 4096
platform_params        : virt_start=0xfc000000
xen_changeset          : unavailable
cc_compiler            : gcc version 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)
cc_compile_by          : root
cc_compile_domain      :
cc_compile_date        : Tue Jan 23 10:43:24 EST 2007
xend_config_format     : 3
[root@dom0]#

The first four attributes displayed are taken from the uname function and mostly relate to the privileged guest domain's kernel.

  • host The hostname of the system. This is not necessarily the DNS hostname assigned to the machine on the network, but the local hostname usually displayed at the command prompt. Xen does not check whether the local hostname matches the DNS hostname.

  • release The Domain0 kernel's release number.

  • version The build information for Domain0's kernel, including the build number, whether it is an SMP kernel, and the date it was compiled.

  • machine The platform the Domain0 kernel was built for. In the case of x86 architectures, such as i686 and i386, this attribute is not necessarily the precise hardware platform but is definitely compatible with the underlying hardware.

The next nine attributes displayed are all hardware specifications.

  • nr_cpus The number of logical CPUs present in the system. The number of logical CPUs in a system is the maximum number of threads that can be run simultaneously on all physical CPUs combined. Physical CPUs refers to the actual processors on the motherboard. Multiple threads can be run on each CPU due to either hyperthreading or multiple cores.

  • nr_nodes Number of NUMA cells. NUMA stands for Non-Uniform Memory Access and is a way of associating each processor with a portion of memory located physically closer to that processor for faster access. If a computer's motherboard does not support NUMA, this value is set to 1, as is the case in Listing 12.1; if NUMA were supported, this number would be greater than 1. The number of NUMA cells does not affect the total number of logical CPUs, only the distribution of work on those CPUs.

  • sockets_per_node Number of CPU sockets per NUMA cell. A socket is the physical location on which a processor sits. Only sockets that house a processor and are being used are recognized and counted as a socket.

  • cores_per_socket CPU cores per CPU socket. Today multicore processors are common. Each core acts like a separate CPU while sharing the caches. Thus each separate core acts as an independent logical CPU.

  • threads_per_core The number of hardware threads each core can run. Whether single or multicore, a processor may also support hyperthreading, in which case each core presents itself as two logical CPUs.

  • cpu_mhz This is the maximum clock speed of each processor.

  • hw_caps Stands for hardware capabilities and is a bit vector representing the CPU flags also normally available in /proc/cpuinfo. These flags tell the operating system whether things such as Physical Address Extension (PAE) are available. The corresponding flags from cpuinfo for this particular machine are shown in Listing 12.2 and are much more readable than a bit vector. If you need to figure out whether your machine supports a feature such as PAE, look to this file.

    Listing 12.2. CPU Flags in /proc/cpuinfo

    [root@dom0]# grep flags /proc/cpuinfo
    flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc pni monitor ds_cpl cid cx16 xtpr lahf_lm
    [root@dom0]#

  • total_memory The total amount of RAM available, used and unused.

  • free_memory The amount of RAM not used by either guests or the Xen hypervisor. This memory is claimable if a guest requests more memory.
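As a quick sanity check, the topology attributes above should multiply out to nr_cpus. A small shell sketch using the values from Listing 12.1:

```shell
# Multiply out the topology fields reported by xm info in Listing 12.1;
# the product should equal nr_cpus (4 on this machine).
nr_nodes=1
sockets_per_node=2
cores_per_socket=1
threads_per_core=2
echo $(( nr_nodes * sockets_per_node * cores_per_socket * threads_per_core ))
```

If the product ever disagrees with nr_cpus, it usually means some logical CPUs were disabled at boot or hyperthreading was turned off in the BIOS.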

The rest of the attributes are Xen specific.

  • xen_major The first number in a Xen release number. This number represents a major turning point in the development of Xen. In this book we are only using Xen 3, which was a large improvement on Xen 2.

  • xen_minor The middle number in a Xen release number. This number is incremented for additions of smaller features and fixes of major bugs, while the overall direction of the product is still represented by the xen_major number.

  • xen_extra The third and last number in a Xen release number. This represents minor changes such as bug fixes. The higher the extra number is, the more stable it should be compared to earlier versions.

  • xen_caps This contains the Xen hypervisor interface version, which currently would be either 2.0 or 3.0. It also specifies whether this is on an x86 or some other platform such as ppc, and whether it is built for 32-bit or 64-bit. Finally, a "p" suffix indicates that the hypervisor is Physical Address Extension (PAE) enabled. It is called xen_caps because the Xen version defines some of the capabilities of Xen.

  • xen_pagesize The size, in bytes, of a page of memory, which is the granularity at which the hypervisor manages memory. This value is fixed by the architecture; on x86, whether 32-bit or 64-bit, a page is 4096 bytes, while some other architectures use larger pages.

  • platform_params The Xen hypervisor reserves some amount of space at the top of virtual memory. This parameter shows the address at which the hypervisor turns over memory to the guests. Above this address, the hypervisor reigns. The value of this cut-off point depends on the amount of virtual memory available to the guest and is influenced by whether the system is 64-bit or 32-bit and whether it has PAE.

  • xen_changeset The Mercurial revision number. This number is useful only if you compiled Xen from source code downloaded from the Xen Mercurial version control server.

  • cc_compiler The compiler version used to build the hypervisor, which is important if you are planning on compiling modules or new guest kernels. The compiler version of a module and of the kernel it loads into should be the same. If a module was compiled with a different version, you get a message similar to Listing 12.3 when trying to load the module using insmod.

    Listing 12.3. Module Load Errors

    [root@dom0]# insmod ./ath/ath_pci.ko
    insmod: error inserting './ath/ath_pci.ko': -1 Invalid module format
    dom0# modinfo ./ath/ath_pci.ko
    filename:       ./ath/ath_pci.ko
    author:         Errno Consulting, Sam Leffler
    description:    Support for Atheros 802.11 wireless LAN cards.
    version:        0.9.2
    license:        Dual BSD/GPL
    vermagic:       2.6.18-1.2798.fc6 SMP mod_unload 586 REGPARM 4KSTACKS gcc-3.4
    <entry continues, but is irrelevant for what we're discussing>
    [root@dom0]#

    The error received when trying to load the module in Listing 12.3 is not very descriptive. However, compare the kernel and compiler versions in the vermagic (a truncated form of version magic) line of Listing 12.3 (2.6.18-1.2798.fc6 and gcc 3.4) with the kernel and compiler versions given previously in Listing 12.1 (2.6.16.33-xen0 and gcc 4.1.2). vermagic records the kernel version and compiler the module was compiled for. The kernel versions match in major and minor number (2.6), but one is 2.6.18 and the other is 2.6.16. Also, the module was compiled with gcc 3.4 and the kernel with gcc 4.1.2. In this case, the problem is the gcc version mismatch, and it can be resolved by recompiling the module after switching to the matching gcc version.

  • cc_compile_by The user that compiled the kernel.

  • cc_compile_domain An identifying marker, possibly the DNS hostname, of the user who compiled the kernel. If you compile your own kernel, you might not see anything displayed here.

  • cc_compile_date The date and time the kernel was compiled.

The last attribute is actually Xend specific.

  • xend_config_format This value is hard coded in the source code for xm info. It is used to represent certain features and formats supported by the current Xen version.
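All of these attributes are easy to consume from scripts. As an illustrative sketch (the helper name xm_info_field is our own, not a Xen command), a small awk filter can pull a single value out of xm info output:

```shell
# xm_info_field: read `xm info`-style "key : value" lines on stdin and
# print the value for the requested key. Simple fields only; values that
# themselves contain colons (such as hw_caps) would be split.
xm_info_field() {
    awk -v key="$1" -F' *: *' '$1 == key { print $2 }'
}

# On a live system you might run:
#   xm info | xm_info_field free_memory
```

A script could use this, for example, to check free_memory before deciding whether a new guest will fit.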

xm dmesg

The Xen hypervisor prints its own system messages independent of any guest's kernel. When a Xen machine powers on, right before Domain0 boots, messages prefixed with (XEN) are printed; the prefix signifies a message from the hypervisor. The xm dmesg command displays just these hypervisor-specific messages, whereas the Linux dmesg command displays Domain0's kernel messages as well as the hypervisor's messages.

For convenience, we view only the beginning and end of an example Xen dmesg output as shown in Listing 12.4.

Listing 12.4. An Example of xm dmesg

[root@dom0]# xm dmesg
 __  __            _____  ___  _  _      _
 \ \/ /___ _ __   |___ / / _ \| || |    / |
  \  // _ \ '_ \    |_ \| | | | || |_ __| |
  /  \  __/ | | |  ___) | |_| |__   _|__| |
 /_/\_\___|_| |_| |____(_)___(_) |_|    |_|


 http://www.cl.cam.ac.uk/netos/xen
 University of Cambridge Computer Laboratory


 Xen version 3.0.4-1 (root@) (gcc version 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)) Tue Jan 23 10:43:24 EST 2007
 Latest ChangeSet: unavailable


(XEN) Command line: /xen-3.0.gz console=vga
(XEN) Physical RAM map:
(XEN)  0000000000000000 - 000000000009d400 (usable)
(XEN)  000000000009d400 - 00000000000a0000 (reserved)


<dmesg output removed here>


(XEN)  ENTRY ADDRESS: c0100000
(XEN) Dom0 has maximum 4 VCPUs
(XEN) Scrubbing Free RAM: .done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing(Rate-limited: Errors and warnings)
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen).
[root@dom0]#

With the --clear option, it's possible to clear the hypervisor's message buffer, as shown in Listing 12.5.

Listing 12.5. Clearing xm dmesg Log

[root@dom0]# xm dmesg --clear
[root@dom0]# xm dmesg


[root@dom0]#

The buffer has been cleared, so nothing is displayed the second time xm dmesg is run. There is rarely a reason to do this because the message buffer is circular; it never runs out of memory but instead overwrites the oldest data.

xm log

There exist a few different logging constructs in Xen that report to three files: xend.log, xend-debug.log, and xen-hotplug.log, all contained in /var/log/xen/ by default. The log file names and locations can be changed either through /etc/xen/xend-config.sxp or their respective scripts. Listing 12.6 shows how to change the log file's location. The xend.log location is commented out, and a new location is specified for xend.log.

Listing 12.6. Xend Log Configuration

#(logfile /var/log/xen/xend.log)
(logfile /xendlog/xend.log)

It's possible to set the logging level desired for xend.log. If you browse the Xen logs, you will notice that most of the messages are DEBUG messages. These aren't always necessary, and in some cases you may not want the clutter. To change the log level, edit the line in xend-config.sxp that looks like Listing 12.7. The possible logging levels, in increasing order of severity, are DEBUG, INFO, WARNING, ERROR, and CRITICAL.

Messages are usually written to xend.log only when an administrator executes a command on a guest, such as starting, pausing, or shutting down a guest. The number of messages at the DEBUG level will not noticeably affect the performance of the Xen system.

DEBUG messages contain information on the status of a current operation and the parameters that are passed to commands.

INFO mostly logs what commands the administrator has executed along with what virtual devices have been created and destroyed.

WARNING messages indicate that something irregular happened and may be a problem.

ERROR messages are printed when something is broken, like a guest was started unsuccessfully or other situations where Xen did not behave how the user requested.

A CRITICAL message means the entire Xen system may be broken, rather than a single guest or operation. You should never see a CRITICAL log message.

Unless you are developing guest images and testing whether they are created successfully, the WARNING log level should suffice. On the other hand, if you aren't sure which levels you will need, you can keep a verbose level and use the grep command to extract only the level of messages you want from the log.

Listing 12.7. Loglevel Configuration

#(loglevel DEBUG)
(loglevel INFO)
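As mentioned earlier, even if you leave the level at DEBUG, grep can cut the log down to the severe messages. A minimal sketch, assuming the default log location and that the level keyword appears as a bare token on each line:

```shell
# Show only WARNING-and-worse messages from the default xend log.
# Adjust the path if you relocated the log in xend-config.sxp.
grep -E '(WARNING|ERROR|CRITICAL)' /var/log/xen/xend.log
```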

The purpose and use of each log file is as follows:

  • xend.log Records the actions of xm and other programs that interact with Xend; this is the file that xm log prints.

  • xend-debug.log When a script or program encounters a bug or some fault of the code, the output is printed to xend-debug.log. Look at this file if the xend.log file doesn't offer enough information when trying to fix a problem.

  • xen-hotplug.log Records errors that have to do with connecting or accessing devices such as vifs, virtual bridges, hard drives, and so on. The default location of this log file is the same as the other two, /var/log/xen. However, you can change where the log is saved by modifying a single line in /etc/xen/scripts/xen-hotplug-common.sh. The directory path in the line that looks like exec 2>>/var/log/xen/xen-hotplug.log should be changed to the location you want to print the log.

xm top

Similar to the top command in Linux, the xm top command runs an interactive ncurses program that displays all running guests in a formatted table with statistics on how they are using resources. xm top actually runs the program xentop, but the xm command in general can be more convenient because you only have to remember a single command. xentop is useful for forming a quick picture of what the system is doing at a given point in time and watching how certain actions affect the Xen system. Figure 12.1 shows what a typical instance of xentop looks like.


xentop displays a dynamic, changing list of statistics on all running Xen guests. This includes information otherwise difficult to obtain, such as percentage of CPU usage and percentage of memory usage per domain. The screenshot in Figure 12.1 has elements numbered to match their descriptions in the following list:

1— Current wall clock time.

2— Xen version running.

3— The total number of domains followed by a tally of which domains are in which states. For a description of what the states mean, see the discussion of xm list in Chapter 6.

4— The total amount of physical memory, memory being used, and free memory.

5— The name of each domain in alphabetical order.

6— The state as would be printed in xm list.

7— The amount of time spent running on a CPU, as would be printed in xm list.

8— Current CPU usage presented as a percentage of total CPU potential in the system. Thus, if there are two CPUs, each accounts for 50 percent of the total potential.

9— Amount of memory allocated to the domain. Covered later in this chapter.

10— Percentage of total system memory allocated to the guest domain.

11— Maximum amount of memory a domain may possess. Covered later in this chapter.

12— Maximum amount of memory a domain may possess presented as a percentage of total system memory.

13— Number of VCPUs owned by a guest domain whether they be active or inactive.

14— Number of network devices owned by a guest domain.

15— Amount of network traffic in kilobits that is leaving a domain through any interface it owns.

16— Amount of network traffic in kilobits that is entering a domain through any interface it owns.

17— Number of VBDs (virtual block devices) attached to the domain.

18— The number of OO (Out of) requests for a block device. There are a number of kernel threads that work to satisfy read and write requests to VBDs. Sometimes a kernel thread is told by a scheduler that there are requests to satisfy, but no requests are found in the queue of requests. Thus, the kernel thread is out of requests. When the OO request counter is high, it is a sign that the physical block device cannot keep up with the guest's IO demand. When the OO request counter is low, the physical block device is sufficient for the current IO workload.

19— Number of read requests for any block device.

20— Number of write operations for any block device.

21— Used with labels and security policies covered in Chapter 11, "Securing a Xen System." SSID here is the security identifier that Xen's access control module associates with the domain.

22— VCPUs the guest has, even if not currently in use, and a total of how many seconds each VCPU has spent executing on a logical CPU.

23— Network interfaces the guest is aware of and statistics on RX and TX.

24— Virtual Block Devices owned by the guest are shown as well as read and write statistics.

You can manipulate a lot of this data in several ways through command line arguments and hotkeys in the application. The Q key, for example, quits the application. The hotkey commands are shown at the bottom left of the xentop ncurses window shown in Figure 12.1.

The following list describes the keys that are used to interact with xentop after it's running. Many of these commands can also be passed to xentop on the command line when starting the program.

D— Sets the number of seconds between updates of the displayed information. This may also be set on the command line with the -d or --delay option.

N— Toggles the network interface statistics from being shown as in item 23 in the preceding list. This may also be set on the command line with the --networks option.

B— Toggles rows of data showing which virtual block devices are attached to which guest domains. This may also be set on the command line with the -x or --vbds option.

V— Toggles rows of data, showing which VCPUs are attached to which guest domains. This may also be set on the command line with the --vcpus option.

R— Toggles repeating the table header above each domain's rows. This may also be set on the command line with the --repeat-header option.

S— The table is initially sorted in alphabetical order by name; this command cycles the sort through the other columns.

Q— Quits the program.

Arrows— Scrolls through the table of guests. Scrolling is used when you have a lot of guest domains currently running.

The most typical reason for using xm top is looking for the guests that are using the most of some resource, like network bandwidth, processor time, or main memory. Another common usage is examining the effects of a DomU configuration change before and after the change takes place.
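For scripted before-and-after comparisons, xentop's batch mode is convenient: xentop -b -i 1 prints the table once without the ncurses interface and exits. The following is a hedged sketch; the helper name top_cpu_domains is ours, and it assumes the default column order in which CPU(%) is the fourth column, so verify against the header line on your system:

```shell
# top_cpu_domains: read a `xentop -b -i 1` table on stdin and print
# domain names sorted by CPU percentage, busiest first. Assumes the
# default column order, where CPU(%) is column 4.
top_cpu_domains() {
    awk 'NR > 1 { print $4, $1 }' | sort -rn | awk '{ print $2 }'
}

# Typical use on a live system:
#   xentop -b -i 1 | top_cpu_domains | head -3
```

Running this before and after a configuration change gives a quick, loggable view of which guests gained or lost CPU share.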

xm uptime

The classic Linux uptime command shows how long the current machine has been running since it was last booted. On a Xen machine in Domain0, xm uptime shows how long each of the guests has been running since it was created. Listing 12.8 shows an example. This is not the same as the CPU time shown by the xm list command; uptime is independent of how much time a guest has spent executing on any CPU.

Listing 12.8. An Example of xm uptime

[root@dom0]# xm uptime
Name                              ID Uptime
Domain-0                           0 21 days,  3:08:08
generic-guest                    19  6 days,  5:43:32
redhat-guest                     27  1:16:57
ubuntu-guest                      1 21 days,  0:50:45
gentoo-guest                      4 21 days,  0:38:12
[root@dom0]#
