Use host-cpu-tune to Fine-Tune XenServer 6.2.0 Performance

Using host-cpu-tune

The tool is located at /usr/lib/xen/bin/host-cpu-tune and allows XenServer administrators to configure a host's dom0 vCPU count and pinning strategy. When executed with no parameters, it displays usage help:

[root@host ~]# /usr/lib/xen/bin/host-cpu-tune
Usage: /usr/lib/xen/bin/host-cpu-tune { show | advise | set <dom0_vcpus> <pinning> [--force] }
         show     Shows current running configuration
         advise   Advise on a configuration for current host
         set      Set host’s configuration for next reboot
          <dom0_vcpus> specifies how many vCPUs to give dom0
          <pinning>    specifies the host’s pinning strategy
                       allowed values are 'nopin' or 'xpin'
          [--force]    forces xpin even if VMs conflict

Examples: /usr/lib/xen/bin/host-cpu-tune show
          /usr/lib/xen/bin/host-cpu-tune advise
          /usr/lib/xen/bin/host-cpu-tune set 4 nopin
          /usr/lib/xen/bin/host-cpu-tune set 8 xpin
          /usr/lib/xen/bin/host-cpu-tune set 8 xpin --force
[root@host ~]#


The advise mode looks at the total number of pCPUs in the host and recommends as follows:
# pCPUs < 4   ===> same number of vCPUs for dom0, no pinning
# pCPUs < 24  ===> 4 vCPUs for dom0, no pinning
# pCPUs < 32  ===> 6 vCPUs for dom0, no pinning
# pCPUs < 48  ===> 8 vCPUs for dom0, no pinning
# pCPUs >= 48 ===> 8 vCPUs for dom0, exclusive pinning
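The table above is simple enough to sketch as a small shell function. This is only an illustration of the heuristic, not the tool's actual implementation:

```shell
# Sketch of the advise heuristic: given a pCPU count, print the
# recommended dom0 vCPU count and pinning strategy.
advise_sketch() {
    pcpus=$1
    if   [ "$pcpus" -lt 4 ];  then echo "$pcpus nopin"
    elif [ "$pcpus" -lt 24 ]; then echo "4 nopin"
    elif [ "$pcpus" -lt 32 ]; then echo "6 nopin"
    elif [ "$pcpus" -lt 48 ]; then echo "8 nopin"
    else                           echo "8 xpin"
    fi
}

advise_sketch 2    # -> 2 nopin
advise_sketch 16   # -> 4 nopin
advise_sketch 64   # -> 8 xpin
```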

The utility works in three distinct modes:

  1. Show: This mode displays the current dom0 vCPU count and infers the current pinning strategy.
    Note: This functionality only examines the current state of the host. If the configuration has been changed (for example, with the set command) and the host has not yet been rebooted, the output may be inaccurate.
  2. Advise: This recommends a dom0 vCPU count and a pinning strategy for this host.
    Note: This functionality takes into account the number of pCPUs available in the host and makes a recommendation based on heuristics determined by Citrix. System administrators are encouraged to experiment with different settings and find the one that best suits their workloads.
  3. Set: This functionality changes the host configuration to the specified number of dom0 vCPUs and pinning strategy.
    Note: This functionality may change parameters in the host boot configuration files. It is highly recommended to reboot the host as soon as possible after using this command.
    Warning: Assigning zero vCPUs to dom0 (with set 0 nopin) will prevent the host from booting.

Resetting to Default

The host-cpu-tune tool uses the same heuristics as the XenServer Installer to determine the number of dom0 vCPUs. The installer, however, never activates exclusive pinning because of race conditions with Rolling Pool Upgrades (RPUs). During RPU, VMs with manual pinning settings can fail to start if exclusive pinning is activated on a newly upgraded host.

To reset the dom0 vCPU pinning strategy to default:

  1. Run the following command to find out the number of recommended dom0 vCPUs:
    [root@host ~]# /usr/lib/xen/bin/host-cpu-tune advise
  2. Configure the host accordingly, without any pinning:
    [root@host ~]# /usr/lib/xen/bin/host-cpu-tune set <count> nopin
    where <count> is the recommended number of dom0 vCPUs indicated by the advise command.
  3. Reboot the host. The host will now have the same settings as it did when XenServer 6.2.0 was installed.
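The two steps above can be combined into a short script. Since the exact wording of the advise output is not shown in this article, the parsing below is an assumption (it takes the first integer in the text) and should be verified against the actual output on your host:

```shell
TUNE=/usr/lib/xen/bin/host-cpu-tune

# Extract the recommended dom0 vCPU count from advise output.
# ASSUMPTION: the count is the first integer in the text; check this
# against your host's real advise output before relying on it.
parse_count() {
    echo "$1" | grep -oE '[0-9]+' | head -n 1
}

# Sample text standing in for real output (real usage: advice=$("$TUNE" advise)):
advice="Recommended: 4 vCPUs, no pinning"
count=$(parse_count "$advice")
echo "$TUNE set $count nopin"   # run this command, then reboot
```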

Usage in XenServer Pools

Settings configured with this tool only affect a single host. If the intent is to configure an entire pool, this tool must be used on each host separately.
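Because the tool is per-host, configuring a whole pool is just a loop over its members. A minimal sketch, assuming passwordless SSH as root and a hand-maintained host list (both are assumptions about your environment, and "set 4 nopin" is only an example setting):

```shell
# Hypothetical pool member list; substitute your hosts' addresses.
HOSTS="xs-host1 xs-host2 xs-host3"

# Build the per-host commands (printed rather than executed,
# so the plan can be reviewed before running it).
cmds=""
for h in $HOSTS; do
    cmds="${cmds}ssh root@$h /usr/lib/xen/bin/host-cpu-tune set 4 nopin
"
done
printf '%s' "$cmds"
```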

When one or more hosts in the pool are configured with exclusive pinning, migrating VMs between hosts may change a VM's pinning characteristics. For example, if a VM is manually pinned with the VCPUs-params:mask parameter, migrating it to a host configured with exclusive pinning may fail. This happens when one or more of that VM's vCPUs are pinned to a pCPU index exclusively allocated to dom0 on the destination host.

Two additional commands provide information about CPU topology and current vCPU placement:

xenpm get-cpu-topology
xl vcpu-list

