Xen Linux PV on HVM drivers
Xen PVHVM drivers for Linux HVM guests
This page lists some resources about using the optimized paravirtualized PVHVM drivers (also called PV-on-HVM drivers) with Xen fully virtualized (HVM) guests running unmodified Linux kernels. Xen PVHVM drivers completely bypass the Qemu emulation and provide much faster disk and network I/O performance.
Note that Xen PV (paravirtual) guests automatically use PV drivers, so there's no need for these drivers with PV domUs: they already use the optimized drivers. The PVHVM drivers are only required for Xen HVM (fully virtualized) guest VMs.
Xen PVHVM Linux driver sources:
- Upstream vanilla kernel.org Linux 2.6.36 kernel and later versions contain Xen PVHVM drivers out-of-the-box!
- Jeremy's pvops kernel in xen.git (branch: xen/stable-2.6.32.x) contains the new PVHVM drivers. These drivers are included in upstream Linux kernel 2.6.36+. See XenParavirtOps wiki page for more information about pvops kernels.
- Easy-to-use "old unmodified_drivers" PV-on-HVM drivers patch for Linux 2.6.32 (Ubuntu 10.04 and other distros): http://lists.xensource.com/archives/html/xen-devel/2010-05/msg00392.html
- The Xen source tree contains an "unmodified_drivers" directory, which has the "old" PV-on-HVM drivers. These drivers build easily with Linux 2.6.18 and 2.6.27, but require some hackery to get them to build with 2.6.3x kernels (see below for help).
Xen PVHVM drivers in upstream Linux kernel
Xen developers rewrote the Xen PVHVM drivers in 2010 and submitted them for inclusion in the upstream Linux kernel. The Xen PVHVM drivers were merged into upstream kernel.org Linux 2.6.36, and various optimizations were added in Linux 2.6.37. Today upstream Linux kernels include the Xen PVHVM drivers out-of-the-box.
List of email threads and links to git branches related to the new Xen PV-on-HVM drivers for Linux:
These new Xen PVHVM drivers are also included in Jeremy's pvops kernel xen.git, in branch "xen/stable-2.6.32.x". See the XenParavirtOps wiki page for more information.
There's also a backport of the Linux 2.6.36+ Xen PVHVM drivers to Linux 2.6.32 kernel, see these links for more information:
- The git branch: http://xenbits.xen.org/gitweb?p=people/sstabellini/linux-pvhvm.git;a=shortlog;h=refs/heads/2.6.32-pvhvm
Xen PVHVM drivers configuration example
In dom0 in the "/etc/xen/<vm>" configuration file use the following syntax:
vif = [ 'mac=00:16:5e:02:07:45, bridge=xenbr0, model=e1000' ]
disk = [ 'phy:/dev/vg01/vm01-disk0,hda,w', ',hdc:cdrom,r' ]
xen_platform_pci=1
With this example configuration, when "xen_platform_pci" is enabled ("1") the guest VM can use the optimized PVHVM drivers: xen-blkfront for disk and xen-netfront for network. When "xen_platform_pci" is disabled ("0"), the guest VM will use Xen Qemu-dm emulated devices instead: an emulated IDE disk and an emulated Intel e1000 NIC.
NOTE! If you specify "type=ioemu" on the "vif" line, the PVHVM drivers WILL NOT work! Don't specify the "type" parameter for the vif. (With type=ioemu the PVHVM NIC in the VM will get a MAC address of all zeroes and thus won't work.)
If you need a full PVHVM configuration file example, see below.
Enable or disable Xen PVHVM drivers from dom0
In the configuration file for the Xen HVM VM ("/etc/xen/<vm>") in dom0 you can control the availability of the Xen Platform PCI device. The Xen PVHVM drivers require that virtual PCI device in order to initialize and operate.
To enable Xen PVHVM drivers for the guest VM:
xen_platform_pci=1
To disable Xen PVHVM drivers for the guest VM:
xen_platform_pci=0
"xen_platform_pci" setting is available in Xen 4.x versions. It is NOT available in the stock RHEL5/CentOS5 Xen or in Xen 3.x.
Linux kernel commandline boot options for controlling Xen PVHVM drivers unplug behaviour
When using the optimized Xen PVHVM drivers with a fully virtualized Linux VM, there are some kernel options you can use to control the "unplug" behaviour of the Qemu emulated IDE disk and network devices. The Qemu emulated devices need to be "unplugged" at the beginning of the Linux VM boot process, so there's no risk of data corruption from both the Qemu emulated device and the PVHVM device being active at the same time (both are provided by the same backend).
The main option is "xen_emul_unplug=", documented in the kernel's Documentation/kernel-parameters.txt. It takes a comma-separated list of values: "ide-disks" (unplug primary master IDE disks), "aux-ide-disks" (unplug non-primary-master IDE disks), "nics" (unplug emulated network devices), "all" (unplug all emulated NICs and IDE disks), "unnecessary" (treat unplugging as unnecessary even if the host did not respond to the unplug protocol), and "never" (never unplug, even if the version check succeeds).
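As an illustration, the unplug behaviour can be set on the guest kernel command line in grub.conf; the kernel version, root device and initrd name below are only examples:

```
title Linux HVM guest with PVHVM drivers
    root (hd0,0)
    kernel /vmlinuz-2.6.36 ro root=/dev/vg00/root xen_emul_unplug=ide-disks,nics
    initrd /initramfs-2.6.36.img
```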
Tips about how to build the old "unmodified_drivers" with different Linux versions
- With Linux 2.6.32: http://lists.xensource.com/archives/html/xen-devel/2010-04/msg00502.html
- With Linux 2.6.27: http://wp.colliertech.org/cj/?p=653
- With arbitrary kernel version (custom patch): http://blog.alex.org.uk/2010/05/09/linux-pv-drivers-for-xen-hvm-building-normally-within-an-arbitrary-kernel-tree/
Some Linux distributions also ship Xen PVHVM drivers as binary packages
- RHEL5 / CentOS 5
- RHEL6 / CentOS 6
- SLES 10
- SLES 11
- Linux distros that ship with Linux 2.6.36 or later kernel include Xen PVHVM drivers in the default kernel.
You might also want to check out the XenKernelFeatures wiki page.
Verifying Xen Linux PVHVM drivers are using optimizations
If you're using at least the Xen 4.0.1 hypervisor and the new upstream Linux PVHVM drivers available in Linux 2.6.36 and later versions, follow these steps:
- Make sure you're using the PVHVM drivers.
- Add "loglevel=9" parameter for the HVM guest Linux kernel cmdline in grub.conf.
- Reboot the guest VM.
- Check "dmesg" for the following text: "Xen HVM callback vector for event delivery is enabled".
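The check in step 4 can be scripted as a small sketch; the helper name is made up, and it is demonstrated here against a sample log line rather than live dmesg output:

```shell
# pvhvm_callback_enabled: succeed if the input contains the dmesg line
# that indicates the HVM callback vector optimization is active.
pvhvm_callback_enabled() {
    grep -q 'Xen HVM callback vector for event delivery is enabled'
}

# On a live HVM guest you would run:  dmesg | pvhvm_callback_enabled
# Demonstrated against a sample dmesg line:
echo '[    0.000000] Xen HVM callback vector for event delivery is enabled' \
    | pvhvm_callback_enabled && echo 'PVHVM callback vector: enabled'
```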
Some distro kernels, or a custom kernel built from the xen/stable-2.6.32.x git branch, might have these optimizations available as well.
Example HVM guest configuration file for PVHVM use
Example configuration file ("/etc/xen/f16hvm") for Xen 4.x HVM guest VM using Linux PVHVM paravirtualized optimized drivers for disks and network:
builder='hvm'
name = "f16pvhvm"
memory = 1024
vcpus=1
pae=1
acpi=1
apic=1
vif = [ 'mac=00:16:4f:02:02:15, bridge=virbr0, model=e1000' ]
disk = [ 'phy:/dev/vg01/f16pvhvm-disk0,hda,w', 'file:/root/iso/Fedora-16-x86_64-DVD.iso,hdc:cdrom,r' ]
boot='cd'
xen_platform_pci=1
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
sdl=0
vnc=1
vncpasswd=''
stdvga=0
serial='pty'
tsc_mode=0
usb=1
usbdevice='tablet'
keymap='en'
This example has been tested and works on a Fedora 16 Xen dom0 host using the included Xen 4.1.2 and Linux 3.1 kernel in dom0, with a Fedora 16 Xen PVHVM guest VM also running the stock F16 Linux 3.1 kernel with the out-of-the-box PVHVM drivers.
Verify Xen PVHVM drivers are working in the Linux HVM guest kernel
Run "dmesg | egrep -i 'xen|front'" in the HVM guest VM. This example is from Fedora 16 PVHVM guest Linux 3.1 kernel. You should see messages like this:
# dmesg | egrep -i 'xen|front'
[    0.000000] DMI: Xen HVM domU, BIOS 4.1.2 10/21/2011
[    0.000000] Hypervisor detected: Xen HVM
[    0.000000] Xen version 4.1.
[    0.000000] Xen Platform PCI: I/O protocol version 1
[    0.000000] Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
[    0.000000] Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
[    0.000000] ACPI: RSDP 00000000000ea020 00024 (v02 Xen)
[    0.000000] ACPI: XSDT 00000000fc0134b0 00034 (v01 Xen HVM 00000000 HVML 00000000)
[    0.000000] ACPI: FACP 00000000fc0132d0 000F4 (v04 Xen HVM 00000000 HVML 00000000)
[    0.000000] ACPI: DSDT 00000000fc003440 0FE05 (v02 Xen HVM 00000000 INTL 20100528)
[    0.000000] ACPI: APIC 00000000fc0133d0 000D8 (v02 Xen HVM 00000000 HVML 00000000)
[    0.000000] Booting paravirtualized kernel on Xen HVM
[    0.000000] Xen HVM callback vector for event delivery is enabled
[    0.051316] Xen: using vcpuop timer interface
[    0.051322] installing Xen timer for CPU 0
[    1.253888] xen/balloon: Initialising balloon driver.
[    1.253904] xen-balloon: Initialising balloon driver.
[    1.257832] Switching to clocksource xen
[    1.264861] xen: --> pirq=16 -> irq=8 (gsi=8)
[    1.264928] xen: --> pirq=17 -> irq=12 (gsi=12)
[    1.264973] xen: --> pirq=18 -> irq=1 (gsi=1)
[    1.265014] xen: --> pirq=19 -> irq=6 (gsi=6)
[    1.265075] xen: --> pirq=20 -> irq=4 (gsi=4)
[    1.265134] xen: --> pirq=21 -> irq=7 (gsi=7)
[    1.643937] xen: --> pirq=22 -> irq=28 (gsi=28)
[    1.643940] xen-platform-pci 0000:00:03.0: PCI INT A -> GSI 28 (level, low) -> IRQ 28
[    1.704280] xen: --> pirq=23 -> irq=23 (gsi=23)
[    1.721598] XENBUS: Device with no driver: device/vfb/0
[    1.721600] XENBUS: Device with no driver: device/vbd/768
[    1.721601] XENBUS: Device with no driver: device/vbd/5632
[    1.721603] XENBUS: Device with no driver: device/vif/0
[    1.721604] XENBUS: Device with no driver: device/console/0
[    2.377167] vbd vbd-5632: 19 xenbus_dev_probe on device/vbd/5632
[    2.378134] blkfront: xvda: flush diskcache: enabled
[    6.266448] Initialising Xen virtual ethernet driver.
The following lines in particular are related to the Xen PVHVM drivers:
[    0.000000] Xen Platform PCI: I/O protocol version 1
[    0.000000] Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
[    0.000000] Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
[    0.000000] Xen HVM callback vector for event delivery is enabled
[    2.377167] vbd vbd-5632: 19 xenbus_dev_probe on device/vbd/5632
[    2.378134] blkfront: xvda: flush diskcache: enabled
[    6.266448] Initialising Xen virtual ethernet driver.
Here you can see the Qemu-dm emulated devices being unplugged for safety reasons. The "xen-blkfront" paravirtualized driver is used for the block device "xvda", and the xen-netfront paravirtualized network driver is being initialized.
Verify network interface "eth0" is using the optimized paravirtualized xen-netfront driver:
# ethtool -i eth0
driver: vif
version:
firmware-version:
bus-info: vif-0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
Driver "vif" means it's the Xen Virtual Interface paravirtualized driver.
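A small sketch of automating that check; the helper name is invented, and it is demonstrated here against canned ethtool output rather than a live NIC:

```shell
# nic_driver: print the "driver:" field from `ethtool -i`-style output.
nic_driver() {
    awk -F': ' '/^driver:/ { print $2 }'
}

# On a live guest you would run:  ethtool -i eth0 | nic_driver
# and expect "vif". Demonstrated against sample output:
printf 'driver: vif\nbus-info: vif-0\n' | nic_driver
```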
You can also verify from the "/proc/partitions" file that your disk devices are named xvd* (Xen Virtual Disk), which means you're using the optimized paravirtualized disk/block driver:
# cat /proc/partitions
major minor  #blocks  name

 202        0   31457280 xvda
 202        1     512000 xvda1
 202        2    4194304 xvda2
 202        3   26749952 xvda3
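The same check can be scripted as a sketch; the helper name is invented, and it is demonstrated against a canned /proc/partitions excerpt:

```shell
# pv_disk_names: print the device-name column from /proc/partitions,
# skipping the header line and the blank line after it.
pv_disk_names() {
    awk 'NR > 2 && $4 != "" { print $4 }'
}

# On a live guest you would run:  pv_disk_names < /proc/partitions
# All names should start with "xvd" when the PV block driver is in use.
printf 'major minor  #blocks  name\n\n 202        0   31457280 xvda\n 202        1     512000 xvda1\n' | pv_disk_names
```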
Using Xen PVHVM drivers with Ubuntu HVM guests
At least Ubuntu 11.10 does NOT build the Xen platform PCI driver into the kernel, so if you want to use the Xen PVHVM drivers you need to add the "xen-platform-pci" driver, in addition to "xen-blkfront", to the initramfs image. The Xen platform PCI driver is required for Xen PVHVM to work!
Using Xen PVHVM drivers with Ubuntu 11.10:
- Add "xen-platform-pci" to file "/etc/initramfs-tools/modules".
- Re-generate the kernel initramfs image with: "update-initramfs -u".
- Make sure "/etc/fstab" mounts partitions by label or UUID: the disk device names will change (from sd* to xvd*) when the PVHVM drivers are in use. This doesn't affect LVM volumes.
- Set "xen_platform_pci=1" in "/etc/xen/<ubuntu>" configfile in dom0.
- Start the Ubuntu VM and verify it's using PVHVM drivers (see above).
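Steps 1 and 2 above can be sketched as a shell snippet; the modules-file path is parameterized here so it can be tried on a scratch file first (the real commands must be run as root inside the guest):

```shell
# add_pvhvm_module: append "xen-platform-pci" to an initramfs-tools
# modules file unless it is already listed (idempotent).
add_pvhvm_module() {
    f="$1"
    touch "$f"
    grep -qx 'xen-platform-pci' "$f" || echo 'xen-platform-pci' >> "$f"
}

# On a real Ubuntu guest you would run (as root):
#   add_pvhvm_module /etc/initramfs-tools/modules
#   update-initramfs -u
# Demonstrated here against a scratch file:
add_pvhvm_module /tmp/modules.example
cat /tmp/modules.example
```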
- For workloads that favor PV MMUs, PVonHVM can have a small performance hit compared to PV.
- For workloads that favor nested paging (in hardware e.g. Intel EPT or AMD NPT), PVonHVM performs much better than PV.
- Best to take a close look and measure your particular workload(s).
- Follow trends in hardware-assisted virtualization.
- 64bit vs. 32bit can also be a factor (e.g. in Stefano's benchmarks, linked to below, 64bit tends to be faster), but it's always best to actually do the measurements.
- More benchmarks very welcome!
Take a look at Stefano's slides and Xen Summit talk: