Xen Common Problems
This page answers the most frequently asked questions about Xen. You may also want to check out the articles in Category:FAQ. Each section in this document links to further questions in other FAQ documents.
- 1 General
- 1.1 I'd like to contribute to the Xen Project wiki pages, I have something to add/edit/fix, how can I do it?
- 1.2 I'm interested in being a Xen Project developer! Are there projects to work on?
- 1.3 How does Xen Project compare to KVM? Which one is better?
- 1.4 Is there a "Best Practices" document available?
- 1.5 Is there a list of various 3rd party management tools and (web) interfaces for Xen Project software?
- 1.6 Do you have a list of Xen Project related research papers?
- 1.7 What's the difference between Xen Project Hypervisor (from XenProject.org) and Citrix XenServer (or XCP)?
- 1.8 Can I browse the Xen Project source trees online?
- 1.9 Can I browse the Xen Project Qemu-dm (HVM guest ioemu) source repositories online?
- 1.10 Where do I find more General FAQs?
- 2 Compatibility
- 2.1 Where can I find a list of available Xen Project Domain 0 kernels?
- 2.2 Where can I find a list of available features in different Xen Project enabled kernels?
- 2.3 Where can I find information about Xen Project on RISC CPUs?
- 2.4 How can I check the version of Xen Project hypervisor?
- 2.5 What are the names of different hardware features related to virtualization and Xen?
- 2.6 How can I check if I'm able to run Xen HVM (fully virtualized) guests?
- 2.7 How can I check if my CPU supports HAP (Hardware Assisted Paging) ?
- 2.8 How can I check if my hardware has an IOMMU (VT-d) and it's enabled and supported?
- 2.9 Are there FreeBSD Xen PV kernels/images available?
- 2.10 Where can I find information about NetBSD Xen support?
- 2.11 Where do I find more Compatibility related FAQs?
- 3 Booting
- 3.1 Is there a list of all available Xen hypervisor (xen.gz) commandline boot options for grub.conf?
- 3.2 What's Xen pygrub?
- 3.3 What's Xen pvgrub?
- 3.4 Does Xen pygrub support booting PV guests using GRUB2 config files?
- 3.5 How can I boot Xen HVM guest from a virtual emulated floppy, using an image file as the floppy?
- 3.6 I have problems getting Xen or dom0 kernel to boot, how can I set up a serial console to log and troubleshoot the boot process?
- 3.7 Where do I find more Booting FAQs?
- 4 Guest / DomU
- 4.1 Where can I find a list of all the available domU configuration options in /etc/xen/<guest> cfgfile?
- 4.2 Where can I find optimized Xen PV-on-HVM drivers for Linux HVM (fully virtualized) guests?
- 4.3 I can't start any HVM guests, I get error "libxl__domain_make domain creation fail: cannot make domain: -3". PV guests work fine.
- 4.4 I upgraded PV domU from old Xenlinux kernel to new pvops kernel, and now the domU doesn't work anymore!
- 4.5 My Xen PV guest kernel crashes, how can I debug it and get a stack/call trace?
- 4.6 How do I change virtual/emulated CD .iso image for a Xen HVM guest?
- 4.7 I'm trying to create a new Xen VM but I get error there's not enough free memory
- 4.8 Error: (4, 'Out of memory', 'panic: xc_dom_core.c:442: xc_dom_alloc_segment: segment ramdisk too large (0xee93 > 0x8000 - 0x1755 pages)')
- 4.9 How can I use Xen PVHVM optimized paravirtualized drivers with Ubuntu 11.10 or Fedora 16 HVM guests?
- 4.10 I followed a third-party HOWTO for creating or converting a VM, but it doesn't start. There are errors around access of qemu-dm as the device model. Why?
- 4.11 Where do I find more DomU FAQs?
- 5 Host / Dom0
- 5.1 How can I limit the number of vcpus my dom0 has?
- 5.2 Can I dedicate a cpu core (or cores) only for dom0?
- 5.3 In dom0 how can I access and mount partitions inside a Xen disk image?
- 5.4 Xen dom0 complains about not enough free loop devices when trying to start a new domU or when trying to "mount -o loop" from the cmdline
- 5.5 Can I run graphical X applications in Xen dom0 without installing X server and display drivers?
- 5.6 I'd like to run Xen hypervisor/dom0 on Redhat Enterprise Linux 6 (RHEL6)
- 5.7 Where do I find more Dom0 FAQs?
- 6 Networking
- 6.1 Is there more information about Xen "blktap" disk backend?
- 6.2 What emulated NIC types/models are available in Xen HVM fully virtualized guests?
- 6.3 Xen complains about "hotplug scripts not working"
- 6.4 How to specify custom (non-default) vif-script for some domU network interface?
- 6.5 What's the difference between vifX.Y and tapX.Y network interfaces in dom0?
- 6.6 Using SR-IOV Virtual Function (VF) PCI passthru with Xen
- 6.7 Where do I find more Networking FAQs?
- 7 Console
- 7.1 I can't connect to the console of my guest using "xl console <guest>"
- 7.2 Console of my PV guest shows kernel boot messages and then it stops and doesn't work anymore
- 7.3 Console of my PV guest is totally empty, it doesn't show anything, not even kernel boot messages!
- 7.4 What's the correct console device name for my Xen PV guest
- 7.5 How do I exit domU "xl console" session
- 7.6 Everything seems OK but I still can't access the domU "xl console"!
- 7.7 Can I set up Xen HVM Linux guest to display the kernel boot messages on "xl console" ?
- 7.8 Where do I find more Console FAQs?
- 8 Other Problems / Questions
- 9 USB, PCI and VGA passthrough
- 9.1 Can I use 3D graphics in Xen?
- 9.2 Can I passthrough a USB device connected to dom0 to a Xen guest?
- 9.3 Can I passthrough a PCI device to Xen guest?
- 9.4 Can I passthru a VGA graphics adapter to Xen guest?
- 9.5 I have problems using my graphics card in Xen dom0, with the pvops dom0 kernel.. any tips?
- 9.6 Is there more information about Xen PVSCSI passthrough functionality?
- 9.7 How do I change the resolution of Xen PV domU vfb graphical VNC console?
- 9.8 How can I get resolutions larger than 800x600 for Xen HVM guest graphical VNC console?
- 9.9 Where do I find more VGA Passthrough FAQs?
- 10 HA and Fault Tolerance
I'd like to contribute to the Xen Project wiki pages, I have something to add/edit/fix, how can I do it?
Contributions are very welcome! Create a wiki account, then fill out this form to get editing rights (we used to allow anyone to edit, but spammers have ruined that). More info on getting started:
- Create an account or log in
- Play with the Sandbox
- MediaWiki Cheat Sheet
- MediaWiki Help Contents
- MediaWiki Formatting
- Books about MediaWiki
- Wiki Community Portal
- Wiki Management Tools
- Multi-language Conventions
I'm interested in being a Xen Project developer! Are there projects to work on?
Yes! Please check the Xen Development Projects wiki page for more information!
How does Xen Project compare to KVM? Which one is better?
Please see this blog post for more information: http://blog.xenproject.org/index.php/2010/05/07/xen-%E2%80%93-kvm-linux-%E2%80%93-and-the-community/
Is there a "Best Practices" document available?
See the Xen Project Best Practices wiki page.
Is there a list of various 3rd party management tools and (web) interfaces for Xen Project software?
Yes, please see the Xen Project Ecosystem Directory for tools and solutions using Xen Project.
Do you have a list of Xen Project related research papers?
Yes, please see the Research Papers Directory.
What's the difference between Xen Project Hypervisor (from XenProject.org) and Citrix XenServer (or XCP)?
The Xen Project Hypervisor from XenProject.org is the core hypervisor used in many different products. It includes basic command-line management tools. It is distributed as a tarball and from mercurial source code repositories. You need to compile and install the Xen hypervisor from source and combine it with a kernel of your choice. It can be thought of as the "core engine" you can use to build your own virtualization platform.
Many Linux distributions package Xen and distribute it as prebuilt binaries, combined with Xen-capable kernels of their choice. There are also many third-party management tools available for managing the Xen hypervisor.
There are multiple companies shipping products based on the Xen hypervisor, including Citrix, Oracle, SUSE, Ubuntu, and others.
Citrix XenServer is a commercial product that includes the core Xen Project Hypervisor from XenProject.org, combined with a CentOS-based dom0 distro, a management toolstack, and everything else required to build a ready-made virtualization platform. XenServer was open-sourced in 2014. Citrix XenServer is a dedicated virtualization platform (not a general-purpose Linux distro), and it is shipped as a ready-to-install ISO image that contains everything you need out of the box. Citrix XenServer includes the XAPI management toolstack, allowing you to pool multiple Xen hosts together and manage them centrally using the graphical Citrix XenCenter management tool, the 'xe' command-line tool, or the XenAPI directly from your scripts. The Citrix XenCenter graphical management tool is only available for Windows.
Can I browse the Xen Project source trees online?
Yes, you can browse the source tree and see the changelogs/summaries and track changes online from the Xen Project Repositories wiki page.
Can I browse the Xen Project Qemu-dm (HVM guest ioemu) source repositories online?
Yes, you can see the changelogs/summaries and track changes online from the Xen Project Repositories wiki page.
Where do I find more General FAQs?
See Xen FAQ General
Where can I find a list of available Xen Project Domain 0 kernels?
See the Dom0 Kernels for Xen wiki page.
Where can I find a list of available features in different Xen Project enabled kernels?
See the Xen Kernel Feature Matrix wiki page.
Where can I find information about Xen Project on RISC CPUs?
- Please see Xen ARM (PV) wiki page for Xen Project on ARM CPUs. The Xen ARM project is no longer actively developed. It has been superseded by the Xen ARM with Virtualization Extensions effort.
- Please see Xen ARM with Virtualization Extensions wiki page for Xen Project on ARMv7 CPUs with Virtualization Extensions. This project is actively developed.
- Xen Project has also been ported to MIPS64 platform. See the Netlogic Microsystems press release for more information: http://vmblog.com/archive/2012/01/30/netlogic-microsystems-announces-the-industry-s-first-open-source-xen-hypervisor-for-multi-core-mips64-processors.aspx or the xen-devel mailinglist post: http://lists.xenproject.org/archives/html/xen-devel/2012-01/msg00301.html .
- Please see XenPPC wiki page for Xen on PowerPC CPUs. XenPPC project is not active anymore.
How can I check the version of Xen Project hypervisor?
Run "xl info" in domain 0 (on older versions, you might need to use "xm info"). You can find the hypervisor version in "xen_major", "xen_minor" and "xen_extra" fields. The version is major.minor.extra.
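For scripting, the full version string can be assembled from those "xl info" fields. A minimal sketch; the sample "xl info" output is inlined here so it can be tried without a Xen host (on a real dom0, pipe "xl info" in instead):

```shell
# Sample "xl info" fields (inlined; replace with real `xl info` output).
xl_info='xen_major              : 4
xen_minor              : 2
xen_extra              : .0'

# Assemble major.minor.extra from the three fields.
ver=$(echo "$xl_info" | awk -F': *' '
  /^xen_major/ {maj=$2}
  /^xen_minor/ {min=$2}
  /^xen_extra/ {ext=$2}
  END {print maj "." min ext}')
echo "$ver"
# → 4.2.0
```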
Determining the version from within a domU is dependent on the guest operating system. For Linux guests:
$ dmesg | grep Xen\ version
Xen version: 4.2.0 (preserve-AD)
What are the names of different hardware features related to virtualization and Xen?
- Xen supports running PV (paravirtualized) VMs on any PAE-capable x86 or x86_64 CPU (both Intel and AMD CPUs). CPU Virtualization extensions are NOT required or used to run Xen PV domUs.
- Xen requires CPU Virtualization Extensions to run Xen HVM (Fully Virtualized) VMs. Also the system BIOS needs to support and enable the CPU Virtualization extensions.
- CPU Virtualization extensions are called Intel VT-x or AMD-V and they are required for running Xen HVM guests.
- HAP (Hardware Assisted Paging) can optionally be used to boost the performance of Xen memory management for HVM VMs. HAP is an additional feature of the CPU, and it's not present on older CPUs. Intel HAP is called Intel EPT (Extended Page Tables) and AMD HAP is called AMD NPT (Nested Page Tables). AMD NPT is sometimes also referred to as AMD RVI (Rapid Virtualization Indexing).
- IOMMU (IO Memory Management Unit) support from CPU/BIOS/chipset is needed for Xen IO Virtualization. IOMMU makes it possible to dedicate PCI device securely to a Xen VM by using Xen PCI passthru. Intel IOMMU is called Intel VT-d, and AMD IOMMU is called just AMD IOMMU.
- SR-IOV (Single Root IO Virtualization) can be used together with IOMMU PCI passthru and PCI Express SR-IOV capable devices. SR-IOV needs to be supported and enabled by the system chipset, BIOS and the PCI-e device itself. For example the Intel 82599 10 Gigabit Ethernet NIC supports 64 Virtual Functions (VFs), which means the NIC can be configured to show up as 64 different PCI devices (PCI IDs), so you can use Xen PCI passthrough to pass each VF through to a Xen VM, giving the VM direct access to the PCI-e device. SR-IOV provides excellent IO performance with very low overhead.
How can I check if I'm able to run Xen HVM (fully virtualized) guests?
Xen requires CPU virtualization support/extensions (Intel VT-x, AMD-V) for running HVM guests (for example Windows). Virtualization support needs to be enabled in the system BIOS. Note that Intel VT-x is different from VT-d, which is a totally separate feature (see the IOMMU section below).
NOTE that the Linux dom0 kernel doesn't see the 'vmx' or 'svm' CPU flags in "/proc/cpuinfo", because the Xen *hypervisor* (xen.gz) is using the hardware virtualization features and hides the flags from dom0. Xen dom0 is actually a virtual machine, so it doesn't see all the CPU flags; the hypervisor hides some of them.
You can run "xl info" in dom0 and check from the 'xen_caps' line if Xen is able to run hvm guests. Also Xen hypervisor boot messages in "xl dmesg" show if hardware virtualization (HVM) is enabled or disabled.
Example "xl dmesg | grep -i hvm" output for an Intel system where HVM is supported by the CPU:
(XEN) HVM: VMX enabled
Example "xl dmesg" output for an Intel system where HVM is supported by the CPU but it's disabled in the system BIOS:
(XEN) VMX disabled by Feature Control MSR.
Example "xl dmesg | grep -i hvm" output for an AMD system where HVM is supported:
(XEN) HVM: SVM enabled
You can also see the HVM support status from the "xen_caps" line on Xen hypervisor "xl info | grep xen_caps" output:
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
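A quick scripted check against the xen_caps line; the sample output is inlined here so the check runs anywhere (on a real dom0, use "xl info | grep xen_caps" instead):

```shell
# Sample xen_caps line (inlined; replace with real `xl info` output).
xen_caps='xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64'

# Any hvm-* capability listed means HVM guests can be run.
case "$xen_caps" in
  *hvm-*) hvm=yes ;;
  *)      hvm=no ;;
esac
echo "HVM support: $hvm"
# → HVM support: yes
```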
How can I check if my CPU supports HAP (Hardware Assisted Paging) ?
You can check Xen dmesg by running "xl dmesg" to verify if HAP is supported on your CPU:
(XEN) HVM: Hardware Assisted Paging detected and enabled.
Newer Xen versions (4.1.3+, 4.2.x+) will have info like this:
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB
HAP support is provided by the following features on CPUs:
- Intel EPT (Extended Page Tables).
- AMD NPT (Nested Page Tables), sometimes also called AMD RVI (Rapid Virtualization Indexing).
How can I check if my hardware has an IOMMU (VT-d) and it's enabled and supported?
Hardware IOMMU (Intel VT-d) is required for hardware assisted PCI passthru (I/O virtualization) to Xen HVM (fully virtualized) guest VMs. You can check for IOMMU status from "xl dmesg" output.
Example "xl dmesg" output for a system where IOMMU is supported:
(XEN) I/O virtualisation enabled
Example "xl dmesg" output for a system where IOMMU is not supported:
(XEN) I/O virtualisation disabled
You might also see additional output like:
(XEN) Intel VT-d Snoop Control supported.
(XEN) Intel VT-d DMA Passthrough not supported.
(XEN) Intel VT-d Queued Invalidation supported.
(XEN) Intel VT-d Interrupt Remapping not supported.
And if hardware IOMMU is used for PV guest PCI passthru or not:
(XEN) I/O virtualisation for PV guests enabled
Xen doesn't require hardware IOMMU for PCI passthru to PV guests, but having an IOMMU makes it more secure. PCI passthru to Xen HVM guests requires hardware IOMMU.
Remember there's usually a separate configuration option for IOMMU IO Virtualization (VT-d) in the BIOS, so you need to enable IOMMU from the system BIOS before booting to Xen.
Are there FreeBSD Xen PV kernels/images available?
Please check the following link: http://wiki.freebsd.org/FreeBSD/Xen .
Where can I find information about NetBSD Xen support?
NetBSD supports both Xen dom0 and PV domUs. For more information see: http://www.netbsd.org/ports/xen/
Is there a list of all available Xen hypervisor (xen.gz) commandline boot options for grub.conf?
Yes, please see the XenHypervisorBootOptions wiki page.
What's Xen pygrub?
Pygrub allows you to simply specify in Xen PV guest "/etc/xen/<guest>" cfgfile:
bootloader = "/usr/bin/pygrub"
Which replaces all these options:
kernel = "vmlinuz-<version>"
ramdisk = "initrd-<version>.img"
root = "/dev/xvda1"
extra = "earlyprintk=xen console=hvc0"
When you start a Xen PV guest, pygrub is executed in dom0. It accesses the guest disk configured in the "/etc/xen/<guest>" cfgfile, finds "/boot/grub/grub.conf" (or menu.lst) on the guest filesystem, parses it, copies the configured kernel, initrd image and kernel parameters to dom0, and then boots the guest using that kernel + initrd + parameters. All the domU-specific settings are configured inside the domU, in grub.conf.
Requirements for using pygrub for Xen PV domUs:
- Pygrub binary available in dom0, installed together with Xen.
- Pygrub configured for the guest in "/etc/xen/<guest>" cfgfile
- Guest has /boot/grub/grub.conf (or menu.lst) with proper entries in it. There's actually no need to install the real grub to the MBR of the guest disk, it's enough to just have the grub.conf/menu.lst in the guest.
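To illustrate the last requirement, a minimal "/boot/grub/grub.conf" inside the guest might look like this (kernel version, paths and title are illustrative examples, not from this page):

```
default=0
timeout=5
title Linux PV guest
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0 root=/dev/xvda1 console=hvc0
        initrd /boot/initrd-3.2.0.img
```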
Pygrub makes it much easier to:
- Manage the guest kernels inside every guest, keeping the guest distribution package managers (dpkg,rpm) and package dependencies happy.
- Upgrade guest kernels without touching dom0 or guest configuration files in "/etc/xen/" in dom0.
- Each guest distribution can easily use the default kernel provided by the distribution.
- Pygrub allows you to select which kernel to boot in the "xl console <guest>" session. It's easiest to start the guest with "xl create -c /etc/xen/guest" to attach to the console immediately so you can select the kernel from the pygrub menu. If nothing is chosen, the default entry is booted automatically.
Management tools like "virt-install" and "virt-manager" will use Xen pygrub as a default when you install new guests using them.
What's Xen pvgrub?
Xen pvgrub is like pygrub but more secure.
Pygrub is executed in dom0, so it could pose a security risk if bugs were found in it. Pvgrub is executed inside the Xen PV guest, so it has no such issue.
Pvgrub is a separate build and separate tool, so usually it's easier to begin with pygrub, and then later switch to pvgrub.
See PvGrub wiki page for more information and usage of pvgrub.
Does Xen pygrub support booting PV guests using GRUB2 config files?
Yes, pygrub supports guests using GRUB2 config files.
How can I boot Xen HVM guest from a virtual emulated floppy, using an image file as the floppy?
This is the required configuration in "/etc/xen/<hvmguest>" cfgfile:
fda = '/path/to/floppy.img'
boot = 'a'
You can also use this cmdline method while creating the guest:
xl create /etc/xen/hvmguest.cfg fda=/path/to/floppy.img boot=a
Important note: Make sure SElinux access restrictions are not blocking access to /path/to/floppy.img!
I have problems getting Xen or dom0 kernel to boot, how can I set up a serial console to log and troubleshoot the boot process?
See the XenSerialConsole wiki page.
Where do I find more Booting FAQs?
See Xen FAQ Booting
Guest / DomU
Where can I find a list of all the available domU configuration options in /etc/xen/<guest> cfgfile?
See the XenConfigurationFileOptions wiki page.
Where can I find optimized Xen PV-on-HVM drivers for Linux HVM (fully virtualized) guests?
Please see the XenLinuxPVonHVMdrivers wiki page.
I can't start any HVM guests, I get error "libxl__domain_make domain creation fail: cannot make domain: -3". PV guests work fine.
This usually means you have an Intel CPU with Trusted Execution Technology (TXT) enabled. Enabling this feature in the BIOS can make VT-x unavailable, and VT-x is a prerequisite for running HVM guests. Disable TXT in the BIOS and you should be able to boot your HVM guests.
I upgraded PV domU from old Xenlinux kernel to new pvops kernel, and now the domU doesn't work anymore!
Many people seem to upgrade their Xen guest PV kernels at the same time they upgrade to a newer Xen version. This is actually not a required step! You can still use your old domU kernels with the new Xen version, and with a new dom0 kernel version. Xen hypervisor is the compatibility layer, so dom0 and domUs can have totally different kernels.
Old Xenlinux PV kernels (linux-2.6.18-xen, ubuntu 2.6.24, debian 2.6.26, sles11 2.6.27, sles11 sp1 2.6.32, opensuse 2.6.31 etc) have different device names for virtual disks and virtual console than the new pvops (upstream kernel.org) Linux kernels. The old device names in Xenlinux kernels usually are:
- /dev/sdX for virtual disks, for example /dev/sda
- /dev/xvc0 for the virtual console
The new device names in pvops kernels are:
- /dev/xvdX for virtual disks, for example /dev/xvda. XVD = Xen Virtual Disk.
- /dev/hvc0 for the virtual console
So you need to do changes in the guest for it to boot up and work properly with the new kernel:
- Fix "/etc/xen/<guest>" configuration file and change the virtual disks from /dev/sdX to /dev/xvdX
- Fix the domU kernel root filesystem parameter, i.e. change root=/dev/sda1 to root=/dev/xvda1
- Fix the domU kernel console parameter, i.e. have "earlyprintk=xen console=hvc0" in the extra="" section.
- If you use pygrub to load the kernel/initrd from the guest filesystem then you need to make the changes above in the domU /boot/grub/grub.conf (or menu.lst).
- Make sure the domU kernel has xen-blkfront driver built-in, or if it's built as a module (most common option), then you need to have a proper initrd (ramdisk) image for the domU kernel that actually loads the xen-blkfront driver so that the guest kernel can access the root filesystem! Failing to do this will cause the domU kernel to crash on boot because it can't find the root filesystem!
- Note that you usually can't use the dom0 initrd image for a domU! dom0 initrd is generated for *dom0*, so it tries to load all the drivers for the *physical* hardware, while you have only virtual hardware in the domU! initrd ramdisk images are system specific, where dom0 and domU are different systems. Usually it's easiest to build an initrd/initramfs image for the domU in the *domU*. So boot the domU up with the old kernel, and generate an initrd image for the new pvops kernel. Make sure the new generated initrd-image loads xen-blkfront driver!
- You need set up (or modify) a getty for the new console device in the domU init/inittab settings, so you can get a login prompt on the "xl console" session.
- You need to add the console device name to "/etc/securetty" in the domU so root is allowed to login from the console.
- If you're having networking problems with the new pvops domU kernels make sure you have "xen-netfront" driver loaded in the new kernel. It's usually built as a module in the recent pvops/upstream/distro kernels.
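The console getty and securetty steps above can be sketched as follows (sysvinit-style inittab; the "co" id and the agetty path are illustrative and vary per distro):

```
# /etc/inittab in the domU: spawn a login prompt on the PV console
co:2345:respawn:/sbin/agetty hvc0 9600 linux

# /etc/securetty in the domU: add the console device so root may log in
hvc0
```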
Also see Migrate_from_Linux_2.6.18_to_2.6.31_and_higher wiki page for more information.
My Xen PV guest kernel crashes, how can I debug it and get a stack/call trace?
Edit "/etc/xen/<guest>" cfgfile, and set:
on_crash = "preserve"
This keeps the crashed domain in memory so it can be inspected. Then when your domU guest crashes, run this in dom0:
/usr/lib/xen/bin/xenctx -s System.map-domUkernelversion <domUid> <vcpu-number>
If you're running 64bit dom0, then xenctx might be under "/usr/lib64/". This command will give you a stack trace of the crashed domU kernel on the specified vcpu, allowing you to see where it crashes. You should get the stack trace for every vcpu your guest has! Vcpu numbers start from 0.
Note that you need to use the "System.map" file from the exact kernel version the domU is running, otherwise the stack trace results are not correct!
How do I change virtual/emulated CD .iso image for a Xen HVM guest?
First check "/etc/xen/<guest>" cfgfile and check what is your cdrom device in the guest. It's usually "/dev/hdc".
An example to swap the emulated CD using an iso image:
xl cd-insert <guest> hdc /path/to/new/cd.iso
I'm trying to create a new Xen VM but I get error there's not enough free memory
Tools like "free" or "top" in Xen dom0 show that you have a lot of free memory, but Xen still complains that there's not enough free memory to create/start a new VM, giving errors such as "Out of memory". This usually happens because dom0 is using all the memory, so there's no free memory in the Xen hypervisor (xen.gz) for other VMs. Note that dom0 is actually a Xen virtual machine!
Xen hypervisor dedicates physical memory for each VM, so you need to have actual free unallocated memory in the hypervisor to start a VM. You can check the Xen hypervisor memory usage with the following commands:
Check the amount of total, used and free memory in the Xen hypervisor:
xl info | grep -i mem
and check how much memory each VM is using:
xl list
It's possible that dom0 is using most of your memory, leaving the Xen hypervisor no free memory. You should use the "dom0_mem=" setting for Xen in grub to give dom0 a fixed/dedicated amount of memory, so the rest of the memory is free for other VMs to use. See the XenBestPractices wiki page for more info about configuring the "dom0_mem=" option for Xen.
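For example, a GRUB legacy entry in dom0 might look like this (paths, versions and the memory size are illustrative; see the XenBestPractices wiki page for details):

```
title Xen
        root (hd0,0)
        kernel /boot/xen.gz dom0_mem=1024M,max:1024M
        module /boot/vmlinuz-3.2.0 console=hvc0
        module /boot/initrd-3.2.0.img
```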
Error: (4, 'Out of memory', 'panic: xc_dom_core.c:442: xc_dom_alloc_segment: segment ramdisk too large (0xee93 > 0x8000 - 0x1755 pages)')
Getting this error when trying to start a new VM (running "xl create") means the ramdisk image (initrd/initramfs file) configured for the domU is too large: it is bigger than the amount of memory available to the VM. Remember that ramdisk images are compressed, and they need to fit uncompressed into the domU memory. On Debian/Ubuntu you can decrease the size of the generated initramfs by setting MODULES=dep in /etc/initramfs-tools/initramfs.conf.
How can I use Xen PVHVM optimized paravirtualized drivers with Ubuntu 11.10 or Fedora 16 HVM guests?
Please see the Xen_Linux_PV_on_HVM_drivers wiki page for more information about Xen PVHVM drivers.
I followed a third-party HOWTO for creating or converting a VM, but it doesn't start. There are errors around access of qemu-dm as the device model. Why?
Apparently, some older documentation and third-party instructions suggest using the following line in a VM config file:
device_model = 'qemu-dm'
However, the use of qemu-dm in this context is no longer correct (as of 4.4, at least). The term qemu-dm is a generic term meaning "the current qemu device model". It is not an actual configuration file option. If you need to select a device model, use the configuration syntax:
device_model_version = 'qemu-xen'
to use the default qemu, or alternatively, if you want to use the older version of qemu bundled with the hypervisor:
device_model_version = 'qemu-xen-traditional'
Where do I find more DomU FAQs?
See Xen FAQ DomU
Host / Dom0
How can I limit the number of vcpus my dom0 has?
The recommended way is to use "dom0_max_vcpus=X" boot time option for Xen hypervisor (xen.gz) in grub.conf. You need to reboot after modifying grub.conf. Using this method makes sure you allocate resources only for the actual number of vcpus you need in dom0.
Can I dedicate a cpu core (or cores) only for dom0?
Yes, you can. It might be a good idea, especially for systems running IO-intensive guests. Dedicating a CPU core to dom0 makes sure dom0 always has free CPU time to process the IO requests of the domUs. With a dedicated core there are also fewer CPU context switches, giving better performance.
Specify "dom0_max_vcpus=X dom0_vcpus_pin" options for Xen hypervisor (xen.gz) in grub.conf and reboot.
After rebooting you can verify with "xl vcpu-list" that dom0 vcpus are pinned to only use the matching physical cpus.
The next step is to configure all the guests (domUs) to NOT use those same physical cpus. This can be done by specifying for example cpus="2-7" to all /etc/xen/<guest> cfgfiles. This example would leave physical cpus/cores 0 and 1 only for dom0, and make the guests use cpus/cores 2 to 7.
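Putting it together, each "/etc/xen/<guest>" cfgfile could contain something like this (illustrative values, matching the example above where cores 0-1 are reserved for dom0):

```
vcpus = 2
cpus = "2-7"
```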
In dom0 how can I access and mount partitions inside a Xen disk image?
The easiest way is to use "kpartx" in dom0. It allows you to easily access all the partitions inside an image file or inside LVM volume.
You can use "kpartx -l /path/guestdisk" to list the partitions the image has, and "kpartx -a /path/guestdisk" to add the partition mappings. After the device mapper (dm) mappings have been added, you can access the partitions using "/dev/mapper/guestdiskpX" block devices. There's a separate block device for each partition. You can mount, copy, format, or whatever you need to do and it works just like for "real" partitions.
When you're done using the partitions in dom0, and you have unmounted them, you have to remove the kpartx mappings by running "kpartx -d /path/guestdisk". It's important to do this before using the image again for other purposes or starting the guest again! If you forget to remove the mappings the image might get corrupted when you access the image with other tools or start the guest.
Another neat way which works for me, is to simply attach the block device to dom0 as you would to a domU. e.g.:
xl block-attach Domain-0 file:/disk/image/for/domU xvda w
I have found this to work for OSs which use slices (such as Solaris and *BSD), and are not picked up using kpartx. You should check dmesg and see which device nodes were created for you. Once you are done working with the disk image you can:
xl block-detach Domain-0 xvda
If your domU is LVM-based, see Access_Disk_from_DomUs_when_Xen_was_installed_with_LVM
Xen dom0 complains about not enough free loop devices when trying to start a new domU or when trying to "mount -o loop" from the cmdline
The Linux "loop" module supports a maximum of 8 loop devices (/dev/loop*) by default. Every Xen "file:" backed domU disk uses one loop device in dom0, and a "mount -o loop disk.img /mnt" in dom0 uses a loop device as well.
You can check all the loop devices in use by running "losetup -a". You can detach a loop device (free it) by running "losetup -d /dev/loopX" when it's not mounted or otherwise in use.
You can increase the amount of available loop devices by loading the linux "loop" module with an option "max_loop=X", for example "modprobe loop max_loop=128". You can add "max_loop=X" option to /etc/modprobe.conf, /etc/modules (or whatever cfgfile your dom0 distribution uses for module options) to make it boot time default.
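A quick way to count the loop devices in use, sketched here with sample "losetup -a" output inlined so it runs without root (on a real dom0, pipe "losetup -a" in instead; the image paths are made up):

```shell
# Sample "losetup -a" output (inlined; replace with the real command).
losetup_out='/dev/loop0: [0801]:131073 (/srv/xen/guest1.img)
/dev/loop1: [0801]:131074 (/srv/xen/guest2.img)'

# Each line of losetup -a output is one loop device in use.
in_use=$(printf '%s\n' "$losetup_out" | grep -c '^/dev/loop')
echo "loop devices in use: $in_use"
# → loop devices in use: 2
```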
You can also use Xen "phy:" backed LVM volumes instead of disk images. "phy:" doesn't require loop devices.
Such an error message may also be indicative of other issues (such as the guest crashing and restarting so quickly that xend does not have time to free the older loopback device).
Can I run graphical X applications in Xen dom0 without installing X server and display drivers?
Yes, you can. Many people prefer to not install any graphical drivers or X on dom0 for maximum stability. You can still run graphical applications using ssh X11 forwarding. You can run for example "virt-manager" in dom0 and display the GUI on your desktop/laptop over ssh.
1) Install "xorg-x11-xauth" package in dom0 (centos/rhel/fedora, "xauth" in debian/ubuntu, other distros might have different name for it).
2) Log in to dom0 from your Linux workstation like this: ssh -X root@dom0.
3) After the password is accepted you should notice ssh setting up the ssh X11 forwarding like this:
/usr/bin/xauth: creating new authority file /root/.Xauthority
4) You can now run graphical X applications, for example vnc viewer, virt-viewer etc, and the GUI will be tunneled securely over ssh to your local X desktop.
If you're using Windows you can install "xming" and "putty". Start Xming, and after that enable X11 forwarding in putty settings (Connection -> ssh -> X11 -> Enable X11 forwarding) and then connect to your dom0.
I'd like to run Xen hypervisor/dom0 on Red Hat Enterprise Linux 6 (RHEL6)
Where do I find more Dom0 FAQs?
See Xen FAQ Dom0
Is there more information about Xen "blktap" disk backend?
Yes, Xen actually has two blktap backends:
- blktap1: please see the blktap wiki page for more information.
- blktap2: please see the blktap2 wiki page for more information.
blktap2 supports VHD images, and is considered much more robust than the old blktap1 qemu/qcow image file support. Xen 4.0 includes blktap2 support. Note that the dom0 kernel also needs to have the blktap2 driver.
What emulated NIC types/models are available in Xen HVM fully virtualized guests?
The following emulated network interface cards are available for Xen HVM guests in Xen 3.4:
- Realtek RTL8139 (the default)
- Intel e1000
- NE2000 (ne2k_pci)
Emulation is done by the Qemu-dm process running for each Xen HVM guest. Intel e1000 is known to be the best-performing emulated NIC. Even faster is to use PV-on-HVM drivers, which bypass emulation entirely.
Older Xen versions might not have all the above NIC options available.
Xen complains about "hotplug scripts not working"
This problem is often related to udev. Do you have udev installed? Is your udev the correct/supported version? This error usually includes more information at the end revealing the real reason, for example:
Error: Device 0 (vif) could not be connected. Could not find bridge device br0.
This means exactly what it says: the guest is configured to use a bridge called "br0", but there is no such bridge in dom0. Run "brctl show" to verify which bridges you have. Create the missing bridge, or edit the VM configuration file and make it use another (correct) bridge.
Error: Device 0 (vif) could not be connected. Hotplug scripts not working.
This problem is often caused by not having "xen-netback" driver loaded in dom0 kernel.
The hotplug scripts are located in /etc/xen/scripts by default, and are labeled with the prefix vif-*. Those scripts log to /var/log/xen/xen-hotplug.log, and more detailed information can be found there.
Error: Device 5632 (vbd) could not be connected. Path closed or removed during hotplug add: backend/vbd/2/5632 state: 1
This problem can be caused by not having "xen-blkback" driver loaded in dom0 kernel.
So when dealing with these problems, always check that you have all the required Xen backend driver modules loaded in the dom0 kernel: netbk or xen-netback for networking, and blkbk or xen-blkback for block devices (virtual disks).
Also read /var/log/xen/xl*.log, it most probably includes additional information about the problem and the error messages.
How to specify custom (non-default) vif-script for some domU network interface?
Here's an example of how to configure two NICs (vifs) for a domU, each using a different vif-script:
vif = [ 'mac=00:16:5e:72:04:01,bridge=xenbr0,script=your_custom_script1', 'mac=00:16:5e:72:04:02,bridge=xenbr1,script=your_custom_script2' ]
What's the difference between vifX.Y and tapX.Y network interfaces in dom0?
vifX.Y network interfaces are used with domUs (VMs) using paravirtualized network drivers, which means for pure PV domUs or for Xen HVM guests with paravirtualized PVHVM drivers in use. tapX.Y interfaces are used for HVM guests which are using emulated virtual NICs (Intel E1000, Realtek RTL8139, NE2k).
vifX.Y interfaces are created by the xen-netback backend driver in dom0 kernel. The frontend driver xen-netfront runs in the kernel of each VM.
There's exactly one vif/tap interface per virtual NIC in the VM. "X" means the domain ID, and "Y" is the number of the virtual NIC. So if you have a domU with ID 5 with 3 virtual NICs (eth0, eth1, eth2), the corresponding VIF interfaces in dom0 would be vif5.0, vif5.1, vif5.2.
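The naming rule above can be expressed as a tiny helper; a sketch (the function name is made up for illustration):

```shell
# Build the dom0 vif interface name for a given domain ID and virtual NIC index,
# following the vif<domid>.<nic> convention described above
# (hypothetical helper, not part of the Xen tools)
vif_name() {
    domid=$1
    nic=$2
    printf 'vif%s.%s\n' "$domid" "$nic"
}

vif_name 5 2   # third NIC (eth2) of the domU with ID 5 -> vif5.2
```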
Using SR-IOV Virtual Function (VF) PCI passthru with Xen
Check RHEL5_CentOS5_Xen_Intel_SR-IOV_NIC_Virtual_Function_VF_PCI_Passthru_Tutorial for a tutorial on how to use and configure Xen SR-IOV VF PCI passthru on RHEL5 / CentOS5 Xen with an Intel 10 Gbit/sec 82599 (ixgbe) SR-IOV NIC. The same general idea applies to upstream Xen and other distros, but the required steps are probably slightly different.
Where do I find more Networking FAQs?
I can't connect to the console of my guest using "xl console <guest>"
Do you have "xenconsoled" process running in dom0? If you do, did you try killing it and restarting it? Also, only one console session per domU can exist at a time. Currently, attempting to use more than one console session per domU will not raise an error, but will result in strange behavior.
Console of my PV guest shows kernel boot messages and then it stops and doesn't work anymore
Do you have a getty configured for the console device in the guest? You usually need to add an entry to /etc/inittab in the guest to make it present a login prompt on the "xl console" session. Is the getty configured for the correct console device? See the section below for the correct guest console devices.
Example "/etc/inittab" entry for Debian Lenny (linux-image-2.6.26-2-xen) guest, write this on a new (last) line:
vc:2345:respawn:/sbin/getty 38400 hvc0
Example "/etc/inittab" entry for CentOS 5 (kernel-xen-2.6.18) guest:
co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
After modifying /etc/inittab in the guest, run "kill -1 1" to make init (process id 1) reload the configuration file on-the-fly, or reboot the guest. This should make the console work and login prompt appear on the "xl console" session.
NOTE! Some distros (like newer Ubuntu) don't use "/etc/inittab" anymore, but instead you have to configure the gettys in other places! Ubuntu 9.10 and newer use "/etc/init/*.conf".
# hvc0 - getty
#
# This service maintains a getty on hvc0 from the point the system is
# started until it is shut down again.

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -L hvc0 9600 linux
Console of my PV guest is totally empty, it doesn't show anything, not even kernel boot messages!
This usually means your PV (paravirtual) guest is using a newer pv_ops (upstream kernel.org Linux) kernel that uses a different console device name. You need to add "console=hvc0" to the guest kernel command-line options.
If you're using Xen pygrub or pv-grub to load the kernel from the guest's filesystem, then you need to configure the guest kernel settings in the guest's /boot/grub/grub.conf (or menu.lst), not in dom0! So edit the guest's /boot/grub/grub.conf (or menu.lst) and add the "console=hvc0" option there for the default kernel.
If you're not using pygrub or pv-grub, aka you're configuring everything in /etc/xen/<guest> configuration file in dom0, and you have the kernel/initrd in dom0 filesystem, then you can specify the additional "console=hvc0" parameter in the /etc/xen/<guest> configuration file in dom0, on the extra="" line.
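For the non-pygrub case described above, the relevant lines in the /etc/xen/&lt;guest&gt; configuration file might look like this (the kernel and ramdisk paths here are examples, not requirements):

```
# /etc/xen/<guest> in dom0 (example paths)
kernel = "/boot/vmlinuz-2.6-xen-domU"
ramdisk = "/boot/initrd-2.6-xen-domU"
extra = "console=hvc0"
```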
What's the correct console device name for my Xen PV guest
- Most kernels use "hvc0" as the console device
How do I exit domU "xl console" session
Press ctrl+] or, if you're using PuTTY, press ctrl+5.
Everything seems OK but I still can't access the domU "xl console"!
A couple of things to check:
- Do you have an old "xl console" session still running on the host, possibly stuck in a dead ssh connection?
- Do you have virt-manager running on the host 'reserving' the guest console?
- Did it work earlier?
- Do you have problems with only one guest, or with all guests?
- Does shutting down and then re-starting the guest help?
- Are you really sure you are running a getty in the guest for the correct console device? :)
- Is the guest really up and running? Can you access it over the network with ssh? Can you ping it?
Can I set up Xen HVM Linux guest to display the kernel boot messages on "xl console" ?
Yes, you can. You need to add this line to the "/etc/xen/<hvmguest>" configuration file in dom0:
serial='pty'
And then in the HVM guest grub.conf (inside the HVM VM) configure kernel logging to the (virtual) serial port like this:
#============================================================
# display the grub kernel selector menu on the serial console
serial --unit=0 --speed=9600
terminal --timeout=5 serial console
# kernel entries
title openSUSE 11.1 - 22.214.171.124-0.1
root (hd0,1)
kernel /boot/vmlinuz-126.96.36.199-0.1-default root=LABEL=ROOT resume=LABEL=SWAP splash=silent showopts console=tty1 console=ttyS0
initrd /boot/initrd-188.8.131.52-0.1-default
#================================================
So basically you need to add "console=ttyS0" to the kernel line, and also configure grub to display the kernel selector on the (virtual) serial console using the "serial" and "terminal" settings.
After these settings "xl console <hvmguest>" works just like it does for PV (paravirtual) guests and allows you to see grub and the Linux kernel boot messages of the HVM guest. You can also use other tools (minicom, screen, etc) in dom0 to access the VM console pty device directly.
Also remember to set up a getty in the guest for the serial console device, ttyS0, so that you can also login from the serial console! See above for tips about that.
Where do I find more Console FAQs?
See Xen FAQ Console
Other Problems / Questions
Starting xend fails?
Xend has been deprecated as of Xen 4.2.0, so you should not use the xm command anymore. Instead, use the new xl command, which is largely command-line compatible with xm.
Problems with Xen on Red Hat Enterprise Linux 5 (RHEL5) or CentOS5
Try these links:
Can I Use the Xen Project Hypervisor in CentOS 6?
Yes, using Xen4CentOS.
Can I back up Xen domains
Yes, see Backing up Xen domains @ serverfault
USB, PCI and VGA passthrough
Can I use 3D graphics in Xen?
Yes, please see the XenVGAPassthrough wiki page for more information how to give a VM direct/full access (including video and 3d acceleration) to a graphics (VGA) card.
Can I passthrough a USB device connected to dom0 to a Xen guest?
Yes, please see the XenUSBPassthrough wiki page.
Can I passthrough a PCI device to Xen guest?
Yes, please see the XenPCIpassthrough wiki page.
Can I passthrough a VGA graphics adapter to a Xen guest?
Yes, please see the XenVGAPassthrough wiki page.
I have problems using my graphics card in Xen dom0, with the pvops dom0 kernel.. any tips?
Yes, please see the XenPVOPSDRM wiki page.
Is there more information about Xen PVSCSI passthrough functionality?
Yes, please see the XenPVSCSI wiki page.
How do I change the resolution of Xen PV domU vfb graphical VNC console?
You can specify the pvfb (paravirtual framebuffer) resolution and bpp (amount of colors) for the VM while loading the xen-fbfront driver in the domU kernel.
If xen-fbfront is built as a module, use the following options:
modprobe xen-fbfront video="32,1024,768"
Or add the options to "/etc/modprobe.conf", or whatever file your domU distro uses for driver module options. Note that you might need to regenerate the initrd/initramfs image for the domU kernel if the xen-fbfront driver is auto-loaded at boot time.
Or, if the xen-fbfront driver is built-in to the domU kernel, use the following command-line option for the domU kernel:
xen-fbfront.video=32,1024,768
If you're using Xen pygrub you can place that option in the grub config file inside the domU "/boot/grub/" directory; or, if you're using a kernel/ramdisk from dom0, add those options to the "extra" line in /etc/xen/<domU> on dom0.
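The "video" option packs bpp, width and height into one comma-separated string; a minimal shell sketch of how it decomposes:

```shell
# xen-fbfront's video option has the form "<bpp>,<width>,<height>"
video="32,1024,768"

# Split the string into its three fields
IFS=, read -r bpp width height <<EOF
$video
EOF

echo "${width}x${height} at ${bpp} bpp"
```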
How can I get resolutions larger than 800x600 for Xen HVM guest graphical VNC console?
Edit the "/etc/xen/<hvmvm>" cfgfile and add the following options:
stdvga=1
videoram=16
This will increase the amount of virtual video memory in the HVM VM, allowing it to use bigger resolutions, up to 2048x1536 at 32bpp. If you don't specify "stdvga=1", i.e. you keep using the emulated Cirrus graphics adapter, resolutions up to 1280x1024 are possible with "videoram=16".
After increasing the size of vram configure the resolution from inside the HVM guest in the usual way, just like you'd do on baremetal/native, using the resolution selector or configfile provided by the operating system in the VM.
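The video memory a mode needs is roughly width x height x bytes-per-pixel; a quick sanity check of the 2048x1536 at 32bpp limit mentioned above:

```shell
# Framebuffer memory needed for 2048x1536 at 32 bpp (32 bits = 4 bytes per pixel)
width=2048 height=1536 bpp=32
bytes=$(( width * height * bpp / 8 ))
mb=$(( bytes / 1024 / 1024 ))
echo "needs ${mb} MB"   # fits within videoram=16
```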
Where do I find more VGA Passthrough FAQs?
HA and Fault Tolerance
Is there more information about Xen Fault Tolerance, aka Remus Transparent High Availability (HA) for VMs?
Yes, please see the Remus wiki page for more information.
Arp change notification problems after live migration when using pvops 2.6.32.x dom0 kernel
You need a new enough version of the pvops dom0 kernel (xen/stable-2.6.32.x branch, Aug 2010 or newer); then you can enable the "net.ipv4.conf.<dev>.arp_notify" sysctl setting for the devices that should send the ARP notifications (gratuitous ARP). See this email for more information: http://lists.xensource.com/archives/html/xen-devel/2010-08/msg01595.html .