Booting Overview


This document describes the different boot mechanisms and config file options that are available for Xen guests.

The following diagram shows the first steps of the Linux boot and startup sequence, providing a baseline for explaining the differences between the Xen boot mechanisms.

HVM Guest Boot Process

From a user's perspective the HVM Boot and install process is identical to the process on a native PC or server.

However, behind the scenes:

  • The hvmloader is copied into guest memory by Xen (under the control of the toolstack). The hvmloader sets up all necessary information for the Device Emulator, which emulates a hardware environment that appears exactly like a physical machine. 
  • The correct firmware is automatically loaded as a binary blob (usually located in /usr/lib64/xen/boot) and copied into guest memory based on config settings. The default can be overridden via the firmware config file option; see the various firmware options in the man page.
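The firmware override mentioned above can be sketched in a guest config file. This is an illustrative xl.cfg fragment, not a complete or authoritative config; the paths, guest name, and the exact set of accepted firmware values depend on your Xen version and distro (check xl.cfg(5)):

```
# Sketch of an HVM guest config (xl.cfg); names and paths are illustrative.
name   = "hvm-guest"
type   = "hvm"
memory = 2048
vcpus  = 2
disk   = [ 'file:/var/lib/xen/images/hvm-guest.img,xvda,w' ]

# Override the firmware blob that would otherwise be selected by default;
# consult xl.cfg(5) on your system for the values your Xen version supports.
firmware = "uefi"
```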

Direct Kernel Boot

When installing a new guest OS it is often useful to boot directly from a kernel and initrd stored in the host OS' file system, allowing command line arguments to be passed directly to the installer. This capability is usually available for PV, PVH and HVM guests.

In this case, the bootloader and firmware are bypassed. Direct Kernel Boot is often an easy way to start the installation process and create a disk image. To install VMs this way, you need to be a host administrator (i.e. you need access to Dom0). Direct Kernel Boot is also useful for netboot. Note that most distros make kernel and initrd images available for download. In some cases, you will need to download an ISO, mount it and use the kernel and initrd from the ISO.
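A direct kernel boot can be sketched with the kernel, ramdisk and extra options in the guest config. This is an illustrative xl.cfg fragment under assumed paths; the kernel and initrd here are assumed to have been downloaded into dom0 beforehand:

```
# Sketch of a direct-kernel-boot config (xl.cfg); paths are illustrative.
name    = "installer"
type    = "pv"
memory  = 1024
kernel  = "/var/lib/xen/boot/vmlinuz"     # kernel fetched from a distro's netboot tree
ramdisk = "/var/lib/xen/boot/initrd.gz"   # matching installer initrd
extra   = "console=hvc0"                  # command line arguments passed to the installer
disk    = [ 'file:/var/lib/xen/images/guest.img,xvda,w' ]
```

Since the bootloader and firmware are bypassed, Xen loads this kernel into guest memory directly and starts it with the given command line.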


PV Guest Boot Process

Unlike HVM guests, PV guests do not follow the standard boot process. Instead of booting via the standard x86 boot entry, PV guests use alternative boot entries (in Linux, startup_32() or startup_64()). For Xen this is also known as the Xen PV boot path. This has implications for end-users, who cannot simply boot Xen PV guests by making an installable medium available during the boot process. To work around this, and to provide a user experience similar to a normal install, the Xen community has developed two approaches to solve this problem:

  • Enable a standard, widely used bootloader to support Xen PV guests (in this case PVGrub and, more recently, GRUB2)
  • A Xen tool (called PyGrub) that exposes a standard bootloader interface (in this case GRUB) and hides the differences in the boot process between HVM and PV guests

PVGrub

PVGrub is a boot manager for Xen PV VMs which was originally a fork of GRUB. In 2015, support for Xen PV guests was included in GRUB2. The following figure outlines the boot process with PVGrub:

Note that PVGrub is a more secure and more efficient alternative to PyGrub for booting domU images. Unlike PyGrub, it runs an adapted version of the GRUB boot loader inside the created domain itself, and uses the regular domU facilities to read the disk mounted as the root directory, fetch files from the network, and so on. It eventually loads the PV kernel and chain-boots it. PVGrub allows host admins to configure which guests and kernel versions a guest admin can install: this is also one of its main drawbacks. It can also be used for PXE booting.

Xen does not come with PVGrub: you will need to install an appropriate distro package (make sure that Xen support is enabled) or build it from source.
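Booting via PVGrub can be sketched by pointing the guest's kernel option at the PVGrub image installed on the host. This is an illustrative fragment; the binary's name and location vary by distro, architecture, and whether you use the classic PVGrub or the GRUB2 Xen build:

```
# Sketch: booting a PV guest via PVGrub (xl.cfg); paths are illustrative.
name   = "pv-guest"
type   = "pv"
memory = 1024
kernel = "/usr/lib/xen/boot/pv-grub-x86_64.gz"   # PVGrub image from the distro package
disk   = [ 'file:/var/lib/xen/images/pv-guest.img,xvda,w' ]
```

PVGrub then runs inside the new domain and reads its GRUB menu and the actual kernel from the guest's own disk.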


PyGrub

PyGrub enables you to start Linux domUs with a kernel that resides inside the domU instead of a kernel that lies in the filesystem of the dom0. This makes management easier: each domU manages its own kernel and initrd, meaning you can use the guest's built-in package manager to update the kernels, instead of having to track and update kernels stored in your dom0. It also allows easy migration of HVM'ed Linux guests - there is no need to extract the installed kernel & initrd. The following figure outlines the boot process with PyGrub:
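In the guest config, using PyGrub amounts to setting the bootloader option instead of kernel/ramdisk. This is an illustrative xl.cfg fragment with assumed names and paths:

```
# Sketch: booting a PV guest via PyGrub (xl.cfg); paths are illustrative.
name       = "pv-guest"
type       = "pv"
memory     = 1024
bootloader = "pygrub"   # dom0 tool that reads the guest's own GRUB config
disk       = [ 'file:/var/lib/xen/images/pv-guest.img,xvda,w' ]
# No kernel=/ramdisk= lines: PyGrub extracts them from the guest filesystem.
```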

When installing PV guests this way, you will frequently follow this pattern:

  • Step 1: Get vmlinuz & initrd.gz from a distro
  • Step 2: Create DomU filesystem
  • Step 3: Set up config for Direct Kernel Boot, then start the guest
  • Step 4: Perform the OS Install. Fix any loose ends that the installer didn’t handle
  • Step 5: Change config to use pygrub. Then shut down and restart guest
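The config change between step 3 and step 5 can be sketched as follows; the paths are illustrative:

```
# Step 3: install-time config lines (Direct Kernel Boot)
kernel  = "/var/lib/xen/boot/vmlinuz"
ramdisk = "/var/lib/xen/boot/initrd.gz"
extra   = "console=hvc0"

# Step 5: after the OS install, replace the three lines above with:
bootloader = "pygrub"
```

From then on, the guest boots the kernel that its own package manager maintains.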

An example of this workflow can be found here.


PVH Guest Boot Process

PVH guests are intended to be guests that do not require device emulation. Integrated boot support is not yet implemented, so Direct Kernel Boot has to be used. In the future, Xen will boot through a minimal EFI environment for guests, which will ensure that the boot process for Xen PVH guests behaves like a standard boot process.
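A PVH guest booted via Direct Kernel Boot can be sketched like a PV direct boot with the guest type changed. This is an illustrative xl.cfg fragment; the type = "pvh" setting requires a Xen version with PVH support, and the paths are assumptions:

```
# Sketch of a PVH guest config using Direct Kernel Boot (xl.cfg);
# paths are illustrative, and PVH support depends on the Xen version.
name    = "pvh-guest"
type    = "pvh"
memory  = 1024
vcpus   = 2
kernel  = "/var/lib/xen/boot/vmlinuz"
ramdisk = "/var/lib/xen/boot/initrd.gz"
extra   = "console=hvc0 root=/dev/xvda1"
disk    = [ 'file:/var/lib/xen/images/pvh-guest.img,xvda,w' ]
```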