Xen Project Software Overview
{{sidebar
 
| name        = Xen Project Overview
 
| outertitle  = Shortcuts
 
 
| outertitlestyle = text-align: left;
 
| headingstyle = text-align: left;
 
| contentstyle = text-align: left;
 
 
| heading1    = Features
 
| content1    = [[Xen_Project_Release_Features|Xen Project Features]]
 
 
| heading2    = Xen Project-enabled Kernels
 
| content2    = [[Dom0_Kernels_for_Xen|Xen Project-Enabled operating systems]]
 
| content3    = [[DomU_Support_for_Xen|PV-Enabled operating systems]]
 
| content4    = [[Xen Kernel Feature Matrix]]
 
 
| heading5    = Guest types
 
| content5    = [[Paravirtualization_(PV)|Paravirtualization (PV)]]
 
| content6    = [[#HVM|HVM]]
 
| content7    = [[PV_on_HVM|PV on HVM]]
 
 
| heading8    = PV on HVM Drivers
 
| content8    = [[Xen_Linux_PV_on_HVM_drivers|Information about using PV-on-HVM drivers]]
 
| content9    = [[Using_Xen_PV_Drivers_on_HVM_Guest|HowTo on PV-on-HVM drivers]]
 
| content10    = [[Xen_FAQ_Drivers,_Windows|Windows drivers]]
 
 
| heading11    = Toolstacks
 
| content12    = [[Choice_of_Toolstacks|Choice of ToolStacks]] 
 
| content13    = [[Xen_/_XCP_/_XCP_on_Linux_Overview|Xen Project or XCP?]]
 
 
| heading14    = Host Install
 
| content14    = [[Host OS Install Considerations]]
 
| content15    = [[:Category:Host_Install|Installing Hosts]]
 
| content16    = [[LiveCD|Live CDs, DVDs, etc.]]
 
 
| heading17    = Guest Install
 
| content17    = [[:Category:Guest Install|Installing Guests]]
 
| content18    = [[Guest VM Images]]
 
}}
 
 
== What is the Xen Project Hypervisor? ==
 
The Xen Project hypervisor is an open-source [[Wikipedia:Hypervisor|type-1 or bare-metal hypervisor]], which makes it possible to run many instances of an operating system, or indeed different operating systems, in parallel on a single machine (or host). The Xen Project hypervisor is the only type-1 hypervisor that is available as open source. It is used as the basis for a number of different commercial and open source applications, such as server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, and embedded and hardware appliances. The Xen Project hypervisor is powering the largest clouds in production today.
 
* Driver Isolation: The Xen Project hypervisor has the capability to allow the main device driver for a system to run inside of a virtual machine. If the driver crashes, or is compromised, the VM containing the driver can be rebooted and the driver restarted without affecting the rest of the system.
 
* Paravirtualization: Fully paravirtualized guests have been optimized to run as virtual machines. This allows the guests to run much faster than with hardware extensions (HVM). Additionally, the hypervisor can run on hardware that doesn't support virtualization extensions.
 
This page will explore the key aspects of the Xen Project architecture that a user needs to understand in order to make the best choices.
 
* Guest types: The Xen Project hypervisor can run fully virtualized (HVM) guests, or paravirtualized (PV) guests.
 
* Domain 0: The architecture employs a special domain called domain 0 which contains drivers for the hardware, as well as the toolstack to control VMs.
 
* Toolstacks: This section covers various toolstack front-ends available as part of the Xen Project stack and the implications of using each.
 
  
 
== Introduction to Xen Project Architecture ==
  
Below is a diagram of the Xen Project architecture. The Xen Project hypervisor runs directly on the hardware and is responsible for handling CPU, memory, timers and interrupts. It is the first program to run after exiting the bootloader. On top of the hypervisor run a number of virtual machines. A running instance of a virtual machine is called a '''domain''' or '''guest'''. A special domain, called domain 0, contains the drivers for all the devices in the system. Domain 0 also contains a control stack and other system services to manage a Xen-based system. Note that through [[Dom0 Disaggregation]] it is possible to run some of these services and device drivers in a dedicated VM; this is, however, not the normal system set-up.
  
<gallery widths="650px" heights="500px" mode=nolines>
File:Xen Arch Diagram v2.png
</gallery>
  
 
Components in detail:
* '''The Xen Project Hypervisor''' is an exceptionally lean (<65KSLOC on Arm and <300KSLOC on x86) software layer that runs directly on the hardware and is responsible for managing CPU, memory, and interrupts. It is the first program running after the bootloader exits. The hypervisor itself has no knowledge of I/O functions such as networking and storage.
* '''Guest Domains/Virtual Machines''' are virtualized environments, each running their own operating system and applications. The hypervisor supports several different virtualization modes, which are described in more detail below. Guest VMs are totally isolated from the hardware: in other words, they have no privilege to access hardware or I/O functionality. Thus, they are also called unprivileged domains (or DomU).
* '''The Control Domain (or Domain 0)''' is a specialized virtual machine that has special privileges, such as the capability to access the hardware directly. It handles all access to the system’s I/O functions and interacts with the other virtual machines. The Xen Project hypervisor is not usable without Domain 0, which is the first VM started by the system. In a standard set-up, Dom0 contains the following functions:
** '''System Services''': such as [[XenStore]]/[[XenBus]] (XS) for managing settings, the Toolstack (TS) exposing a user interface to a Xen-based system, and Device Emulation (DE), which is based on [[QEMU Upstream|QEMU]] in Xen-based systems.
** '''Native Device Drivers''': Dom0 is the source of physical device drivers, and thus of native hardware support, for a Xen system.
** '''Virtual Device Drivers''': Dom0 contains virtual device drivers (also called backends).
** '''Toolstack''': allows a user to manage virtual machine creation, destruction, and configuration. The toolstack exposes an interface that is driven either by a command line console, by a graphical interface or by a cloud orchestration stack such as OpenStack or CloudStack. Note that several different toolstacks can be used with Xen.
* '''Xen Project-enabled operating systems''': Domain 0 requires a Xen Project-enabled kernel. Paravirtualized guests require a PV-enabled kernel. Linux distributions based on kernels newer than Linux 3.0 are Xen Project-enabled and usually include packages that contain the hypervisor and tools (the default toolstack and console). All but legacy Linux kernels older than Linux 2.6.24 are PV-enabled and capable of running PV guests.
  
 
'''Also see:'''
 
* [[Xen_Project_Release_Features|Xen Project Release Features]]
 
* [[Dom0_Kernels_for_Xen|Xen Project-Enabled operating systems]]
* [[DomU_Support_for_Xen|PV-Enabled operating systems]]
 
* [[Xen_Kernel_Feature_Matrix|Availability of Xen Project Functionality on Linux Kernel (by version)]]
 
  
 
== Guest Types ==
The following diagram shows how guest types have evolved in Xen.
<gallery widths="750px" heights="500px" mode=nolines>
File:GuestModes.png
</gallery>
  
On ARM hosts, there is only one guest type, while on x86 hosts the hypervisor supports the following three types of guests:
  
* '''Paravirtualized Guests or PV Guests:''' PV is a software virtualization technique originally introduced by the Xen Project and later adopted by other virtualization platforms. PV does not require virtualization extensions from the host CPU, but requires a Xen-aware guest operating system. PV guests are primarily of use for legacy hardware and legacy guest images, and in special scenarios: for example special guest types, special workloads (e.g. unikernels), running Xen within another hypervisor without using nested hardware virtualization support, or use as a container host.
  
* '''HVM Guests:''' HVM guests use virtualization extensions from the host CPU to virtualize guests. HVM requires Intel VT or AMD-V hardware extensions. The Xen Project software uses QEMU device models to emulate PC hardware, including BIOS, IDE disk controller, VGA graphic adapter, USB controller, network adapter, etc. HVM guests use PV interfaces and drivers when they are available in the guest (which is usually the case on Linux and BSD guests). On Windows, drivers are available to download via [https://xenproject.org/downloads/windows-pv-drivers.html our download page]. When available, HVM guests will use hardware and software acceleration, such as the Local APIC, posted interrupts and Viridian (Hyper-V) enlightenments, and will make use of guest PV interfaces where these are faster. Typically, HVM is the best performing option for Linux, Windows and *BSD guests.
* '''PVH Guests:''' PVH guests are lightweight HVM-like guests that use virtualization extensions from the host CPU to virtualize guests. Unlike HVM guests, PVH guests do not require QEMU to emulate devices; they use PV drivers for I/O and native operating system interfaces for virtualized timers, virtualized interrupts and boot. PVH guests require a PVH-enabled guest operating system. This approach is similar to how Xen virtualizes ARM guests, with the exception that ARM CPUs provide hardware support for virtualized timers and interrupts.
<br>
 
{{WarningLeft|IMPORTANT: Guest types are [[Type_Config_Option|selected]] through the <code>builder</code> configuration file option for Xen 4.9 or before, and through the <code>type</code> configuration file option from Xen 4.10 onwards (also see the [http://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html#selecting_guest_type man pages]).}}
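As a minimal sketch of how this looks in practice with the xl toolstack (the chosen values here are purely illustrative):

```
# Xen 4.10 and later: the 'type' option selects the guest type
type = "pvh"        # one of "pv", "pvh" or "hvm"

# Xen 4.9 and earlier used the 'builder' option instead:
# builder = "hvm"   # "generic" (the default) selected a PV guest
```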
  
 
* [[Paravirtualization_(PV)|More Information...]]
 
* [http://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html#paravirtualised__pv__guest_specific_options PV Specific Config Options]
 
{{Anchor|HVM}}
  
 
=== HVM and its variants (x86) ===
 
Full Virtualization or Hardware-assisted Virtualization (HVM) uses virtualization extensions from the host CPU to virtualize guests. HVM requires Intel VT or AMD-V hardware extensions. The Xen Project software uses QEMU to emulate PC hardware, including BIOS, IDE disk controller, VGA graphic adapter, USB controller, network adapter, etc. Virtualization hardware extensions are used to boost performance of the emulation. Fully virtualized guests do not require any kernel support. This means that Windows operating systems can be used as Xen Project HVM guests. For guest operating systems without PV drivers, fully virtualized guests are usually slower than paravirtualized guests, because of the required emulation.
  
To address this, the Xen Project community has upstreamed PV drivers and interfaces to Linux and other open source operating systems. On operating systems with [[DomU_Support_for_Xen|Xen Support]], these drivers and software interfaces are automatically used when you select the HVM virtualization mode. On Windows, this requires that appropriate PV drivers are installed. You can find more information at
  
 
* [https://xenproject.org/downloads/windows-pv-drivers.html Windows PV Driver Downloads]
  
 
{{Anchor|PV-on-HVM}}
HVM mode, even with PV drivers, has a number of things that are unnecessarily inefficient. One example is the interrupt controllers: HVM mode provides the guest kernel with emulated interrupt controllers (APICs and IOAPICs). Each instruction that interacts with the APIC requires a call into Xen and a software instruction decode, and each interrupt delivered requires several of these emulations. Many of the paravirtualized interfaces for interrupts, timers, and so on are available to guests running in HVM mode; when available in the guest (which is true of most modern versions of Linux, *BSD and Windows), HVM will use these interfaces. This includes Viridian (i.e. Hyper-V) enlightenments, which ensure that Windows guests are aware they are virtualized, which speeds up Windows workloads running on Xen.
  
When HVM improvements were introduced, we used marketing labels to describe them. This seemed like a good strategy at the time, but has since created confusion amongst users. For example, we talked about PVHVM guests to describe the capability of HVM guests to use PV interfaces, even though PVHVM guests are just HVM guests. The following table gives an overview of marketing terms that were used to describe stages in the evolution of HVM, which you will occasionally find on the Xen wiki and in other documentation:
<gallery widths="750px" heights="500px" mode=nolines>
File:HVMModes.png
</gallery>
 
Compared to PV based virtualization, HVM is generally [[Xen_Linux_PV_on_HVM_drivers#Performance_Tradeoffs|faster]].
  
 
'''Also see:'''
 
* [http://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html#fully_virtualised__hvm__guest_specific_options HVM specific Config options]
* [https://wiki.xenproject.org/wiki/Xen_Linux_PV_on_HVM_drivers Information on PV-on-HVM drivers]
 
* [[Using_Xen_PV_Drivers_on_HVM_Guest|HowTo on PV-on-HVM drivers]]
 
 
 
  
 
{{Anchor|PVH}}
 
=== PVH (x86) ===
  
A key motivation behind PVH is to combine the best of PV and HVM mode and to simplify the interface between operating systems with Xen support and the Xen hypervisor. To do this, we had two options: start with a PV guest and implement a "lightweight" HVM wrapper around it (as we have done for ARM), or start with an HVM guest and remove functionality that is not needed. Based on our experience with the Xen ARM port, the first option looked more promising than the second. This is why we started developing an experimental virtualization mode called PVH (now called PVHv1), which was delivered in Xen Project 4.4 and 4.5. Unfortunately, the initial design did not simplify the operating system/hypervisor interface to the degree we hoped, so we started a project to evaluate the second option, which turned out to be significantly simpler. This led to PVHv2 (in the early days also called HVMLite). PVHv2 guests are lightweight HVM guests that use hardware virtualization support for memory and privileged instructions, PV drivers for I/O, and native operating system interfaces for everything else. PVHv2 does not use QEMU for device emulation, although QEMU can still be used for user-space back-ends (see PV I/O Support).
  
 
PVHv1 was replaced with PVHv2 in Xen 4.9, and PVHv2 became fully supported in Xen 4.10. PVH (v2) guests require Linux 4.11 or a newer kernel.
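As an illustration, a PVH guest can be booted directly from a kernel image with a configuration file along the following lines. This is only a sketch: the name, paths, kernel version and sizes are hypothetical examples, not defaults.

```
type    = "pvh"
name    = "pvh-example"
kernel  = "/boot/vmlinuz-4.11.0"     # PVH requires Linux 4.11 or newer
ramdisk = "/boot/initrd-4.11.0.img"
memory  = 1024
vcpus   = 2
disk    = [ '/dev/vg0/pvh-example,raw,xvda,rw' ]
vif     = [ 'bridge=xenbr0' ]
```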
  
 
=== Summary ===
The following diagram gives an overview of the various virtualization modes implemented in Xen, and shows which underlying virtualization technique is used for each mode.
  
<gallery widths="750px" heights="500px" mode=nolines>
File:XenModes.png
</gallery>
  
 
'''Footnotes:'''
 
#  ARM guests use EFI boot or Device Tree for embedded applications
  
From a user's perspective, the virtualization mode primarily has the following effects:
* Performance and memory consumption differ depending on the virtualization mode.
* A number of command line and config options depend on the virtualization mode.
* The boot path and guest install on HVM and PV/PVH are different: the workflow of installing guest operating systems in HVM guests is identical to installing them on real hardware, whereas installing guest OSes in PV/PVH guests differs. Please refer to the boot/install section of this document.
 
<br>
 
== Toolstacks and Management APIs ==
Xen Project software employs a number of different toolstacks. Each toolstack exposes an API, against which a different set of tools or user interfaces can be run. The figure below gives a very brief overview of the choices you have, which commercial products use which stack, and examples of hosting vendors using specific APIs.
<gallery widths="500px" heights="100px" mode=nolines>
File:ToolStacks.png|Boxes marked in blue are developed by the Xen Project
</gallery>
The Xen Project software can be run with the default toolstack, with [[Libvirt]] and with [[XAPI]]. The pairing of the Xen Project hypervisor and XAPI became known as [https://wiki.xenserver.org/index.php?title=Category:XCP XCP], which has been superseded by the open source [http://XenServer.org/ XenServer] and [https://xcp-ng.org/ XCP-ng] projects. The diagram above shows the various options: all of them have different trade-offs and are optimized for different use cases. In general, however, the further to the right of the picture you are, the more functionality will be on offer.
 
'''Which to Choose?'''<br>
The article [[Choice_of_Toolstacks|Choice of ToolStacks]] gives you an overview of the various options, with further links to tooling and stacks for each specific API exposed by a toolstack.
 
=== xl ===
For the remainder of this document, we will assume that you are using the default toolstack with its command line tool xl. These are described in the project's [[Xen Man Pages|Man Pages]]. xl has two main parts:
* The '''[https://xenbits.xen.org/docs/unstable/man/xl.1.html xl command line tool]''', which can be used to create, pause, and shut down domains, to list current domains, enable or pin vCPUs, and attach or detach virtual block devices. It is normally run as root in Dom0.
* Domain '''[https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html configuration files]''', which describe per-domain/VM configurations and are stored in the Dom0 filesystem.
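Putting the two parts together, a minimal domain configuration might look as follows. This is only a sketch: the name, paths and sizes are illustrative examples, and the file can be stored anywhere in the Dom0 filesystem (conventionally under /etc/xen).

```
name   = "guest1"
type   = "hvm"
memory = 2048
vcpus  = 2
disk   = [ '/var/lib/xen/images/guest1.img,raw,xvda,rw' ]
vif    = [ 'bridge=xenbr0' ]
```

Such a guest would then be managed from Dom0 with commands like <code>xl create /etc/xen/guest1.cfg</code>, <code>xl list</code> and <code>xl shutdown guest1</code>.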
 
<br>
 
== I/O Virtualization in Xen ==
The Xen Project Hypervisor supports the following techniques for I/O virtualization:
* '''The PV split driver model''': in this model, a virtual front-end device driver talks to a virtual back-end device driver, which in turn talks to the physical device via the (native) device driver. This enables multiple VMs to use the same hardware resource while re-using native hardware support. In a standard Xen configuration, (native) device drivers and the virtual back-end device drivers reside in Dom0. Xen also allows running device drivers in so-called [[Driver_Domain|driver domains]]. PV based I/O virtualization is the primary I/O virtualization method for disk and network, but there are a host of PV drivers for DRM, touchscreen, audio, … that have been developed for non-server uses of Xen. This model is independent of the virtualization mode used by Xen and merely depends on the presence of the relevant drivers. These are shipped with Linux and *BSD out of the box. For Windows, drivers have to be downloaded and installed into the guest OS.
* '''Device Emulation Based I/O''': HVM guests can use hardware devices emulated in software. In Xen, QEMU is used as the device emulator. As the performance overhead is high, device emulation is normally only used during system boot or installation, and for low-bandwidth devices.
* '''Passthrough''': allows you to give control of physical devices to guests. In other words, you can use [[PCI passthrough]] to assign a PCI device (NIC, disk controller, HBA, USB controller, firewire controller, sound card, etc.) to a virtual machine guest, giving it full and direct access to the PCI device. Xen supports a number of flavours of PCI passthrough, including VT-d passthrough and SR-IOV. However, note that using passthrough has security implications, which are well documented [https://docs.openstack.org/security-guide/compute/hardening-the-virtualization-layers.html here].
 +
 
 +
<br>
 +
=== PV I/O Support ===
 +
The following two diagrams shows two variants of the  PV split driver model as implemented in Xen:
 +
<gallery widths="500px" heights="300px" mode=nolines>
 +
File:IOVirt_PV.png|I/O Virtualization using the split driver model
 +
File:IOVirt_QEMU.png|I/O Virtualization using QEMU user space back-end drivers
 +
</gallery>
In the first model, a PV front-end driver talks directly to a PV back-end driver in the Dom0 kernel. This model is primarily used for plain networking and storage virtualization with LVM, iSCSI, DRBD, etc. Note that the above figures are simplified representations of what is happening in a Xen stack: even in the simplest cases, the Linux/BSD network/block stack sits between the back-end driver and the real hardware device.

In the second model, a QEMU user-space back-end interprets formatted file data (such as qcow2, vmdk, vdi, etc.) and presents a raw disk interface to its own PV back-end implementation.

From a user's or guest's perspective, there is no visible difference whether a back-end driver runs in user or kernel space. Xen will automatically choose the appropriate combination of front-end and back-end drivers based on the configuration options used.
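For example (device paths and volume names below are illustrative), a raw block device is served by the in-kernel back-end, while a qcow2 image goes via a QEMU user-space back-end; the xl disk syntax is the same in both cases:
<pre>
# Raw logical volume: served by the kernel back-end (blkback)
disk = [ '/dev/vg0/guest-disk,raw,xvda,rw' ]
# qcow2 image file: served by a QEMU user-space back-end
disk = [ '/var/lib/xen/images/guest.qcow2,qcow2,xvda,rw' ]
</pre>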
=== HVM I/O Support ===
The following diagram shows how device emulation is used both in isolation and together with PV I/O support:
<gallery widths="500px" heights="300px" mode=nolines>
File:IOVirt_HVM.png
</gallery>
This support is only available for HVM guests and is primarily used to emulate legacy devices that are needed during the boot process of a guest. It is also used for low-bandwidth devices, such as the serial console for HVM guests.
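A few common emulated devices can be enabled from the xl configuration of an HVM guest; the options below are a sketch only (see xl.cfg(5) for the authoritative list):
<pre>
type   = "hvm"
# Emulated VGA adapter and a serial console backed by a Dom0 pseudo-terminal
vga    = "stdvga"
serial = "pty"
</pre>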
=== Storage ===
The following picture gives a brief overview of the storage options with Xen:

<gallery widths="700px" heights="300px" mode=nolines>
File:Disk.png
</gallery>

Defining storage is relatively straightforward, but requires some planning when used at scale. This applies to Xen Project software as well as to other virtualization solutions. For more information see:
* [https://xenbits.xen.org/docs/unstable/man/xl-disk-configuration.5.html xl-disk-configuration(5)]
* [[Storage options]]
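As a sketch (the device path is an example), xl accepts both positional and keyword disk specifications; the two lines below describe the same disk:
<pre>
disk = [ '/dev/vg0/guest-disk,raw,xvda,rw' ]
disk = [ 'target=/dev/vg0/guest-disk, format=raw, vdev=xvda, access=rw' ]
</pre>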
=== Networking ===
With xl, the host networking configuration is not set up by the toolstack. In general, the xl toolstack follows the philosophy of not implementing functionality that is already available in the host OS: setting up networking and managing system services are examples. Thus, the host administrator needs to set up an appropriate network configuration in Dom0 using native Linux/BSD tools, following one of the common networking styles: bridging (most common), Open vSwitch, routing or NAT. This is usually done immediately '''after''' Xen has been installed. See the picture below:

<gallery widths="300px" heights="300px" mode=nolines>
File:Networking-Prep.png
</gallery>

To do this you may have to:
* '''Step 1:''' install bridging software packages, if not present
* '''Step 2:''' set up a network bridge (xenbr0) in Dom0. This is distro-specific: you can find a number of examples on how to do this [https://wiki.xenproject.org/wiki/Network_Configuration_Examples_(Xen_4.1%2B) here].
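As an illustration only (the interface names and the Debian-style /etc/network/interfaces syntax are assumptions; consult your distribution's documentation), a minimal bridge definition might look like:
<pre>
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
</pre>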
As we outlined earlier, a paravirtualised network device consists of a pair of network devices. The first of these (the frontend) resides in the guest domain, while the second (the backend) resides in the backend domain (typically Dom0).
* The frontend device appears much like any other physical Ethernet NIC in the guest domain. Typically under Linux it is bound to the xen-netfront driver and creates a device called '''ethN'''. Under NetBSD and FreeBSD the frontend devices are named xennetN and xnN respectively.
* The backend device is typically named such that it contains both the guest domain ID and the index of the device. Under Linux such devices are by default named '''vifDOMID.DEVID''' (e.g. vif5.0 is the first network device of the domain with ID 5), while under NetBSD xvifDOMID.DEVID is used.

<gallery widths="300px" heights="300px" mode=nolines>
File:Networking-Config.png
</gallery>
  
* '''Step 3:''' To connect these virtual network devices to the network, a '''vif''' entry is added for each backend device in the respective domain configuration file. This will look like this:
<pre>
vif = [ 'mac=…, bridge=xenbr0' ]
</pre>
By default, most Xen toolstacks will select a random MAC address. Depending on the toolstack this will either be static for the entire life time of the guest (e.g. Libvirt, XAPI) or will change each time the guest is started (e.g. XL). For the latter, it is best practice to assign a defined MAC address, such that IP addresses remain static when used with a DHCP server.
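The linked make-mac.sh script is one way to do this; as a sketch (the hashing scheme below is an assumption for illustration, not the wiki's script), a stable MAC within the Xen Project OUI 00:16:3e can be derived from the guest name:

```shell
# Sketch: derive a deterministic MAC from the guest name (the hash scheme is an
# example, not the wiki's make-mac.sh). 00:16:3e is the Xen Project OUI, so
# generated addresses will not clash with physical NICs.
guest="myguestname"
hex=$(printf '%s' "$guest" | md5sum | cut -c1-6)
mac="00:16:3e:$(printf '%s' "$hex" | sed 's/../&:/g; s/:$//')"
echo "$mac"
```

The resulting address can then be placed in the guest's vif line (mac=…) so that DHCP leases stay stable across guest restarts.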
  
Although the networking set-up in Xen may seem daunting, it is fairly straightforward. For more information see:
* [https://xenbits.xen.org/docs/unstable/man/xl-network-configuration.5.html xl-network-configuration(5)]
* [[Xen_Networking|Xen Networking Guide]]
* [[Scripts/make-mac.sh|Script to assign unique mac addresses to backend devices]]
* [[Scripts/centos-bridge-setup.sh|Script to set up bridges on CentOS 7]]
== Connecting to Guests: Console, ssh, VNC ==
The following diagram gives an overview of the different methods of connecting to Xen guests:
<gallery widths="500px" heights="300px" mode=nolines>
File:Console.png
</gallery>

'''Also See'''
* [[Xen_FAQ_Console|FAQ: Console]]
* [https://www.virtuatopia.com/index.php/Configuring_a_VNC_based_Graphical_Console_for_a_Xen_Paravirtualized_domainU_Guest Set up a VNC console for PV guests]
* [https://www.virtuatopia.com/index.php/Running_and_Connecting_to_VNC_Servers_on_a_Xen_Guest_(domainU)_System Running and Connecting to VNC Servers on a Xen Guest]
* [[Xen_Project_Beginners_Guide|Beginners Guide - includes VNC setup]]
== Boot options for Xen ==
When a VM is created, it does not contain a bootable operating system. A user has, in principle, the following primary options:
* Install the OS using normal operating system installation procedures: i.e. using an ISO-based installation medium, network installation (e.g. PXE) or similar.
* Create clones of a previously created VM instance. Note that pre-built [[Guest VM Images]] for Xen are available from a number of sources. Clones can be used to set up a network of identical virtual machines, and they can also be distributed to other destinations. Some Xen Project based products and distributions provide the capability to export and import VM images (e.g. any libvirt based Xen variant, XenServer and XCP-ng). xl does not provide such functionality; however, saving the master disk image and configuration file, and creating a clone using a file copy of the disk image and configuration file (which will need to be adapted), is sufficient.
* Some Xen based products (e.g. XenServer and XCP-ng) as well as the libvirt toolstack provide a mechanism called templates to streamline the process of creating VM clones. Templates are instances of a virtual machine that are designed to be used as a source for cloning. You can create multiple clones from a template and make minor modifications to each clone using the provided template tooling.
* In addition, there are provisioning tools such as [[Xen tools]].
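With xl, the clone-by-copy approach described above can be sketched as follows (all paths are stand-ins; on a real host you would copy the actual disk image and then boot the clone with xl create):

```shell
# Sketch of cloning an xl guest by copying files; paths are examples only.
workdir=$(mktemp -d)

# Stand-ins for the master guest's disk image and config file
: > "$workdir/master.img"
printf 'name = "master"\ndisk = [ "%s/master.img,raw,xvda,rw" ]\n' "$workdir" > "$workdir/master.cfg"

# 1. Copy the disk image
cp "$workdir/master.img" "$workdir/clone1.img"
# 2. Adapt the config file: at minimum the name and the disk path must differ
sed 's/master/clone1/g' "$workdir/master.cfg" > "$workdir/clone1.cfg"
cat "$workdir/clone1.cfg"
# 3. On a real Xen host, boot the clone:
# xl create "$workdir/clone1.cfg"
```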

<gallery widths="500px" heights="300px" mode=nolines>
File:Install.png
</gallery>

'''Also See'''
* [[Booting Overview]]
* [[Xenpvnetboot|PV Netboot]]
* [[PvGrub2]]
* [[PyGrub]]
== xl ==
=== Minimal Config file ===
The following code snippet shows a minimal xl [https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html configuration file]. Note that there are config file templates in /etc/xen.

<pre>
# Guest name and type, memory size and VCPUs
name = "myguestname"
type = "TYPE"
memory = MMM
vcpus = VVV

# Boot related information, unless type='hvm' … one of the following
# See https://wiki.xenproject.org/wiki/Booting_Overview
# for an explanation

# Netboot/Direct Kernel Boot/PV GRUB
kernel = "/…/vmlinuz"
ramdisk = "/…/initrd.gz"
extra = …
# To use PVGrub (if installed)
firmware = "pvgrub32|pvgrub64"
# Boot from disk
bootloader = "pygrub"

# Disk specifications
disk = [' ']
# Network specifications
vif = [' ']
</pre>
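Filled in, a configuration for a PV guest booting via pygrub might look like the sketch below (name, paths and sizes are hypothetical):
<pre>
name       = "demo-guest"   # hypothetical name
type       = "pv"
memory     = 1024           # in MiB
vcpus      = 2
bootloader = "pygrub"
disk       = [ '/var/lib/xen/images/demo-guest.img,raw,xvda,rw' ]
vif        = [ 'bridge=xenbr0' ]
</pre>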
=== Common [https://xenbits.xen.org/docs/unstable/man/xl.1.html xl commands] ===
'''VM control'''
* xl create [configfile] [OPTIONS]
* xl shutdown [OPTIONS] -a|domain-id
* xl destroy [OPTIONS] domain-id
* xl pause domain-id
* xl unpause domain-id

'''Information'''
* xl info [OPTIONS]
* xl list [OPTIONS] [domain-id ...]
* xl top

'''Debug'''
* xl dmesg [OPTIONS]
* xl -v …; logs are in /var/log/xen/xl-${DOMNAME}.log, /var/log/xen/qemu-dm-${DOMNAME}.log, …
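A typical guest life-cycle, sketched as a Dom0 shell session (the guest name and the output are illustrative, not captured from a real host):
<pre>
# xl create /etc/xen/demo-guest.cfg
# xl list
Name          ID   Mem VCPUs  State  Time(s)
Domain-0       0  2048     4  r-----   100.0
demo-guest     1  1024     2  -b----     1.4
# xl shutdown demo-guest
</pre>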
  
=== Xen filesystem locations ===
* /etc/xen : scripts, config file examples, your config files
* /var/log/xen : log files
* /usr/lib64/xen/bin : xen binaries
* /usr/lib64/xen/boot : xen firmware and boot related binaries
* /boot : boot and install images
  
 
== Getting Xen Project, Host and Guest Install ==
* Xen Project Hypervisor version that ships with the distro
* Whether you can get commercial support (if you need it)
If you use XenServer or XCP-ng, you typically will not be interfacing much with Dom0. That is unless you are a power user.

'''Also See'''
     It is re-used in a couple of places -->
{{Template:Distro Resources}}

== Getting Started Tutorial (running Xen within VirtualBox) ==
The following tutorial allows you to experiment with Xen running within VirtualBox:
* [http://xenbits.xen.org/people/larsk/xenexercise-ossna18-prep.pdf System Requirements and Setup]
* [http://xenbits.xen.org/people/larsk/xenexercise-ossna18-script.pdf Exercise Script]
* [http://xenbits.xen.org/people/larsk/xenexercise-ossna18.pdf Accompanying Presentation]
* [http://xenbits.xen.org/people/larsk/xenexercise-ossna18.zip Images and Files to download (7.7G)]
  
 
== Getting Help! ==
XenProject.org maintains a number of mailing lists for users of the hypervisor and other projects. English is used on these lists.
* [http://lists.xenproject.org/mailman/listinfo/xen-users xen-users] is the list for technical support and discussions for the Xen Project hypervisor. If you are not sure where your question belongs, start here!
 
=== IRC ===
[http://www.wikipedia.org/wiki/IRC Internet Relay Chat (IRC)] is a great way to connect with Xen Project community members in real time chat and for support.
* '''#xen''' is the channel for technical support and discussions for the Xen Project hypervisor. If you are not sure where your question belongs, start here!
* Check out our [http://xenproject.org/help/irc.html IRC page] if you are not familiar with IRC.

* [[:Category:Host Configuration]] contains documents related to bootloader, console and network configuration
* [[Guest VM Images]] provides pointers to various preinstalled guest images.
=== Release Information ===
 
=== Release Information ===
 
* [[:Category:Manual]] contains Xen Project manual documents
 
* [[:Category:Manual]] contains Xen Project manual documents
 
* [[:Category:Release Notes]] contain Xen Project release notes
 
* [[:Category:Release Notes]] contain Xen Project release notes
 
* [[Xen_Release_Features|Xen Release Features]] contains a matrix of features against Xen Project versions
 
* [[Xen_Release_Features|Xen Release Features]] contains a matrix of features against Xen Project versions
* [[:Category:Xen 4.4]] contain articles related to Xen Project 4.4 features, benchmarks, planning, etc.
 
  
 
=== Specialist Topics: Networking, Performance, Security, NUMA, VGA, ... ===
* [[:Category:Tutorial]] contains various Tutorials
  
<!--
[[Category:Xen]]
[[Category:Overview]]
{{Languages|Xen Overview}}
-->

Revision as of 16:16, 13 September 2018


Here are some of the Xen Project hypervisor's key features:

* '''Small footprint and interface''' (around 1MB in size). Because it uses a microkernel design, with a small memory footprint and limited interface to the guest, it is more robust and secure than other hypervisors.
* '''Operating system agnostic''': Most installations run with Linux as the main control stack (aka "domain 0"). But a number of other operating systems can be used instead, including NetBSD and OpenSolaris.
* '''Driver Isolation''': The Xen Project hypervisor has the capability to allow the main device driver for a system to run inside of a virtual machine. If the driver crashes, or is compromised, the VM containing the driver can be rebooted and the driver restarted without affecting the rest of the system.
* '''Paravirtualization''': Fully paravirtualized guests have been optimized to run as a virtual machine. This allows the guests to run much faster than with hardware extensions (HVM). Additionally, the hypervisor can run on hardware that doesn't support virtualization extensions.

This page will explore the key aspects of the Xen Project architecture that a user needs to understand in order to make the best choices.

== Introduction to Xen Project Architecture ==

Below is a diagram of the Xen Project architecture. The Xen Project hypervisor runs directly on the hardware and is responsible for handling CPU, memory, timers and interrupts. It is the first program running after exiting the bootloader. On top of the hypervisor run a number of virtual machines. A running instance of a virtual machine is called a domain or guest. A special domain, called domain 0, contains the drivers for all the devices in the system. Domain 0 also contains a control stack and other system services to manage a Xen based system. Note that through Dom0 Disaggregation it is possible to run some of these services and device drivers in a dedicated VM: this is however not the normal system set-up.

Components in detail:

* '''The Xen Project Hypervisor''' is an exceptionally lean (<65KSLOC on Arm and <300KSLOC on x86) software layer that runs directly on the hardware and is responsible for managing CPU, memory, and interrupts. It is the first program running after the bootloader exits. The hypervisor itself has no knowledge of I/O functions such as networking and storage.
* '''Guest Domains/Virtual Machines''' are virtualized environments, each running their own operating system and applications. The hypervisor supports several different virtualization modes, which are described in more detail below. Guest VMs are totally isolated from the hardware: in other words, they have no privilege to access hardware or I/O functionality. Thus, they are also called unprivileged domains (or DomU).
* '''The Control Domain (or Domain 0)''' is a specialized virtual machine that has special privileges, like the capability to access the hardware directly. It handles all access to the system's I/O functions and interacts with the other virtual machines. The Xen Project hypervisor is not usable without Domain 0, which is the first VM started by the system. In a standard set-up, Dom0 contains the following functions:
** System Services: such as XenStore/XenBus (XS) for managing settings, the Toolstack (TS) exposing a user interface to a Xen based system, and Device Emulation (DE), which is based on QEMU in Xen based systems
** Native Device Drivers: Dom0 is the source of physical device drivers and thus native hardware support for a Xen system
** Virtual Device Drivers: Dom0 contains virtual device drivers (also called backends)
** Toolstack: allows a user to manage virtual machine creation, destruction, and configuration. The toolstack exposes an interface that is either driven by a command line console, by a graphical interface or by a cloud orchestration stack such as OpenStack or CloudStack. Note that several different toolstacks can be used with Xen
* '''Xen Project-enabled operating systems''': Domain 0 requires a Xen Project-enabled kernel. Paravirtualized guests require a PV-enabled guest. Linux distributions that are based on Linux kernels newer than Linux 3.0 are Xen Project-enabled and usually include packages that contain the hypervisor and tools (the default toolstack and console). All but legacy Linux kernels older than Linux 2.6.24 are PV-enabled, capable of running PV guests.
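As a quick illustration (the output assumes a Linux Dom0 or guest actually running under Xen), a kernel's view of the hypervisor can be checked via sysfs:
<pre>
$ cat /sys/hypervisor/type
xen
</pre>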

Also see:

== Guest Types ==

The following diagram shows how guest types have evolved for Xen.

On ARM hosts, there is only one guest type, while on x86 hosts the hypervisor supports the following three types of guests:

* '''Paravirtualized Guests or PV Guests''': PV is a software virtualization technique originally introduced by the Xen Project and later adopted by other virtualization platforms. PV does not require virtualization extensions from the host CPU, but requires Xen-aware guest operating systems. PV guests are primarily of use for legacy hardware and legacy guest images, and in special scenarios, e.g. special guest types, special workloads (e.g. Unikernels), running Xen within another hypervisor without using nested hardware virtualization support, as container host, …
* '''HVM Guests''': HVM guests use virtualization extensions from the host CPU to virtualize guests. HVM requires Intel VT or AMD-V hardware extensions. The Xen Project software uses QEMU device models to emulate PC hardware, including BIOS, IDE disk controller, VGA graphics adapter, USB controller, network adapter, etc. HVM guests use PV interfaces and drivers when they are available in the guest (which is usually the case on Linux and BSD guests). On Windows, drivers are available to download via our download page. When available, HVM will use hardware and software acceleration, such as Local APIC, Posted Interrupts and Viridian (Hyper-V) enlightenments, and make use of guest PV interfaces where they are faster. Typically, HVM is the best performing option for Linux, Windows and *BSDs.
* '''PVH Guests''': PVH guests are lightweight HVM-like guests that use virtualization extensions from the host CPU to virtualize guests. Unlike HVM guests, PVH guests do not require QEMU to emulate devices; they use PV drivers for I/O and native operating system interfaces for virtualized timers, virtualized interrupts and boot. PVH guests require a PVH-enabled guest operating system. This approach is similar to how Xen virtualizes ARM guests, with the exception that ARM CPUs provide hardware support for virtualized timers and interrupts.

'''IMPORTANT:''' Guest types are selected through the builder configuration file option for Xen 4.9 or earlier, and through the type configuration file option from Xen 4.10 onwards (also see the man pages).
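In a domain configuration file this looks like the sketch below (only one of the two options applies, depending on the Xen version; the values shown are the commonly documented ones):
<pre>
# Xen 4.10 and later
type = "pvh"        # or "pv" / "hvm"

# Xen 4.9 and earlier (legacy syntax)
builder = "hvm"     # omit for a PV guest
</pre>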


=== PV (x86) ===

Paravirtualization (PV) is a virtualization technique originally introduced by the Xen Project, later adopted by other virtualization platforms. PV does not require virtualization extensions from the host CPU and is thus ideally suited to run on older hardware. However, paravirtualized guests require a PV-enabled kernel and PV drivers, so the guests are aware of the hypervisor and can run efficiently without emulation or virtual emulated hardware. PV-enabled kernels exist for Linux, NetBSD and FreeBSD. Linux kernels have been PV-enabled from 2.6.24 onwards, using the Linux pvops framework. In practice this means that PV will work with most Linux distributions (with the exception of very old versions of distros).

Also see:

=== HVM and its variants (x86) ===

Full Virtualization or Hardware-assisted Virtualization (HVM) uses virtualization extensions from the host CPU to virtualize guests. HVM requires Intel VT or AMD-V hardware extensions. The Xen Project software uses QEMU to emulate PC hardware, including BIOS, IDE disk controller, VGA graphics adapter, USB controller, network adapter, etc. Virtualization hardware extensions are used to boost performance of the emulation. Fully virtualized guests do not require any kernel support. This means that Windows operating systems can be used as a Xen Project HVM guest. For older host operating systems, fully virtualized guests are usually slower than paravirtualized guests, because of the required emulation.

To address this, the Xen Project community has upstreamed PV drivers and interfaces to Linux and other open source operating systems. On operating systems with Xen support, these drivers and software interfaces will be used automatically when you select the HVM virtualization mode. On Windows this requires that appropriate PV drivers are installed. You can find more information at

HVM mode, even with PV drivers, has a number of things that are unnecessarily inefficient. One example is the interrupt controllers: HVM mode provides the guest kernel with emulated interrupt controllers (APICs and IOAPICs). Each instruction that interacts with the APIC requires a call into Xen and a software instruction decode, and each interrupt delivered requires several of these emulations. Many of the paravirtualized interfaces for interrupts, timers, and so on are available for guests running in HVM mode: when available in the guest - which is true in most modern versions of Linux, *BSD and Windows - HVM will use these interfaces. This includes Viridian (i.e. Hyper-V) enlightenments, which ensure that Windows guests are aware they are virtualized, and which speed up Windows workloads running on Xen.

When HVM improvements were introduced, we used marketing labels to describe them. This seemed like a good strategy at the time, but has since created confusion amongst users. For example, we talked about PVHVM guests to describe the capability of HVM guests to use PV interfaces, even though PVHVM guests are just HVM guests. The following table gives an overview of marketing terms that were used to describe stages in the evolution of HVM, which you will find occasionally on the Xen wiki and in other documentation.

Compared to PV based virtualization, HVM is generally faster.

Also see:

=== PVH (x86) ===

A key motivation behind PVH is to combine the best of PV and HVM mode and to simplify the interface between operating systems with Xen support and the Xen hypervisor. To do this, we had two options: start with a PV guest and implement a "lightweight" HVM wrapper around it (as we have done for ARM), or start with an HVM guest and remove functionality that is not needed. Based on our experience with the Xen ARM port, the first option looked more promising than the second. This is why we started developing an experimental virtualization mode called PVH (now called PVHv1), which was delivered in Xen Project 4.4 and 4.5. Unfortunately, the initial design did not simplify the operating system - hypervisor interface to the degree we hoped: thus, we started a project to evaluate a second option, which was significantly simpler. This led to PVHv2 (which in the early days was also called HVMLite). PVHv2 guests are lightweight HVM guests which use hardware virtualization support for memory and privileged instructions, PV drivers for I/O and native operating system interfaces for everything else. PVHv2 does not use QEMU for device emulation, but QEMU can still be used for user-space backends (see PV I/O Support).

PVHv1 has been replaced with PVHv2 in Xen 4.9, and PVHv2 has been made fully supported in Xen 4.10. PVH (v2) requires a guest with a Linux 4.11 or newer kernel.

Also see:


=== ARM Hosts ===

On ARM hosts, there is only one virtualization mode, which does not use QEMU.

=== Summary ===

The following diagram gives an overview of the various virtualization modes implemented in Xen. It also shows which underlying virtualization technique is used for each virtualization mode.

Footnotes:
# Uses QEMU on older hardware and hardware acceleration on newer hardware – see 3)
# Always uses Event Channels
# Implemented in software with hardware accelerator support from IO APIC and posted interrupts
# PVH uses Direct Kernel Boot or PyGrub. EFI support is currently being developed.
# PV uses PvGrub for boot
# ARM guests use EFI boot or Device Tree for embedded applications

From a user's perspective, the virtualization mode primarily has the following effects:
* Performance and memory consumption will differ depending on the virtualization mode
* A number of command line and config options will depend on the virtualization mode
* The boot path and guest install on HVM and PV/PVH are different: the workflow of installing guest operating systems in HVM guests is identical to installing on real hardware, whereas installing guest OSes in PV/PVH guests differs. Please refer to the boot/install section of this document.

== Toolstack and Management APIs ==

Xen Project software employs a number of different toolstacks. Each toolstack exposes an API, against which a different set of tools or user interfaces can be run. The figure below gives a very brief overview of the choices you have, which commercial products use which stack, and examples of hosting vendors using specific APIs.

The Xen Project software can be run with the default toolstack, with Libvirt and with XAPI. The pairing of the Xen Project hypervisor and XAPI became known as XCP, which has been superseded by open source XenServer and XCP-ng. The diagram above shows the various options: all of them have different trade-offs and are optimized for different use-cases. However, in general, the further to the right of the picture you are, the more functionality will be on offer.

'''Which to Choose?'''
The article Choice of ToolStacks gives you an overview of the various options, with further links to tooling and stacks for a specific API exposed by that toolstack.

== xl ==

For the remainder of this document, we will assume that you are using the default toolstack with its command-line tool xl. These are described in the project's man pages. xl has two main parts:
* The xl command line tool, which can be used to create, pause, and shutdown domains, to list current domains, enable or pin VCPUs, and attach or detach virtual block devices. It is normally run as root in Dom0.
* Domain configuration files, which describe per domain/VM configurations and are stored in the Dom0 filesystem.


I/O Virtualization in Xen

The Xen Project Hypervisor supports the following techniques for I/O Virtualization:

  • The PV split driver model: in this model, a virtual front-end device driver talks to a virtual back-end device driver which in turn talks to the physical device via the (native) device driver. This enables multiple VMs to use the same Hardware resource while being able to re-use native Hardware support. In a standard Xen configuration, (native) device drivers and the virtual back-end device drivers reside in Dom0. Xen does allow running device drivers in so-called driver domains. PV based I/O virtualization is the primary I/O virtualization method for disk and network, but there are a host of PV drivers for DRM, Touchscreen, Audio, … that have been developed for non-server use of Xen. This model is independent of the virtualization mode used by Xen and merely depends on the presence of the relevant drivers. These are shipped with Linux and *BSD out-of-the box. For Windows drivers have to be downloaded and installed into the guest OS.
  • Device Emulation Based I/O: HVM guests emulate hardware devices in software. In Xen, QEMU is used as Device Emulator. As the performance overhead is high, Device Based emulation is normally only used during system boot or installation and for low-bandwidth devices.
  • Passthrough: allows you to give control of physical devices to guests. In other words, you can use PCI passthrough to assign a PCI device (NIC, disk controller, HBA, USB controller, firewire controller, sound card, etc) to a virtual machine guest, giving it full and direct access to the PCI device. Xen supports a number of flavours of PCI passthrough, including VT-d passthrough and SR-IOV. However note that using passthrough has security implications, which are well documented here.


PV I/O Support

The following two diagrams shows two variants of the PV split driver model as implemented in Xen:

In the first model, a PV front-end driver will talk directly to a PV back-end driver in the Dom0 kernel. This model is primarily used for plain networking and storage virtualization with LVM, iSCSI, DRBD, etc. Note that the above figures are simplified representations of what is happening in a Xen stack, as even in the simplest cases there will be the Linux/BSD network/block stack in between the back-end driver and the real hardware device.

In the second model, a QEMU user-space back-end interprets formatted file data (such as qcow2, vmdk, vdi, etc.) and presents a raw disk interface to its own PV back-end implementation.

From a user's or guest's perspective, there is no visible difference whether a back-end driver runs in user or kernel space. Xen automatically chooses the appropriate combination of front-end and back-end drivers based on the configuration options used.
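As an illustration (paths are examples), the disk specification in the guest configuration determines which back-end serves it: a raw block device is typically handled by the in-kernel block back-end, while an image format such as qcow2 is interpreted by a QEMU user-space back-end:

```
# Raw LVM volume: typically served by the in-kernel block back-end
disk = [ 'phy:/dev/vg0/guest-disk,xvda,w' ]

# qcow2 image file: interpreted by a QEMU user-space back-end
disk = [ '/var/lib/xen/images/guest.qcow2,qcow2,xvda,w' ]
```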

HVM I/O Support

The following diagram shows how device emulation is used in isolation and together with PV I/O support.

This support is only available for HVM guests and is primarily used to emulate legacy devices that are needed during the boot process of a guest. It is also used for low-bandwidth devices, such as the serial console.

Storage

The following picture gives a brief overview of the storage options with Xen.

Defining storage is relatively straightforward, but requires some planning when used at scale. This applies to Xen Project Software as well as other virtualization solutions. For more information see:

Networking

With xl, host networking is not configured by the toolstack. In general, the xl toolstack follows the philosophy of not re-implementing functionality that is available in the host OS: setting up networking and managing system services are examples. Thus, the host administrator needs to set up an appropriate network configuration in Dom0 using native Linux/BSD tools, following one of the common networking styles: bridging (most common), Open vSwitch, routing, or NAT. This is usually done immediately after Xen has been installed. See the picture below:

To do this you may have to:

  • Step 1: install bridging software packages, if not present
  • Step 2: set up a network bridge (xenbr0) in Dom0. This is distro specific: you can find a number of examples on how to do this here.

As we outlined earlier, a paravirtualised network device consists of a pair of network devices. The first of these (the frontend) will reside in the guest domain while the second (the backend) will reside in the backend domain (typically Dom0).

  • The frontend devices appear much like any other physical Ethernet NIC in the guest domain. Typically under Linux it is bound to the xen-netfront driver and creates a device called ethN. Under NetBSD and FreeBSD the frontend devices are named xennetN and xnN respectively.
  • The backend device is typically named such that it contains both the guest domain ID and the index of the device. Under Linux such devices are by default named vifDOMID.DEVID while under NetBSD xvifDOMID.DEVID is used.
  • Step 3: To connect these virtual network devices to the network, a vif entry is added for each backend device in the respective domain configuration file.

This will look like this:

vif = [ 'mac=…, bridge=xenbr0' ]

By default, most Xen toolstacks will select a random MAC address. Depending on the toolstack, this will either be static for the entire lifetime of the guest (e.g. libvirt, XAPI) or will change each time the guest is started (e.g. xl). For the latter, it is best practice to assign a fixed MAC address, so that IP addresses remain static when used with a DHCP server.
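Putting steps 1 and 2 together, a minimal bridged set-up on a Linux Dom0 might look like the following sketch. It assumes iproute2; interface names are examples, and most distros do this via persistent network configuration files instead:

```
# Create a bridge named xenbr0 and attach the physical NIC to it
ip link add name xenbr0 type bridge
ip link set eth0 master xenbr0
ip link set xenbr0 up
```

In the guest configuration, a fixed MAC address from the Xen Project OUI (00:16:3e) can then be assigned, e.g. vif = [ 'mac=00:16:3e:aa:bb:cc, bridge=xenbr0' ] (the last three octets here are hypothetical).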

Although the networking set-up in Xen may seem daunting, it is fairly straightforward. For more information see:

Connecting to Guests: Console, ssh, VNC

The following diagram gives an overview of the different methods of connecting to Xen guests

Also See

Boot options for Xen

When a VM is created, it does not contain a bootable operating system. A user has, in principle, the following primary options:

  • Install the OS using normal operating-system installation procedures: i.e. using an ISO-based installation medium, network installation (e.g. PXE), or similar.
  • Create clones of a previously created VM instance. Note that pre-built guest VM images for Xen are available from a number of sources. Clones can be used to set up a network of identical virtual machines, and they can also be distributed to other destinations. Some Xen Project-based products and distributions provide the capability to export and import VM images (e.g. any libvirt-based Xen variant, XenServer and XCP-ng). xl does not provide such functionality; however, it is sufficient to save the master disk image and configuration file, and to create a clone from a file copy of the disk image together with a copy of the configuration file (which will need to be adapted).
  • Some Xen based products (e.g. XenServer and XCP-ng) as well as the libvirt toolstack provide a mechanism called templates to streamline the process of creating VM clones. Templates are instances of a virtual machine that are designed to be used as a source for cloning. You can create multiple clones from a template and make minor modifications to each clone using the provided template tooling.
  • In addition, there are provisioning tools such as Xen tools

Also See

xl

Minimal Config file

The following code snippet shows a minimal xl configuration file. Note that there are config file templates in /etc/xen.

# Guest name and type, memory size and VCPUs
name = "myguestname"
type = "TYPE"
memory = MMM
vcpus = VVV

# Boot related information; unless type='hvm', use one of the following.
# See https://wiki.xenproject.org/wiki/Booting_Overview
# for an explanation

# Netboot/Direct Kernel Boot/PV GRUB
kernel = "/…/vmlinuz"
ramdisk = "/…/initrd.gz"
extra = …
# To use PVGrub (if installed)
firmware = "pvgrub32|pvgrub64"
# Boot from disk
bootloader = "pygrub"

# Disk specifications
disk = [' ']
# Network specifications
vif = [' ']
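As a concrete illustration of the template, a minimal PV guest that boots from its disk via pygrub might look like this (all values are hypothetical):

```
name = "debian-guest"
type = "pv"
memory = 2048
vcpus = 2
bootloader = "pygrub"
disk = [ 'phy:/dev/vg0/debian-guest,xvda,w' ]
vif = [ 'bridge=xenbr0' ]
```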

Common xl commands

VM control

  • xl create [configfile] [OPTIONS]
  • xl shutdown [OPTIONS] -a|domain-id
  • xl destroy [OPTIONS] domain-id
  • xl pause domain-id
  • xl unpause domain-id
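A typical guest lifecycle using the commands above might look like this (the guest name and config file path are examples):

```
xl create /etc/xen/guest1.cfg   # start the guest defined in the config file
xl list                         # guest1 now appears with an ID and state
xl pause guest1                 # freeze the guest's VCPUs
xl unpause guest1               # resume execution
xl shutdown guest1              # ask the guest OS to shut down cleanly
xl destroy guest1               # immediate hard power-off, as a last resort
```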

Information

  • xl info [OPTIONS]
  • xl list [OPTIONS] [domain-id ...]
  • xl top

Debug

  • xl dmesg [OPTIONS]
  • xl -v … logs from /var/log/xen/xl-${DOMNAME}.log, /var/log/xen/qemu-dm-${DOMNAME}.log, …

Xen filesystem locations

  • /etc/xen : scripts, config file examples, your config files
  • /var/log/xen : log files
  • /usr/lib64/xen/bin : xen binaries
  • /usr/lib64/xen/boot : xen firmware and boot related binaries
  • /boot : boot and install images

Getting Xen Project, Host and Guest Install

Choice of Control Domain (Dom0)

As stated earlier, the Xen Project hypervisor requires a kernel as control domain. Most Xen Project-enabled kernels are very similar from the perspective of the hypervisor itself. Choosing the right Dom0 for you comes down to:

  • How familiar you are with a specific distro (e.g. packaging system, etc.)
  • Xen Project Hypervisor version that ships with the distro
  • Whether you can get commercial support (if you need it)

If you use XenServer or XCP-ng, you typically will not be interfacing much with Dom0, unless you are a power user.

Also See

Getting Xen Project software

The Xen Project hypervisor is available as source distribution from XenProject.org. However, you can get recent binaries as packages from many Linux and Unix distributions, both open source and commercial.

Xen Project Source Distributions

The Xen Project community delivers the hypervisor as a source distribution, following the delivery model of the Linux kernel. The software is released approximately once every 6-9 months, with several update releases per year containing security fixes and critical bug fixes. To build Xen Project software from source, you can either download a source release or fetch the source tree from the source repository. Each source release and the source tree contain a README file in the root directory with detailed build instructions for the hypervisor. The release notes for each release also contain build instructions, as does the Compiling Xen Project software page.


Xen Project software in Linux/Unix Distributions

Most Linux and many Unix distributions contain pre-built binaries of the Xen Project hypervisor that can be downloaded and installed through the native package management system. If your Linux/Unix distribution includes the hypervisor and a Xen Project-enabled kernel, we recommend using them: you will benefit from ease of install, good integration with the distribution, support from the distribution, provision of security updates, etc. Installing the hypervisor in a distribution typically requires the following basic steps: a) install your favourite distribution, b) install the Xen Project package(s) or meta-package, c) check boot settings, and d) reboot. After the reboot, your system will run your favourite Linux/Unix distribution as Control Domain on top of the hypervisor.
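On a Debian-based Dom0, for example, the install and reboot steps might look like the following sketch (the meta-package name is Debian's; other distributions use different package names and bootloader tooling):

```
# Install the hypervisor, toolstack and a Xen-enabled kernel
apt-get install xen-system-amd64
# Debian's packaging normally arranges for the bootloader to boot
# Xen first; verify the boot settings, then reboot into the hypervisor
reboot
```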

Host and Guest Install

The following table contains a list of Xen Project resources for various Linux and Unix distributions.


Distro Main website Description Resources
Arch Linux archlinux.org Arch Linux is a lightweight and flexible Linux® distribution that tries to “keep it simple”.


Alpine Linux alpinelinux.org A security-oriented, lightweight Linux distribution based on musl libc and busybox.


CentOS 5 centos.org CentOS is an Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor. CentOS conforms fully with the upstream vendor's redistribution policy and aims to be 100% binary compatible. (CentOS mainly changes packages to remove upstream vendor branding and artwork.) CentOS is free.


CentOS 6 centos.org CentOS is an Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor. CentOS conforms fully with the upstream vendor's redistribution policy and aims to be 100% binary compatible. (CentOS mainly changes packages to remove upstream vendor branding and artwork.) CentOS is free.

CentOS 6.0 - 6.3 does not include Xen Project software, but you can get support from various sources. The following articles may be useful

CentOS 6.4+ does include Xen Project support and can be used as a dom0 and domU out-of-the-box, thanks to the Xen4CentOS project

Xen packages in CentOS 6 and commercial support are also available from "Xen made easy!"


Debian debian.org The Debian project produces an entirely free operating system that empowers its users to be in control of the software running their computers.


Fedora fedoraproject.org Fedora is an RPM-based distribution with a 6-month release cycle, and is the community-supported base of RHEL releases.


FreeBSD freebsd.org FreeBSD® is an advanced operating system for modern server, desktop, and embedded computer platforms.


Finnix finnix.org Finnix is a sysadmin utility Linux LiveCD, and includes out-of-the-box Xen Project guest support.


Gentoo Linux gentoo.org Gentoo Linux is a special flavor of Linux that can be automatically optimized and customized for just about any application or need. Extreme performance, configurability and a top-notch user and developer community are all hallmarks of the Gentoo experience.


NetBSD netbsd.org NetBSD is a free, fast, secure, and highly portable Unix-like open source operating system.


Oracle Linux oracle.com Oracle Corporation distributes Oracle Linux with the Unbreakable Enterprise Kernel. Oracle states that the Unbreakable Enterprise Kernel is compatible with RHEL, Oracle middleware, and 3rd-party RHEL-certified applications. Oracle Linux supports KVM, Xen Project, and Oracle VM Server for x86, which is based on Xen.


openSuSE opensuse.org openSuSE is a free, Linux-based operating system for your PC, laptop or server.


Red Hat Enterprise Linux (RHEL) 5.x redhat.com RHEL 5.x includes the Xen Project 3.4 Hypervisor as well as a Xen Project-enabled kernel, and can be used as a dom0 and domU


Red Hat Enterprise Linux (RHEL) 6.x redhat.com RHEL 6.x does not include the Xen Project Hypervisor. But, a Dom0 capable kernel, Xen Project hypervisor, and libvirt packages for use with RedHat Enterprise Linux 6 and its derivatives are available from either the Xen4CentOS project or the "Xen made easy!" effort.


Ubuntu ubuntu.com Fast, secure and stylishly simple, the Ubuntu operating system is used by 20 million people worldwide every day.



Getting Started Tutorial (running Xen within VirtualBox)

The following tutorial allows you to experiment with Xen running within VirtualBox.

Getting Help!

The Xen Project community contains many helpful and friendly people. We are here for you. There are several ways to get help and keep on top of what is going on!

  • Read News!
  • Read Documentation!
  • Contact other users to ask questions and discuss the hypervisor or other Xen Project-related projects

News Sources

Documentation

Documentation for projects hosted on XenProject.org is available on the Xen Project Wiki. Our wiki is active and community maintained. It contains a lot of useful information and uses categories extensively to make it easy to find information. You may also want to check:

Mailing Lists

Search Mailing Lists

All XenProject.org mailing lists are archived using the MarkMail system at xen.markmail.org. Before you ask a question, it is worth checking whether somebody else has asked it before.

Main Mailing Lists

XenProject.org maintains a number of mailing lists for users of the hypervisor and other projects. The language used on these lists is English.

  • xen-users is the list for technical support and discussions for the Xen Project hypervisor. If you are not sure where your question belongs start here!

IRC

Internet Relay Chat (IRC) is a great way to connect with Xen Project community members in real time chat and for support.

  • #xen is the channel for technical support and discussions for the Xen Project hypervisor. If you are not sure where your question belongs start here!
  • Check out our IRC page if you are not familiar with IRC.

Other places

There are a number of other places, where you can get help on Xen Project software. For example:

Raising Bugs

If you find a bug, you can report it against the software. Before you raise a bug, please read Reporting Bugs!

Roadmaps, Release Cadence, Maintenance Releases

The Xen Project community releases the Xen Project Hypervisor with a release cadence of 6 months (in June and December of each year). Roadmap information is tracked at Xen Roadmap. You can find information on the maintenance release cycle at Xen Project Maintenance Releases.

Also See

Installation

Release Information

Specialist Topics: Networking, Performance, Security, NUMA, VGA, ...

Specialized Xen Project topics:

  • Category:Networking contains articles related to networking
  • Category:NUMA contains all articles related to the running (or to improving the support for doing so) of the Xen Project Hypervisor on NUMA architectures
  • Category:Performance contains documents, tuning instructions and benchmarks related to the performance of Xen Project software
  • Category:Security contains documents related to Xen Project security
  • Category:VGA contains documents related to VGA, VT-d, GPU passthrough, etc.

FAQs, HowTos, ...