Xen Project 4.13 Feature List
= Major Features =

== SECURITY ==

=== Core Scheduling (contributed by SUSE) ===

Core scheduling is a newly introduced, experimental feature that allows Xen to group virtual CPUs into virtual cores and schedule these on physical cores. Because switching between virtual cores on a physical core is synchronized, virtual CPUs of different virtual cores never run at the same time on a single physical core. Core scheduling has been implemented in a scheduler-agnostic fashion, which means it works with all Xen Project schedulers.

Before the introduction of core scheduling, virtual CPUs could be scheduled on any thread or core of a CPU, making workloads vulnerable to side-channel attacks that leverage information leaks from shared core resources such as caches and micro-architectural buffers, for which the only existing mitigation is to disable hyper-threading.

Core scheduling is a necessary, but not sufficient, milestone towards enabling users to re-enable hyper-threading and reclaim its performance benefits while reducing or eliminating the risk of hardware security issues. In conjunction with the secret-free Xen Project hypervisor currently being worked on by Amazon, and the synchronized scheduling currently being investigated by SUSE and Citrix, it will be possible in the next release(s) to provide better trade-offs between security and performance.

Initial benchmarks have shown that for many workloads, core scheduling alone allows reclaiming the lost performance. We encourage our users to test core scheduling so that we can tune it for future releases.

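For users who want to try it, core scheduling is selected at boot time through Xen's command line. The fragment below is an illustrative sketch of a GRUB stanza; `sched-gran=core` is the parameter documented for 4.13, while the image path is a placeholder for your installation:

```shell
# Illustrative GRUB stanza: boot Xen with core-granularity scheduling.
# sched-gran accepts cpu (the default), core and socket.
multiboot2 /boot/xen.gz sched-gran=core
```

With `sched-gran=core`, only virtual CPUs belonging to the same virtual core ever share a physical core, which is what allows hyper-threading to stay enabled.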
=== Branch hardening to mitigate against Spectre v1 (contributed by Citrix) ===

In Xen 4.13 we have made the hypervisor more resilient to Spectre v1 attacks through branch hardening. This removes a number of potential gadgets, reducing the attack surface exploitable via Spectre v1.

== SERVICEABILITY ==

=== Late uCode loading (contributed by Intel) ===

uCode (microcode) updates typically contain mitigations for hardware vulnerabilities and are usually applied during system initialization or kernel boot, which requires a reboot and implies long down-time. Xen 4.13 introduces late uCode loading, in which the Xen hypervisor deploys a uCode update with no need to reboot the system.

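As an illustrative sketch, a late update can be triggered from dom0 with the xen-ucode utility added alongside this feature; the blob path below is a placeholder and depends on your CPU and distribution:

```shell
# Run from dom0: apply a microcode blob to all CPUs without a reboot.
# The path shown is purely illustrative; use the file matching your CPU.
xen-ucode /lib/firmware/intel-ucode/06-55-04
```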
=== Improved live-patching build tools (contributed by AWS) ===

Numerous improvements to the live-patch build tools have been added, such as the capability to patch inline assembly, improvements to stacked modules, support for module parameters, additional hooks, replicable apply/revert actions, extended Python bindings for automation, and the concept of expectations for additional validation of live patches.

== EMBEDDED AND SAFETY-CRITICAL APPLICATIONS ==

=== OP-TEE support (contributed by EPAM) ===

While Xen on x86 has supported guest Trusted Execution Environments via TXT and TPM access for quite some time, Xen on Arm did not allow TrustZone access for unprivileged guests. Xen 4.13 adds support for all guests to concurrently run Trusted Applications in Arm's TrustZone without interfering with one another. The required changes were also released in Linux kernel 5.2 and OP-TEE 3.6. This feature was tested with Android P running as a DomU Xen guest with experimental Android HALs (Keymaster, Gatekeeper) on a Renesas R-Car H3 SoC.

Xen OP-TEE support is fully functional in Xen 4.13 (some improvements are still to be upstreamed), but there is still work to be done in OP-TEE itself. The most notable missing feature is the sharing of hardware (such as crypto accelerators or RPMB) between VM contexts in OP-TEE.

This feature was developed in cooperation with the Linaro Secure Working Group, which maintains OP-TEE. To use this feature, you need to build and install OP-TEE with virtualization support, as described at https://optee.readthedocs.io/en/latest/architecture/virtualization.html. You also need to build Xen with OP-TEE mediator support (this feature is in "Technology Preview" state and is not enabled by default).

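As a sketch of the Xen-side build step, under the assumption that the mediator is gated by a Kconfig option in the 4.13 Arm tree (verify the exact option name against your sources):

```shell
# Build Xen for Arm64 with the OP-TEE mediator enabled (Technology
# Preview, off by default). Select the OP-TEE mediator option in the
# Kconfig UI, then rebuild the hypervisor:
make -C xen menuconfig    # enable the "OP-TEE mediator" option
make xen
```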
=== Renesas R-Car IPMMU-VMSA driver (contributed by EPAM) ===

Modern automotive computing systems use hypervisors to centralize vehicle functions or to isolate them in mixed-criticality systems. In both cases, peripheral access from guests (e.g. for a shared GPU) must be protected with IOMMUs, improving overall system performance and security. Xen 4.13 extends its support for automotive processors by adding a driver for the VMSA-compatible IOMMU of Renesas Electronics' Arm-based 3rd generation R-Car systems-on-chip. This is the first IOMMU in Xen that supports functional safety, which is an important milestone towards making Xen compliant with ASIL-B requirements.

The IOMMU subsystem on Arm was updated to support the generic IOMMU device tree bindings (https://www.kernel.org/doc/Documentation/devicetree/bindings/iommu/iommu.txt). A generic way was added to register a DT device (one that sits behind an IOMMU) using these bindings before assigning that device to a domain. While the newly added IPMMU driver supports the generic IOMMU DT bindings, Arm's SMMU driver does not yet; this is still to be done.

Renesas IPMMU-VMSA support is considered a Technology Preview feature for now and is expected to work only with the newest R-Car Gen3 SoC revisions (H3 ES3.0, M3-W+, etc.).

=== Dom0-less passthrough and ImageBuilder (contributed by Xilinx) ===

Dom0-less support in Xen 4.13 has been extended to include device assignment. It is now possible to assign devices to dom0-less VMs, which is essential because dom0-less VMs don't have access to any PV devices. With dom0-less device assignment, a user can set up a purely static partitioning system where each VM has access to a portion of the devices on the board.

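For illustration, a device to be assigned is described to Xen with a partial device tree. The snippet below is a hypothetical example: the node name, addresses, compatible string and interrupt are placeholders, and the `xen,reg`/`xen,path` properties follow the bindings described in docs/misc/arm/passthrough.txt in the Xen tree:

```
/dts-v1/;

/ {
    #address-cells = <0x2>;
    #size-cells = <0x1>;

    passthrough {
        compatible = "simple-bus";
        ranges;

        /* Hypothetical UART handed over to the dom0-less VM. */
        uart@ff000000 {
            compatible = "vendor,example-uart";    /* placeholder */
            reg = <0x0 0xff000000 0x1000>;
            interrupts = <0 21 4>;
            /* Guest physical base, size, and host physical base. */
            xen,reg = <0x0 0xff000000 0x1000 0x0 0xff000000>;
            xen,path = "/soc/uart@ff000000";       /* placeholder */
        };
    };
};
```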
In addition, a new tool called ImageBuilder (see https://wiki.xenproject.org/wiki/ImageBuilder and https://gitlab.com/ViryaOS/imagebuilder) has been added that can be used to automate building Xen dom0-less configurations for U-Boot. The tool takes care of all the load address generation and device tree editing, making dom0-less Xen much easier to use.

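For illustration, ImageBuilder is driven by a small shell-style config file. The example below is hypothetical: the variable names follow the ImageBuilder wiki page linked above, while all paths, sizes and addresses are placeholders:

```shell
# Hypothetical ImageBuilder config: one dom0-less domU.
MEMORY_START="0x0"
MEMORY_END="0x80000000"
DEVICE_TREE="board.dtb"
XEN="xen"
NUM_DOMUS=1
DOMU_KERNEL[0]="Image-domu"
DOMU_RAMDISK[0]="ramdisk-domu.cpio"
```

The U-Boot boot script is then generated with something like `bash ./scripts/uboot-script-gen -c config -d .`, which computes the load addresses and edits the device tree accordingly.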
== Support for new Hardware ==

Xen 4.13 brings support for a variety of hardware platforms. Most notably, it introduces support for 2nd Generation AMD EPYC™ processors: the latest and extremely popular AMD EPYC CPUs with exceptional performance-per-dollar, connectivity options, and security features. In addition, Xen 4.13 also supports the Hygon Dhyana 18h processor family, the Raspberry Pi 4, and Intel AVX-512.
Latest revision as of 10:34, 16 December 2019


= Change Logs =

* Change logs for Xen 4.13.0 can be found at [http://xenbits.xenproject.org/gitweb/?p=xen.git;a=shortlog;h=refs/tags/RELEASE-4.13.0 xenbits.xenproject.org]
* Change logs for QEMU upstream for Xen 4.13.0
* Change logs for QEMU traditional for Xen 4.13.0

= XSA Patch Level =

Xen 4.13.0 is up-to-date up to and including XSA-311. For more information see xenbits.xenproject.org/xsa