Xen 4.7 RC test instructions
If you come to this page before or after the Test Day is completed, your testing is still valuable, and you can use the information on this page to test, and to post any bugs and test reports to xen-devel@. If this page is more than two weeks old when you arrive here, please check the current schedule and see if a similar but more recent Test Day is planned or has already happened.
- 1 What needs to be tested
- 2 Installing
- 3 Known issues
- 4 Test instructions
- 4.1 General
- 4.2 Specific ARM Test Instructions
- 4.3 Specific x86 Test Instructions
- 4.4 RC specific things to test
- 5 Reporting Bugs (& Issues)
- 6 Reporting success
What needs to be tested
- Making sure that Xen 4.7 compiles and installs properly on different software configurations; particularly on distros
- Making sure that Xen 4.7, along with appropriately up-to-date kernels, works on different hardware.
For more ideas about what to test, please see Testing Xen.
ARM Smoke Testing
If you use ARM hardware that is not widely available or not rackable (and thus not part of our automated test suite), please check out Xen ARM Manual Smoke Test. Helping to manually test ARM boards (which only takes a few minutes) helps ensure that Xen 4.7 will work on the board that you use. If you want to see which boards need testing, check Xen ARM Manual Smoke Test/Results.
Installing
Getting an RC
For the expressions/examples below, set the following bash/sh/... variable to the release candidate number (e.g. one of rc1, rc2, ...):
RC="<release candidate number>" # rc1, rc2 ...
With a recent enough git, just pull the proper tag (4.7.0-$RC) from the main repo directly:
git clone -b 4.7.0-$RC git://xenbits.xen.org/xen.git
With an older git version (and/or if that does not work, e.g., complaining with a message like: Remote branch 4.7.0-$RC not found in upstream origin, using HEAD instead), do the following:
git clone git://xenbits.xen.org/xen.git ; cd xen ; git checkout 4.7.0-$RC
Known issues
- XSM denials with 4.7.0 RC1 - fixed in RC2
- Regression in Xen 4.7-rc1 - can't boot HVM guests with more than 64 vCPUs (this is caused by a bug in the Linux kernel, not a bug in Xen)
Test instructions
General
- Remove any old versions of the Xen toolstack and userspace binaries (including qemu).
- Remove any Xen-related udev files under /etc because Xen 4.7 doesn't use those anymore.
- Download and install the most recent Xen 4.7 RC, as described above (a minimal build-and-install sketch is shown after this list). Make sure to check the INSTALL file for changes in required development libraries and procedures.
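If you are building from the git checkout above, a minimal build-and-install sequence looks roughly like the following; this is only a sketch, and the authoritative steps, targets and prerequisites are in the README and INSTALL files in the source tree:
cd xen                 # the directory created by the git clone above
./configure
make dist              # build the hypervisor, tools and stubdomains
sudo make install
sudo ldconfig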
Once you have the Xen 4.7 RC installed, check that you can install a guest and use it in the ways you normally would, i.e. that your existing guest configurations, scripts, etc. still work.
USB Support for xl
In Xen 4.7, xl introduces PVUSB support, adding the following commands:
usbctrl-attach, usbctrl-detach, usbdev-attach, usbdev-detach, usb-list
These correspond to the following guest configuration options:
usbctrl=[ "USBCTRL_SPEC_STRING", "USBCTRL_SPEC_STRING", ... ]
usbdev=[ "USB_SPEC_STRING", "USB_SPEC_STRING", ... ]
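As a quick smoke test of the new commands, something along the following lines should work; the guest name (vm1), the controller settings and the host bus/device numbers are purely illustrative, and the exact argument syntax should be double-checked against the xl man page:
# Create a PVUSB controller in the guest:
xl usbctrl-attach vm1 version=2 ports=4
# Pass through the host USB device at bus 1, device address 3:
xl usbdev-attach vm1 hostbus=1 hostaddr=3
# Check that the controller and device show up:
xl usb-list vm1
# Detach the device again (controller 0, port 1 in this example):
xl usbdev-detach vm1 0 1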
RTDS scheduler improvements
The RTDS scheduler was improved in the following ways:
- The RTDS scheduler has been changed from a quantum-driven model to an event-driven model, which will not invoke the scheduler unnecessarily: if you use this scheduler, you may want to test your workload using the RC and check whether there are any unexpected side effects
- Support to get/set RTDS scheduling parameters on a per-VCPU basis has been added to libxl and xl
So, for instance:
- to see the scheduling parameters of all VCPUs of a VM use
xl sched-rtds -d vm1 -v all
Name                ID  VCPU    Period    Budget
vm1                  1     0       300       150
vm1                  1     1       400       200
vm1                  1     2     10000      4000
vm1                  1     3      1000       500
- to change (or check) the scheduling parameters of VCPUs 0 and 3 only, use
# xl sched-rtds -d vm1 -v 0 -p 100 -b 50 -v 3 -p 300 -b 150
# xl sched-rtds -d vm1 -v 0 -v 3
Name                ID  VCPU    Period    Budget
vm1                  1     0       100        50
vm1                  1     3       300       150
For more information and examples, see the xl manual page (search for sched-rtds).
Credit2 runqueue arrangement and hard-affinity support
Xen 4.7 allows one to specify how host CPUs are arranged in runqueues within the Credit2 scheduler. Valid alternatives include, for instance, core and socket.
More fine-grained runqueue arrangement (as with core) means more accurate load balancing (e.g., it will deal better with hyperthreading), but also more overhead.
To make this effective (e.g., to use Credit2 with per-socket runqueues), add the credit2_runqueue option to the hypervisor boot command line, as sketched below.
More information can be found in the hypervisor boot parameter documentation (search for credit2_runqueue).
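As an illustration, assuming a GRUB-based system with the usual Xen integration (the variable name and the way you regenerate the bootloader configuration depend on your distro), booting Credit2 with per-socket runqueues could look like this:
# e.g. in /etc/default/grub on a Debian-style system, followed by update-grub:
GRUB_CMDLINE_XEN="sched=credit2 credit2_runqueue=socket"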
In Xen 4.7, Credit2 supports hard-affinity. It can be set by means of the xl vcpu-pin subcommand. If set for a VCPU, hard-affinity restricts the set of PCPUs where that VCPU can run.
To check that it works, give the VCPUs of a VM a hard-affinity, as follows:
# xl vcpu-pin 1 all 16-18
And then check where they actually execute, by looking at:
# xl vcpu-list 1
Name                    ID  VCPU  CPU  State  Time(s)  Affinity (Hard / Soft)
debian.guest.osstest     1     0   17  r--        5.3  16-18 / all
debian.guest.osstest     1     1   18  r--        3.3  16-18 / all
What we want is for the values in the CPU column (for VCPUs that are running) to always be within the set of PCPUs we specified.
Hotplug disk backends (drbd, iscsi, etc.) for HVM guests
If you use drbd, iscsi, nbd, or other hotplug-script-based disk backends, try them with HVM guests.
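For instance, a hypothetical HVM guest disk entry using a hotplug script might look like the following; the script name and target value are purely illustrative, so use whatever is appropriate for your storage backend (see the xl disk configuration documentation for details):
# In the HVM guest configuration file:
builder = "hvm"
disk    = [ 'vdev=xvda, format=raw, script=block-iscsi, target=iqn.2016-05.example.com:testvol' ]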
Removal of core features at compile time via KCONFIG
Xen 4.7 introduces the ability to remove core Xen hypervisor features at compile time via KCONFIG. We expect that this functionality will initially only be used for security and embedded applications, primarily targeting integration via the Yocto project. Yocto integrates with Xen via its meta-virtualization layer and the xen-image-minimal build support. The Yocto project currently integrates with Xen 4.6.1 (Yocto kergoth release). We expect that Xen with KCONFIG will be integrated with upstream Yocto once Xen 4.7.0 has been released.
If you do want to test specific aspects of this new feature before Yocto integration has completed, please send a mail to xen-devel@ and CC the maintainer (cardoe AT cardoe DOT com) for further instructions.
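If you do decide to experiment on your own, a rough sketch of the build steps is below; the XEN_CONFIG_EXPERT switch is what gates non-default Kconfig settings in this release, but check the build documentation if it does not behave as described:
# Configure which hypervisor features to build, then rebuild the hypervisor:
make -C xen menuconfig XEN_CONFIG_EXPERT=y
make dist-xen XEN_CONFIG_EXPERT=y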
Specific ARM Test Instructions
Boards and hardware we do not test in our CI Loop
Although we do have automated Test Infrastructure for the project, we only include rackable hardware in our CI Loop (we do, however, have a mixture of Allwinner and Exynos processors in a custom chassis). If you have one of the following boards and want to ensure that Xen 4.7 runs on it, please make sure you run the Xen ARM Manual Smoke Test on an RC.
Boards not tested by our CI Loop: Allwinner sun6i/A31, DRA7[J6] EVM, Exynos5410, HiKey board from 96boards.org, Mustang (XC-1), OMAP5432, Renesas R-Car H2, Versatile Express and Xilinx Zynq Ultrascale MPSoC
We are also not able to include non-production servers that require a legal agreement such as an NDA into our Test Infrastructure.
ACPI support on ARM
ACPI support requires a platform with support for ACPI 6.0 (or later). Currently there is no publicly available hardware where this can be tested, with the exception of the AEMv8A Foundation Model. For more information, see
Wallclock support
Xen now exposes the wallclock time to guests. Checking the date and time in an ARM guest is all that is needed to verify this, as long as the guest does not run ntpdate or otherwise sync its clock over the network.
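A trivial way to check this (assuming the guest's clock is not being synchronised over the network) is to compare the time reported in dom0 and in the guest:
# Run in dom0 and in the ARM guest; the UTC times should match closely:
date -u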
Specific x86 Test Instructions
Huge PV Domains
The Xen Project Hypervisor supports starting a Dom0 with very large memory. PV guest limit restrictions of 512GB have been removed to allow the creation of huge PV domains in the TB range via the XL command line interface.
To test, create a PV domain with >512 GB of RAM.
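For example, a hypothetical PV guest configuration with 768 GiB of memory might look like this (all names and paths are illustrative, and you obviously need a host with enough RAM):
# huge-pv.cfg
name       = "huge-pv-test"
bootloader = "pygrub"
memory     = 786432        # in MiB, i.e. 768 GiB, above the old 512 GiB limit
vcpus      = 8
disk       = [ 'phy:/dev/vg0/huge-pv-disk,xvda,w' ]
# Then start it with:
xl create huge-pv.cfg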
Intel Code and Data Prioritization (CDP)
Code and Data Prioritization (CDP) Technology is an extension of CAT, which is available on Intel Broadwell and later server platforms. CDP enables isolation and separate prioritization of code and data fetches to the L3 cache in a software configurable manner, which can enable workload prioritization and tuning of cache capacity to the characteristics of the workload. CDP extends Cache Allocation Technology (CAT) by providing separate code and data masks per Class of Service (COS).
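If you have hardware with CDP support and want to exercise it, a rough sketch follows: CDP is enabled via a hypervisor boot parameter and then driven through xl's PSR commands. Treat the parameter and option names below as assumptions to verify against the xl man page and the hypervisor command-line documentation, and the mask values as illustrative:
# On the hypervisor boot command line:
psr=cdp
# From dom0, set separate code and data cache masks for a domain, then inspect them:
xl psr-cat-cbm-set -c <domain> 0x7f
xl psr-cat-cbm-set -d <domain> 0x3f
xl psr-cat-show <domain>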
For more information see:
COLO - Coarse Grain Lock Stepping
COLO, or Coarse Grain Lock Stepping, is a High Availability solution that builds on top of Remus.
COLO is different from traditional High Availability solutions, which are based either on instruction-level lock stepping (excessive overheads) or on periodic checkpointing such as Remus (high network latency, large VM checkpointing overhead). On Xen, COLO builds on top of Remus and uses a “relaxed” approach to checkpointing: in other words, COLO only checkpoints if absolutely necessary, which for many use-cases provides near-native performance.
The COLO Manager component is now part of Xen 4.7, while other components will eventually be part of QEMU (they can be downloaded from a specific git repository).
xSplice - binary patching of the hypervisor
xSplice is a Xen technology that makes it possible to binary patch the running hypervisor with a payload file, primarily (but not necessarily only) intended to contain security updates. v1 of xSplice is in technology preview mode and compile-disabled by default. It also has some restrictions on what can be encoded in the payload file; most notably, payloads against .data sections and payloads that NOP (remove) existing functions are not supported. Xen 4.7 comes with built-in hypervisor support and the xen-xsplice upload|apply|replace|revert tool to manage payloads (the code is in tools/misc). Additional tools such as xsplice-build, which creates a payload, are at this stage not shipped with Xen, but are available out-of-tree.
To test xSplice, check out:
- Build Xen with xSplice enabled, see Enabling xSplice in hypervisor
- Patch the hypervisor as it is running. There are three simple built-in examples; see How to build built-in examples on how to build, install and test them.
- Alternatively, see xsplice-build-tools on how to build, install, and test it.
Known limitations of the v1 technology preview (a quick usage sketch follows this list):
- Works for Linux and FreeBSD dom0's only
- Does not yet work on the ARM architecture
- Cannot generate payloads for patches with .data sections in the ELF file (in other words, patches that introduce global or static variables cannot be encoded)
- Cannot generate payloads that remove (NOP) functions from the hypervisor
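Assuming you have built Xen with xSplice enabled and have one of the example payloads at hand, managing it could look roughly like this (the payload name and file are illustrative; the sub-commands are the ones listed above, but check the in-tree xSplice documentation for the exact invocation):
xen-xsplice upload xen_hello_world xen_hello_world.xsplice
xen-xsplice apply xen_hello_world
xen-xsplice revert xen_hello_world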
RC specific things to test
- XSM and driver domain: start xl devd in the driver domain, and check whether any XSM denial messages show up in xl dmesg (see the sketch below).
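A minimal sketch of that check, assuming XSM/FLASK is enabled and a driver domain is already set up:
# In the driver domain:
xl devd
# In dom0, look for XSM/FLASK denials in the hypervisor log (typically "avc: denied" lines):
xl dmesg | grep -i avc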
Reporting Bugs (& Issues)
- Use Freenode IRC channel #xentest to discuss questions interactively
- Report any bugs / missing functionality / unexpected results.
- Please put [TestDay] into the subject line
- Also make sure you specify the RC number you are using
- Make sure to follow the guidelines on Reporting Bugs against Xen (please CC the relevant maintainers and the Release Manager - wei dot liu2 at citrix dot com).
Reporting success
We would love it if you could report successes by e-mailing xen-devel@ (see above), preferably including:
- Hardware: Please at least include the processor manufacturer (Intel/AMD). Other helpful information might include specific processor models, amount of memory, number of cores, and so on
- Software: If you're using a distro, the distro name and version would be the most helpful. Other helpful information might include the kernel that you're running, or other virtualization-related software you're using (e.g., libvirt, xen-tools, drbd, &c).
- Guest operating systems: If running a Linux version, please specify whether you ran it in PV or HVM mode.
- Functionality tested: At a high level, this would include toolstacks and major functionality (e.g., suspend/resume, migration, pass-through, stubdomains, &c)
The following template might be helpful. If you are testing Xen 4.7.0-<some RC>, please make sure you state which RC you used!
Subject: [TESTDAY] Test report

* Hardware:

* Software:

* Guest operating systems:

* Functionality tested:

* Comments:
Subject: [TESTDAY] Test report

* Hardware:
Dell 390's (Intel, dual-core) x15
HP (AMD, quad-core) x5

* Software:
Ubuntu 10.10, 11.10
Fedora 17

* Guest operating systems:
Windows 8
Ubuntu 12.10, 11.10 (HVM)
Fedora 17 (PV)

* Functionality tested:
xl
suspend/resume
pygrub

* Comments:
Windows 8 booting seemed a little slower than normal. Other than that, great work!