Archived/2014 GSoC and OPW Round 8 Projects

We are pleased to announce that a total of 7 applicants are participating in OPW and GSoC this year. This page lists the accepted projects and provides a space for students and interns to link to their own pages.

Accepted Projects

GSoC Projects

Implement Xen PVUSB support in xl/libxl toolstack

Date of insert: 01/12/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Mentor: George Dunlap, Student: Bo Cao
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: xl/libxl does not currently support Xen PVUSB functionality. Port the feature from xm/xend to xl/libxl. The necessary steps are:
  • Implement PVUSB in xl/libxl, making it functionally equivalent to xm/xend.
  • Send the patches to the xen-devel mailing list for review and comments.
  • Fix any issues raised in review.
  • Repeat until the feature is merged into xen-unstable.
  • PVUSB requires the PVUSB drivers in dom0/domU, and supports both PV domUs and HVM guests with PV drivers.
  • More info: http://wiki.xen.org/xenwiki/XenUSBPassthrough
Outcomes: Not specified
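
As a rough illustration only: libxl had no PVUSB calls when this project was proposed, so the type and function names in the following minimal C sketch are hypothetical, and the stub merely stands in for the xenstore handshake that the real PVUSB backend/frontend drivers perform.

    /* Hypothetical sketch of a libxl-style PVUSB attach path; none of
     * these names existed in libxl at the time -- they are illustrative. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        int hostbus;        /* bus number of the host USB device */
        int hostaddr;       /* device address on that bus */
        int backend_domid;  /* domain running the PVUSB backend */
    } usbdev_info;

    /* Stand-in for a libxl-style device-add call: the real code would
     * allocate a port on a virtual USB controller, write the backend and
     * frontend xenstore nodes, and wait for the drivers to handshake. */
    static int pvusb_device_add(uint32_t domid, const usbdev_info *dev)
    {
        printf("attach %d.%d to domain %u via backend %d\n",
               dev->hostbus, dev->hostaddr, (unsigned)domid,
               dev->backend_domid);
        return 0;
    }

    int main(void)
    {
        usbdev_info dev = { .hostbus = 1, .hostaddr = 4, .backend_domid = 0 };
        return pvusb_device_add(7, &dev);
    }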


Lazy restore using memory paging

Date of insert: 01/20/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Mentor: Andres Lagar-Cavilla, Student: Dushyant Behl
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Medium
Skills Needed: A good understanding of save/restore, and virtualized memory management (e.g. EPT, shadow page tables, etc). In principle the entire project can be implemented in user-space C code, but it may be the case that new hypercalls are needed for performance reasons.
Description: VM save/restore results in a large amount of I/O and non-trivial downtime, as the entire memory footprint of a VM is read back from storage.

Xen memory paging support in x86 is now mature enough to allow for lazy restore, whereby the footprint of a VM is backfilled while the VM executes. If the VM hits a page not yet present, it is eagerly paged in.

There has been some concern recently about the lack of documentation and mature tools that use xen-paging; this project is a good way to address that.
Outcomes: Expected outcome:
  • Mainline patches for libxc and libxl
Comment (dushyant): Hi, I am working on this project.
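
To make the control flow concrete, here is a structural sketch in C of the pager loop such a lazy restore would run. All three helpers are stubs standing in for libxc's paging interface and the event ring shared with the hypervisor; their names and signatures are assumptions, not the real API.

    /* Structural sketch of a lazy-restore pager loop. The helpers are
     * stubs; their names and signatures are assumptions. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* Pretend one page-in request arrives, for guest frame 42. */
    static int wait_for_pagein_request(uint64_t *gfn)
    {
        static int pending = 1;
        *gfn = 42;
        return pending--;
    }

    /* Fetch the page contents from the on-disk save image. */
    static void read_page_from_savefile(uint64_t gfn, void *buf)
    {
        (void)gfn;
        memset(buf, 0, PAGE_SIZE);
    }

    /* Map the page into the guest and unpause the faulting vCPU. */
    static void load_page_and_resume(uint64_t gfn, const void *buf)
    {
        (void)buf;
        printf("paged in gfn %llu\n", (unsigned long long)gfn);
    }

    int main(void)
    {
        uint8_t page[PAGE_SIZE];
        uint64_t gfn;

        /* The VM starts running with most of its memory marked paged-out.
         * A fault on a missing page surfaces as a page-in request; the
         * pager satisfies it from the save file and resumes the vCPU.
         * A background thread would backfill untouched pages meanwhile. */
        while (wait_for_pagein_request(&gfn)) {
            read_page_from_savefile(gfn, page);
            load_page_and_resume(gfn, page);
        }
        return 0;
    }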


HVM per-event-channel interrupts

Date of insert: 01/30/2013; Verified: Not updated in 2020; GSoC: yes
Technical contact: Mentor: Paul Durrant, Student: Yandong Han
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: C, some prior knowledge of Xen useful
Description: Windows PV drivers currently have to multiplex all event channel processing onto a single interrupt, which is registered with Xen using the HVM_PARAM_CALLBACK_IRQ parameter. This results in a lack of scalability when multiple event channels are heavily used, such as when multiple VIFs in the VM are simultaneously under load. Goal: Modify Xen to allow each event channel to be bound to a separate interrupt (the association being controlled by the PV drivers in the guest), allowing separate event channel interrupts to be handled by separate vCPUs. There should be no modifications required to the guest OS interrupt logic to support this (as there are with the current Linux PV-on-HVM code), as that would not be possible with a Windows guest.
Outcomes: Code is submitted to xen-devel@xen.org for inclusion in xen-unstable
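
A guest-side sketch of the intended usage, in C. The binding operation below is entirely hypothetical (defining the real Xen interface is the point of the project); it only illustrates steering each event channel's interrupt to its own vector and vCPU instead of multiplexing over HVM_PARAM_CALLBACK_IRQ.

    /* Illustrative only: this operation is hypothetical. */
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in: associate event channel 'port' with local
     * APIC 'vector' delivered to 'vcpu'. */
    static int bind_evtchn_to_vector(uint32_t port, uint8_t vector,
                                     uint32_t vcpu)
    {
        printf("evtchn %u -> vector 0x%02x on vCPU %u\n",
               (unsigned)port, vector, (unsigned)vcpu);
        return 0;
    }

    int main(void)
    {
        /* One event channel per VIF, each steered to its own vCPU,
         * instead of multiplexing everything over the single
         * HVM_PARAM_CALLBACK_IRQ interrupt. */
        uint32_t vif_ports[4] = { 10, 11, 12, 13 };
        for (unsigned i = 0; i < 4; i++)
            bind_evtchn_to_vector(vif_ports[i], (uint8_t)(0x70 + i), i);
        return 0;
    }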


Mirage OS cloud API support

Date of insert: 28/11/2013; Verified: Not updated in 2020; GSoC: yes
Technical contact: Mentor: Dave Scott; Student: Jyotsna Prakash
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: medium
Skills Needed: OCaml
Description: MirageOS (see http://xenproject.org/developers/teams/mirage-os.html, http://www.openmirage.org/) is a type-safe unikernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening kernel. A MirageOS application typically runs via several communicating kernel instances on the cloud. Today these instances are difficult to manage; we would like to explore strategies for managing these distributed computations using common public cloud APIs such as those exposed by Amazon EC2 and Rackspace.

First we need to create pure OCaml API bindings for (e.g.) EC2 and Rackspace (purity is needed to ensure portability). These API bindings can then be used to provide operating-system-level abstractions to the unikernels. For example, where a traditional VM might hotplug a vCPU, a MirageOS application would request a "VM create" using the cloud API and "connect" the new instance to the existing network. We should be able to spin up thousands of "CPUs" by using such APIs in a cluster environment.

As well as helping Xen/Mirage, the public cloud API bindings will be very useful to other people in other contexts, a nice side effect.

See https://fedoraproject.org/wiki/User:Gholms/EC2_Primer for a primer on how to use EC2.
Outcomes: 1. One or more public cloud API bindings plus examples, in a standalone repo on GitHub; 2. an example Mirage app which uses these APIs to spin up a new VM.


Parallel xenwatch kthread

Date of insert: 01/08/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Mentor: Boris Ostrovsky, Student: Tülin İZER
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Low-Medium
Skills Needed: You need to have an understanding of:
  • locks: spinlocks and mutexes
  • how to build the Linux kernel
Description: Xenwatch event handling is serialized by a coarse lock, which becomes a scalability bottleneck with a large number of guests. The goal is to rework the xenwatch locking so that it scales.

See https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/xen/xenbus/xenbus_xs.c#n768 for the code.
Outcomes: Expected outcome:
  • Upstream patches, or a draft of them.
  • A benchmark report comparing performance with and without the change.
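
As a userspace illustration of the direction (an assumption, not the project's actual design): instead of one xenwatch thread draining all events under a single coarse mutex, each event can be handed to its own worker so that independent watches make progress in parallel. In the kernel this would map onto per-event work items rather than pthreads.

    /* Userspace sketch (pthreads) of parallel watch handling; compile
     * with: gcc -pthread watch.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct watch_event {
        int id;  /* stands in for the xenstore path that fired */
    };

    /* Each callback runs without a global xenwatch lock; any shared
     * state it touches is protected by its own fine-grained lock. */
    static void *handle_event(void *arg)
    {
        struct watch_event *ev = arg;
        printf("handled watch event %d\n", ev->id);
        free(ev);
        return NULL;
    }

    int main(void)
    {
        pthread_t workers[4];

        /* Dispatch each pending event to its own worker so independent
         * watches no longer serialize behind one coarse mutex. */
        for (int i = 0; i < 4; i++) {
            struct watch_event *ev = malloc(sizeof(*ev));
            ev->id = i;
            pthread_create(&workers[i], NULL, handle_event, ev);
        }
        for (int i = 0; i < 4; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }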

Xen-related GSoC Projects by Other Mentoring Organizations

openSUSE: Add Snapshot management API to libvirt Xenlight driver

Date of insert: 21/04/2014; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Mentor: Jim Fehlig, Intern: David Kiarie
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: This project aims to implement a snapshot management API in the libvirt Xenlight (libxl) driver, enabling Xen users to easily manage virtual machine snapshots using libvirt client applications.
Outcomes: Not specified
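
For context, this is the client-side call the Xenlight driver needs to back with libxl functionality. The snapshot entry point virDomainSnapshotCreateXML is libvirt's real API; the connection URI and the domain name "guest1" are illustrative.

    /* Minimal libvirt client exercising the snapshot API; compile with:
     * gcc snap.c -lvirt. The URI and domain name are illustrative. */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("xen:///system");
        if (!conn) {
            fprintf(stderr, "failed to connect to the Xen driver\n");
            return 1;
        }

        virDomainPtr dom = virDomainLookupByName(conn, "guest1");
        if (!dom) {
            virConnectClose(conn);
            return 1;
        }

        const char *xml =
            "<domainsnapshot>"
            "  <name>snap1</name>"
            "  <description>before upgrade</description>"
            "</domainsnapshot>";

        /* This is the entry point the Xenlight driver must implement
         * (backed by libxl) for the call to succeed on Xen. */
        virDomainSnapshotPtr snap = virDomainSnapshotCreateXML(dom, xml, 0);
        if (snap)
            virDomainSnapshotFree(snap);

        virDomainFree(dom);
        virConnectClose(conn);
        return snap ? 0 : 1;
    }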

OPW Round 8

Improvements to the block I/O paravirtualized Xen drivers

Date of insert: 21/04/2014; Verified: Not updated in 2020; GSoC: No
Technical contact: Mentor: Konrad Rzeszutek Wilk, Intern: Arianna Avanzini
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: The block I/O layer of the Linux kernel has recently gained per-CPU queue support (the multi-queue block layer); this new component aims to reduce the lock contention and cache effects caused by a single per-device queue of I/O requests, one of the biggest bottlenecks in the kernel. Xen's paravirtualized block I/O drivers could also benefit from using the multi-queue API to allocate per-CPU block threads, hopefully increasing throughput and reducing service latency of I/O requests. See here for more details about the implementation plan.
Outcomes: The expected outcome of the project includes both patches allowing the Xen block I/O PV drivers to exploit the new multi-queue API of the block layer, and benchmark reports for the newly implemented mechanism. In more detail, the produced patches should:
  • add support to the blkfront driver to negotiate with the backend the number of queues used by the driver;
  • add support to the blkback driver to determine the number of hardware queues used by the device-specific driver, negotiate with the frontend the number of queues to use, and allocate an adequate number of I/O rings.
Patches for this functionality have been posted on LKML and will be reposted so that they can go into Linux 3.19.
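
The queue-count negotiation described above can be sketched in a few lines of plain C. The min(guest vCPUs, advertised hardware queues) policy and one shared ring per negotiated queue follow the description; the concrete numbers and helper names are made up for illustration.

    /* Sketch of the frontend/backend queue-count negotiation; numbers
     * and helper names are illustrative, not the final ABI. */
    #include <stdio.h>

    /* What the backend learns from the device-specific driver. */
    static unsigned backend_hw_queues(void) { return 8; }

    /* What the frontend would request: one queue per guest vCPU. */
    static unsigned frontend_vcpus(void) { return 4; }

    int main(void)
    {
        unsigned advertised = backend_hw_queues();
        unsigned requested  = frontend_vcpus();

        /* Both sides settle on the smaller value, and one shared I/O
         * ring is then allocated per negotiated queue. */
        unsigned nr_queues = requested < advertised ? requested : advertised;

        printf("negotiated %u queues -> %u shared rings\n",
               nr_queues, nr_queues);
        return 0;
    }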


Mirage OS contributions and improvements

Date of insert: 21/04/2014; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Mentor: Richard Mortier, Intern: Mindy Preston
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: 1. Booting on the myriad cloud providers remains a total pain, so figuring out how to create one command that takes some credentials and gets a unikernel started on Rackspace/Amazon/OpenStack would be very handy. There's also cloud-init to look at.

2. Protocol bisimulations against existing implementations: for a number of our libraries, we'd like a way to test our protocol code against standard implementations and verify that they are functionally equivalent. In certain cases, we will of course know that they are not, so we should be able to mark those as TODOs in our code. Good protocol testing choices: the TCP/IP stack vs Linux, the Cohttp web stack vs Nginx/Apache, the emerging SSL stack vs OpenSSL (important!).

3. New functionality: adding IPv6 support to mirage-net would be fairly straightforward and rather useful. Multipath TCP and/or TCPcrypt are more difficult but in scope.

4. If you feel like low-level hackery, porting Xen MiniOS to ARM would be a difficult but exceedingly rewarding project, as Mirage would then run on embedded devices like the Cubieboard2. This is a kernel-hacking-heavy project.

5. You could also pen-test the heck out of the libraries to find and fix denial-of-service issues (e.g. unbounded reads in Cohttp for long headers, that sort of thing). We know of quite a few, but a structured set of attacks would help keep them out.
Outcomes: Not specified

Students and Interns

This section allows our students and interns to link to their own areas on this wiki.