Huge Page Support

From Xen

What Are Huge Pages?

  • Huge pages are also known as "superpages" (the FreeBSD term) or "large pages" (the Microsoft Windows term).
  • Newer AMD64 processors can use 1GB pages in long mode.
  • Linux has supported huge pages on several architectures since the 2.6 series via the hugetlbfs filesystem.
  • Xen Project supports allocating huge pages for HVM and PVH guests (use in PV guests is not supported). The hypervisor itself uses huge pages wherever it can.
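Linux reports its huge page state through counters in /proc/meminfo. As a quick way to inspect them, here is a small sketch (Python, Linux-only; the field names are the standard Linux ones, and the sample values are illustrative):

```python
# Sketch: pull the hugepage-related counters out of /proc/meminfo text.
def parse_hugepage_info(meminfo_text):
    info = {}
    for line in meminfo_text.splitlines():
        if line.startswith(("HugePages_", "Hugepagesize", "AnonHugePages")):
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])  # counts, or sizes in kB
    return info

# On a live system: parse_hugepage_info(open("/proc/meminfo").read())
sample = """\
AnonHugePages:         0 kB
HugePages_Total:      20
HugePages_Free:       20
Hugepagesize:       2048 kB
"""
print(parse_hugepage_info(sample))
```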

Using Huge Pages

  • In the Hypervisor: In recent versions, huge page support is enabled by default. Older versions (and custom builds with different defaults) may need to specify the hypervisor boot command line flag "allowsuperpage" (formerly called "allowhugepage").
  • In the guest: The balloon driver does not support huge pages, so the DomU's memory must stay constant. Create the DomU with minimum memory equal to maximum memory so the balloon driver is never invoked, and never run the xl mem-set command against the DomU to change its memory size. Then, within the VM, execute the following:
   # echo 20 > /proc/sys/vm/nr_hugepages

   # cat /proc/meminfo
   ...
   AnonHugePages:         0 kB
   HugePages_Total:      20
   HugePages_Free:       20
   HugePages_Rsvd:        0
   HugePages_Surp:        0
   Hugepagesize:       2048 kB
   DirectMap4k:     1056768 kB
   DirectMap2M:           0 kB
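A minimal xl guest configuration following the advice above might look like this (a sketch with illustrative names and values; the key point is that memory equals maxmem, so ballooning never occurs):

```
# /etc/xen/hugepage-guest.cfg -- example values only
name    = "hugepage-guest"
type    = "hvm"        # or "pvh"; PV guests do not get superpages
vcpus   = 2
memory  = 4096         # MiB; keep equal to maxmem so the balloon driver is never called
maxmem  = 4096
disk    = [ "phy:/dev/vg0/guest,xvda,w" ]
```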

Huge Pages: Internals

If you use an HVM or PVH guest in Hardware Assisted Paging (HAP) mode (the default), and minimize memory ballooning, you will be maximizing your use of hugepages from the hypervisor's perspective.

Superpages have two advantages, both of which translate to reduced overhead due to TLB misses on workloads that involve accessing large amounts of memory.

1. Superpages in the pagetable translate to superpage entries in the Translation Lookaside Buffer (TLB), so each entry maps 2MiB instead of 4KiB. For example, a 16-entry TLB then covers 32MiB (16x2MiB) of memory rather than 64kiB (16x4k). This translates to fewer TLB misses.

2. Superpages skip one level of the pagetable walk on a TLB miss, making TLB misses less expensive.
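Both advantages are easy to quantify. The sketch below uses the 16-entry TLB from the example above and the standard x86-64 4-level page walk (plain arithmetic, not Xen-specific code):

```python
KIB = 1024
MIB = 1024 * KIB

def tlb_coverage(entries, page_size):
    """Memory covered by a fully populated TLB."""
    return entries * page_size

# Advantage 1: a 16-entry TLB covers 512x more memory with 2 MiB pages.
assert tlb_coverage(16, 4 * KIB) == 64 * KIB
assert tlb_coverage(16, 2 * MIB) == 32 * MIB

# Advantage 2: with x86-64 4-level paging, a 2 MiB mapping ends at the
# page-directory level, so a TLB miss walks 3 levels instead of 4
# (and a 1 GiB mapping ends one level higher still).
WALK_LEVELS = {4 * KIB: 4, 2 * MIB: 3, 1024 * MIB: 2}
assert WALK_LEVELS[2 * MIB] == WALK_LEVELS[4 * KIB] - 1
```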

Superpages might be used in several places:

  • The guest pagetables
  • The hypervisor's pagetables
  • For an HVM or PVH guest, the Physical-to-Machine (p2m) table (which is inside Xen)
  • For a guest running in shadow mode, the shadow pagetables

Xen will always use superpages in its own pagetables when possible.

Xen will always use superpages in the p2m table when possible. On a clean machine that has never done any ballooning, this should always happen. Ballooning can fragment the p2m, making it impossible to use superpages.

Xen has no support for superpages in the pagetables of PV guests. Oracle did some work to make this possible some time back, but it was never upstreamed, and they switched to pursuing PVH instead.

HVM and PVH guests can always put superpages in their pagetables.

At the moment, shadow pagetables never have superpage entries.

When HVM and PVH guests are running in HAP mode (the default), the TLB will contain superpage entries, and the cost of a TLB miss will go down compared to having no superpage entries.

When HVM and PVH guests are running in shadow mode, they can use superpages in their own pagetables; however, the shadow tables, used by the actual hardware, will not have superpages. This means that the TLB will not contain superpage entries, nor will the cost of a TLB miss go down compared to having no superpage entries.

However, a TLB miss when running in HAP mode is much more expensive than a TLB miss when running in shadow mode. So whether HAP or shadow provides better performance depends on the parameters of the particular workload being run. (In most cases, HAP will provide better performance.)
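Why a HAP miss costs more can be seen from a simple model of the two-dimensional page walk: each step of the guest's walk, plus the final guest-physical address, must itself be translated through the host's tables. This is the textbook worst-case count for nested paging, not a Xen-specific measurement:

```python
def native_walk_accesses(levels):
    """Memory accesses to resolve a TLB miss with a native (one-dimensional)
    page walk, as the hardware performs over shadow pagetables."""
    return levels

def nested_walk_accesses(guest_levels, host_levels):
    """Worst-case accesses for a nested (HAP) walk: each guest level, plus
    the final guest-physical address, needs a full host walk, and each
    guest entry itself costs one more access."""
    return (guest_levels + 1) * (host_levels + 1) - 1

# Shadow mode: a 4-level walk touches memory 4 times.
assert native_walk_accesses(4) == 4
# HAP mode: up to 24 accesses with 4-level guest and host tables.
assert nested_walk_accesses(4, 4) == 24
```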

External References