Hyperthreading

Hyperthreading FAQ

What is hyperthreading?

Hyper-threading is Intel's name, and the most commonly used name, for the computing concept known as simultaneous multithreading (SMT). AMD has an equivalent technology known as Clustered Multithreading.

Hyperthreading is a technology designed to improve the computational performance of superscalar processors. The basic idea is that each core is split into multiple threads, each of which behaves as a separate processor as far as the OS is concerned, but which shares a significant portion of its resources with the other threads on the same core.
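
To see what this looks like from the OS's point of view, the short Python sketch below groups Linux's logical CPUs by the physical core they share, using the sysfs topology files. This is a Linux-specific illustration, not part of Xen; the paths exist on any recent Linux kernel, whether on bare metal or in dom0.

 # Group logical CPUs (hyperthreads) by the physical core they share,
 # using the Linux sysfs topology files.
 import glob

 sibling_groups = set()
 for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"):
     with open(path) as f:
         # e.g. "0,4" or "0-1": the logical CPUs sharing one physical core
         sibling_groups.add(f.read().strip())

 for group in sorted(sibling_groups):
     print("hyperthreads sharing one core:", group)

On a processor with hyperthreading enabled, each printed group contains two (or more) logical CPU numbers; with it disabled, each group contains exactly one.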

A simple example which illustrates the point

Statistically, about 1 in 6 instructions is a memory access. Memory (RAM) keeps getting slower relative to processor cycle times. When a processor needs to do a memory read, it must sit idle waiting for the data, which is bad for performance. With hyperthreading, the processor can be waiting on a memory read for one thread while executing instructions from the other thread. The overall throughput of instructions is therefore higher, even though the two threads share the processor's resources.
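
As a rough back-of-the-envelope version of that argument (all numbers below are made-up assumptions for illustration, not measurements), suppose one instruction in six is a memory access that stalls the core for 100 cycles, and every other instruction takes a single cycle:

 # Toy throughput model; all numbers are illustrative assumptions.
 MISS_CYCLES = 100     # assumed stall for one memory access, in cycles

 # One thread: 5 one-cycle instructions plus one access that stalls.
 cycles = 5 + MISS_CYCLES
 ipc_one_thread = 6 / cycles

 # Two hyperthreads (idealised): the second thread keeps retiring one
 # instruction per cycle during the first thread's stall.  This ignores
 # the second thread's own memory stalls and any resource contention.
 ipc_two_threads = (6 + MISS_CYCLES) / cycles

 print(f"IPC with one thread : {ipc_one_thread:.2f}")
 print(f"IPC with two threads: {ipc_two_threads:.2f}")

The idealised numbers overstate the benefit, but they show why letting a second thread use the otherwise idle cycles raises overall instruction throughput.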

The more complicated explanation

SMT is designed to exploit thread-level parallelism to help mitigate the cost of instruction dependencies. With out-of-order execution, being able to fill the re-order buffer with instructions from two independent threads means that the average number of instruction dependencies in the buffer is halved. This means that the dispatch unit has, on average, twice as many instructions to choose from, increasing the likelihood that every execution unit is performing useful work rather than waiting for dependent instructions to complete. Overall, this statistically increases the processor's instruction throughput.
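
A minimal way to put numbers on that (a toy model, not a pipeline simulator): the time needed to drain a window of in-flight instructions is bounded below by its longest dependency chain. In the worst case, where each thread's instructions form one serial chain, splitting the window between two independent threads halves the critical path and doubles the achievable throughput. The window size and worst-case assumption below are purely illustrative.

 # Toy ILP model: draining a window of in-flight instructions takes at
 # least as long as its longest dependency chain.  Worst case assumed:
 # each thread's instructions form one serial chain.
 WINDOW = 32   # assumed re-order buffer size, in instructions

 def drain_cycles(num_threads, window=WINDOW):
     # window/num_threads instructions per thread; the per-thread chains
     # are independent, so they drain in parallel and the critical path
     # is the length of a single chain.
     return window // num_threads

 for threads in (1, 2):
     cycles = drain_cycles(threads)
     print(f"{threads} thread(s): critical path {cycles} cycles, "
           f"throughput {WINDOW / cycles:.1f} instructions per cycle")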

Is Xen hyperthreading aware?

The short answer is, "yes".

The long answer is that Xen is able to understand the CPU topology of both Intel hyperthreads and AMD execution units, and passes this information on to the Xen scheduler. Xen has four available schedulers, which can be selected at boot time with a Xen command-line option. Both of the general-purpose schedulers, Credit1 and Credit2, are HT-aware, and they will avoid running vCPUs on two sibling threads if there are idle cores available.

Credit1 has been HT-aware for a long time, while Credit2 saw improvements to its HT support in Xen 4.7 and can be considered fully HT-aware starting from Xen 4.8.

The other two schedulers, ARINC653 and RTDS (both real-time oriented), are not HT-aware. RTDS may become so at some point. The old real-time SEDF scheduler, removed in Xen 4.7, was never HT-aware.
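
The scheduler is chosen at boot by adding a sched= option (for example sched=credit2) to the Xen command line. To confirm which scheduler is actually running, and whether Xen sees multiple threads per core, you can parse the output of xl info from dom0. The sketch below assumes the xen_scheduler and threads_per_core fields that current xl versions print.

 # Report the active Xen scheduler and the threads-per-core count as
 # seen by Xen, by parsing `xl info` output from dom0.
 import subprocess

 info = {}
 for line in subprocess.check_output(["xl", "info"], text=True).splitlines():
     if ":" in line:
         key, _, value = line.partition(":")
         info[key.strip()] = value.strip()

 print("scheduler        :", info.get("xen_scheduler", "unknown"))
 print("threads per core :", info.get("threads_per_core", "unknown"))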

Should I enable hyperthreading?

In theory, for software that is HT-aware, enabling HT should never hurt, and should only help. However, people have experienced situations where enabling hyperthreading results in significantly worse performance than with it disabled. This may be due to many factors, including an increased cache footprint, or quirks in the microarchitectural implementation of the particular processor.

We on the xen-devel mailing list would like to know about such cases so we can fix them. One issue that existed for some time was the lack of a PV-aware locking mechanism in the Linux kernel. That has since been fixed for both HVM (Linux 3.11 and later) and PV (Linux 2.6.32 and later) guests, and is enabled when the kernel is built with CONFIG_PARAVIRT_SPINLOCKS.
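
A quick way to check whether a given guest kernel was built with that option is to inspect its kernel config, for example as below (the config file location varies by distribution, and /proc/config.gz is only present if the kernel was built with CONFIG_IKCONFIG_PROC):

 # Check whether the running kernel was built with paravirtualised
 # spinlock support.  Looks at /boot/config-<release>, falling back to
 # /proc/config.gz if available.
 import gzip
 import os

 release = os.uname().release
 try:
     with open(f"/boot/config-{release}") as f:
         config = f.read()
 except FileNotFoundError:
     with gzip.open("/proc/config.gz", "rt") as f:
         config = f.read()

 print("CONFIG_PARAVIRT_SPINLOCKS=y" in config)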

Note that because the Xen Credit1 scheduler is HT-aware, guest OS software does not need to be HT-aware.

So if you are able to run an accurate benchmark of the workload you expect to run, try it both with HT enabled and with it disabled, and do whichever performs best.
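
One way to run such a comparison without rebooting to toggle HT in the firmware is to pin the guest's vCPUs to only one thread per core for one of the benchmark runs. The sketch below is a hypothetical example: the domain name and pCPU numbers are placeholders, and it only approximates "HT off" if nothing else is left running on the sibling threads (for instance by pinning other domains away from them, or by using cpupools).

 # Pin each vCPU of a guest to a distinct physical core, so a benchmark
 # can be compared with and without hyperthread sharing.
 import subprocess

 DOMAIN = "mytestguest"                # hypothetical guest name
 ONE_THREAD_PER_CORE = [0, 2, 4, 6]    # example pCPUs, one per core

 for vcpu, pcpu in enumerate(ONE_THREAD_PER_CORE):
     # xl vcpu-pin <domain> <vcpu> <cpus> sets the vCPU's hard affinity
     subprocess.check_call(["xl", "vcpu-pin", DOMAIN, str(vcpu), str(pcpu)])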

If you're not able to run a benchmark, and you're using the default Xen scheduler, go ahead and enable HT; but if you encounter performance problems, try disabling it and see if it helps.