Understanding vNUMA (Virtual Non-Uniform Memory Access)

Andrey Pogosyan

Andrey Pogosyan is a Virtualization Architect whose focus is on infrastructure virtualization, mainly involving VMware and Citrix products. Having worked in the IT industry for 10+ years, Andrey has had the opportunity to fulfill many different roles, ranging from Desktop Support all the way up to Architecture and Implementation. Most recently, Andrey has taken a great interest in the datacenter technology stack encompassing Virtualization, mainly VMware vSphere\View, Citrix XenApp\XenDesktop and Storage (EMC, HP, NetApp).

10 Responses

  1. Awesome article Andrey! Thanks for sharing

  2. Great post Andrey, well explained.

    Thought I would share that enabling CPU Hot Add disables vNUMA; see VMware KB 2040375: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2040375
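    The interaction the KB describes shows up in the VM's .vmx configuration. A minimal sketch of the relevant entries (values here are illustrative; the setting names are the documented ones):

    ```
    vcpu.hotadd = "TRUE"    # CPU Hot Add enabled -- vNUMA is disabled for this VM
    numa.vcpu.min = "9"     # default minimum vCPU count before vNUMA is exposed
    ```

    With vcpu.hotadd set to TRUE, the guest sees a uniform (interleaved) topology even if it has 9 or more vCPUs.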

  3. Gabriel Santamaria says:

    Very clear and concise, nice visuals

  4. Vincent says:

    Hello Andrey,

    Great blog post about vNUMA. One thing is still not clear to me.

    By using the default setting of Node Interleaving (disabled), the system will build a System Resource Allocation Table (SRAT).
    ESX uses the SRAT to understand which memory bank is local to a pCPU and tries to allocate local memory to each vCPU of the virtual machine.
    By using local memory, the CPU can use its own memory controller and does not have to compete for access to the shared interconnect (bandwidth) and reduce the amount of hops to access memory (latency).
    Source: http://frankdenneman.nl/2010/12/28/node-interleaving-enable-or-disable/

    Is vNUMA enabled/disabled based on the availability of NUMA architecture, or does it depend on whether node interleaving is enabled or disabled?

    E.g., vNUMA is enabled when a VM has 9 vCPUs or more, while node interleaving is disabled (the default) and ESXi uses the SRAT.

    Cheers,
    Vincent

    • Node interleaving is typically an option on NUMA-ready servers. In practice this means that, aside from having a NUMA-capable server, the Node Interleaving option should also be disabled. This forces the ESXi host to use the SRAT to better place virtual machines on the correct NUMA node.

      Node Interleaving simply lets the CPU choose where to place memory, so when it is disabled, ESXi needs to rely on the SRAT to properly place the virtual machine on the correct NUMA node.

      In some cases, enabling Node Interleaving can improve performance, but not in the case of ESXi, where you’re hosting multiple virtual machines.

      NUMA architecture + Interleaving Disabled = vNUMA / SRAT
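      One way to sanity-check the result of that equation on a given VM (a sketch; the log path and exact log wording vary by environment and ESXi version) is to look for NUMA-related lines in the VM's vmware.log, which record the topology ESXi exposed to the guest:

      ```shell
      # Path is illustrative -- substitute your VM's datastore directory.
      LOG="${LOG:-/vmfs/volumes/datastore1/myvm/vmware.log}"
      # Print any NUMA-related lines; warn if the log is not at the assumed path.
      [ -f "$LOG" ] && grep -i "numa" "$LOG" || echo "log not found: $LOG"
      ```

      Lines mentioning virtual NUMA nodes indicate that vNUMA was exposed; their absence (on a NUMA host) suggests it was suppressed, e.g. by CPU Hot Add or a low vCPU count.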
