
VMware vSphere 7.x Memory Reclamation-Part 3: Transparent Page Sharing (TPS)


Previously, in Part 1: Basics and Part 2: MinFree, we discussed memory reclamation, the need for memory reclamation, ESXi memory states, and the sliding-scale method for calculating the MemMinFreePct value. In this post, we will explore a memory reclamation technique that shares memory pages, called Transparent Page Sharing (TPS). On an ESXi host, many workloads present opportunities for sharing memory across virtual machines as well as within a single virtual machine.

In other words, transparent page sharing is conceptually similar to deduplication. With page sharing, instead of keeping multiple duplicate copies of a memory page, the hypervisor finds page-sharing opportunities and keeps a single copy in host physical memory that is shared by multiple workloads. As a result, total host memory consumption is reduced and memory overcommitment can be accommodated.
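To make the idea concrete, here is a minimal Python sketch (a toy model of deduplication, not ESXi internals) showing how keeping one copy of identical page contents reduces the number of physical pages consumed:

```python
# Toy model (not ESXi code): count physical pages needed with and
# without deduplication of identical page contents.
def pages_needed(pages: list[bytes]) -> int:
    """Distinct page contents = physical pages needed after sharing."""
    return len(set(pages))

# Three VMs running the same guest OS have many identical pages.
vm_pages = [b"guest-os-code"] * 3 + [b"vm1-data", b"vm2-data", b"vm3-data"]
print(len(vm_pages))           # 6 pages consumed without sharing
print(pages_needed(vm_pages))  # 4 pages after sharing the duplicates
```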

However, from vSphere 6.0 onwards, inter-VM transparent page sharing is disabled by default due to security concerns. The default page-sharing scope is restricted to intra-virtual-machine memory sharing: pages are not shared across virtual machines, only within a single virtual machine. If required, inter-VM page sharing can be enabled with the help of salting. We will look at the salting process later in this post, after discussing the page-sharing concepts.

How does TPS work?

In Part 1: Basics of this series, I noted that memory content is not virtualized, for performance reasons: the content of virtual machine memory, like any virtual memory, is ultimately loaded into host physical memory. Below is the image from that discussion.

ESXi, however, tracks page contents by hashing them: a hash value is generated from the content of each virtual machine physical page (guest address, GA) and stored in a global hash table.

Image: VMware

Each entry in the hash table contains a hash value and the physical page number of a shared page. These entries are used to find page-sharing opportunities as follows: if the hash value of a virtual machine physical page matches an existing entry in the hash table, a bit-by-bit comparison of the page contents is performed to exclude any false match. If the contents are indeed identical, the guest page is remapped to the existing shared physical page in copy-on-write fashion, and the redundant copy is reclaimed.
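Conceptually, the matching flow looks like the following Python sketch (again a toy model under simplifying assumptions, not ESXi code): hash the candidate page, look the hash up in the global table, and confirm with a full comparison before mapping the guest page to the shared copy.

```python
import hashlib

# Global hash table: hash value -> content of a shared physical page.
hash_table: dict[str, bytes] = {}

def try_share(page: bytes) -> bytes:
    """Return an existing identical page if one is found, else register this page."""
    h = hashlib.sha1(page).hexdigest()
    candidate = hash_table.get(h)
    # Bit-by-bit comparison excludes false matches (hash collisions).
    if candidate is not None and candidate == page:
        return candidate  # guest page is remapped (copy-on-write) to the shared page
    hash_table[h] = page  # no match yet: this page becomes a sharing candidate
    return page
```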

I used an example of page sharing between VMs (inter-VM); a similar process applies within a single VM (intra-VM) as well.

NUMA and Transparent Page Sharing

Transparent page sharing is optimised for the NUMA architecture of the ESXi host. On a NUMA-enabled ESXi host, page sharing is performed within NUMA nodes, so each NUMA node has its own set of shared memory pages. As a result, VMs that share memory pages do not access remote memory on another NUMA node.
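Extending the earlier toy model, this per-node scoping could be sketched as one hash table per NUMA node (a hypothetical illustration only; ESXi's actual per-node bookkeeping is far more involved):

```python
import hashlib

# One hash table per NUMA node: sharing never crosses node boundaries,
# so a VM is never remapped to remote memory on another node.
numa_tables: dict[int, dict[str, bytes]] = {0: {}, 1: {}}

def try_share_on_node(page: bytes, node: int) -> bytes:
    table = numa_tables[node]  # consider only pages local to this node
    h = hashlib.sha1(page).hexdigest()
    candidate = table.get(h)
    if candidate is not None and candidate == page:
        return candidate
    table[h] = page
    return page
```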

What is Salting in TPS?

Salting allows more granular control over which virtual machines participate in TPS. The host configuration option Mem.ShareForceSalting was introduced to enable or disable salting. With salting in effect, virtual machines can share pages only if both the salt value and the contents of the pages are identical.

By default, Mem.ShareForceSalting = 2 is configured on the ESXi host and each virtual machine has a different salt. This means page sharing does not occur across virtual machines (inter-VM TPS) and only happens inside a virtual machine (intra-VM TPS).

When salting is enabled (Mem.ShareForceSalting = 1 or 2), in order to share a page between two virtual machines both the salt and the content of the page must be the same. The salt value is configurable per virtual machine: you can specify it manually in the virtual machine's .vmx file with the option sched.mem.pshare.salt. If this option is not present in the .vmx file, the value of the vc.uuid option is taken as the default. Since vc.uuid is unique to each virtual machine, by default TPS happens only among the pages belonging to a particular virtual machine (intra-VM).
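For reference, these are the two knobs involved. The host option can be set with esxcli (or via Advanced System Settings); the per-VM salt goes into the .vmx file. The salt string "web-tier" below is just a placeholder of my choosing:

```
# Host-wide setting, from the ESXi Shell or SSH:
#   0 = salting disabled: inter-VM TPS among all VMs (pre-vSphere 6.0 behaviour)
#   1 = inter-VM TPS only among VMs whose salt values match
#   2 = default: effectively intra-VM TPS only
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 1

# Per-VM salt, added to the virtual machine's .vmx file; VMs configured
# with identical salt strings are allowed to share pages with each other:
sched.mem.pshare.salt = "web-tier"
```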

The following table shows how these settings work together to determine how TPS operates for individual VMs:

| Mem.ShareForceSalting (host setting) | sched.mem.pshare.salt (per-VM setting) | vc.uuid (per-VM setting) | Salt value of VM | TPS between VMs (inter-VM) | TPS within a VM (intra-VM) |
| --- | --- | --- | --- | --- | --- |
| 0 | Ignored | Ignored | 0 | Yes, among all VMs on the host | Yes |
| 1 | Present | Ignored | sched.mem.pshare.salt | Only among VMs with the same salt | Yes |
| 1 | Not present | Ignored | 0 | Yes, among all VMs | Yes |
| 2 | Present | Ignored | sched.mem.pshare.salt | Only among VMs with the same salt | Yes |
| 2 (default) | Not present (default) | Present (default) | vc.uuid | No inter-VM TPS | Yes |
| 2 | Not present | Not present | Random number | No inter-VM TPS | Yes |

To determine how effective memory sharing is for a given workload, use esxtop to observe the actual savings: the information is in the PSHARE field of the memory screen in interactive mode.
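In an interactive esxtop session, press m to switch to the memory screen; the PSHARE line then summarises the sharing totals. The numbers below are purely illustrative:

```
PSHARE/MB: 1116 shared, 443 common: 673 saving
```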


Wrapping Up:

So in this post we discussed what TPS is and how it works. TPS is not enabled by default for inter-VM sharing, but it can be enabled with salting. As discussed in the previous post, TPS runs in all memory states of an ESXi host; however, in the high memory state, TPS runs only on small memory pages (4 KB).

Large memory pages (2 MB) are not shared directly. From the clear memory state onwards, large memory pages are broken into small memory pages before they are shared.

An ESXi host will not swap out large pages. During host swapping, large pages are broken into small pages (4 KB) so that pre-generated hashes can be used to share the small pages before they are swapped out.
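A small Python sketch of the split (illustration only, using the 4 KB/2 MB page sizes from the text):

```python
SMALL_PAGE = 4 * 1024          # 4 KB small page
LARGE_PAGE = 2 * 1024 * 1024   # 2 MB large page

def break_large_page(large_page: bytes) -> list[bytes]:
    """Split one 2 MB large page into 512 contiguous 4 KB small pages."""
    assert len(large_page) == LARGE_PAGE
    return [large_page[i:i + SMALL_PAGE] for i in range(0, LARGE_PAGE, SMALL_PAGE)]

small_pages = break_large_page(bytes(LARGE_PAGE))
print(len(small_pages))  # 512 -> each small page is now a sharing candidate
```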

In systems with hardware-assisted memory virtualization (Intel EPT and AMD RVI), ESXi does not proactively share large pages, because the probability of finding two large pages with identical contents is low, and the overhead of performing a bit-by-bit comparison for a 2 MB page is much higher than for a 4 KB page.

When using EPT or RVI systems, the esxtop tool might show zero or few shared pages, because TPS shares small pages (4 KB) while EPT and RVI systems use large pages (2 MB).

That is all for this post. See you folks in the next one, on Ballooning.
