VMware vSphere 7.x Memory Reclamation-Part 3: Transparent Page Sharing (TPS)

Previously in Part 1: Basics and Part 2: MinFree, we discussed memory reclamation, why it is needed, ESXi memory states, and the sliding-scale method for calculating the MemMinFreePct value. In this post, we will explore the memory reclamation technique that shares memory pages, called Transparent Page Sharing (TPS). On an ESXi host, many workloads present opportunities for sharing memory across virtual machines as well as within a single virtual machine.

In other words, transparent page sharing is conceptually similar to deduplication. With page sharing, instead of keeping multiple duplicate copies of a memory page, the hypervisor finds pages with identical content and backs them with a single copy in host physical memory that is shared by multiple workloads. As a result, total host memory consumption is reduced and memory overcommitment can be accommodated.

However, since vSphere 6.0, inter-VM transparent page sharing is disabled by default due to security concerns. The default page sharing scope is restricted to intra-virtual-machine memory sharing. This means page sharing does not occur across virtual machines and only occurs inside a single virtual machine. If required, inter-VM page sharing can be enabled with the help of salting. We will talk about the salting process later in this post, after discussing page sharing concepts.

How does TPS work?

In Part 1: Basics of this series, I noted that the contents of virtual machine memory (guest physical memory) are ultimately backed by host physical memory. Below is the image from that discussion.

To find identical pages, ESXi generates a hash value for the content of each candidate memory page. The hash value is computed from the contents of the virtual machine's guest physical page and stored in a global hash table.

Image: VMware

Each entry in the hash table includes a hash value and the physical page number of a shared page. These entries are used to find page sharing opportunities as follows: if the hash value of a virtual machine's physical page matches an existing entry in the hash table, a bit-by-bit comparison of the page contents is performed to exclude any false match.

  • As in the above image, two memory pages are present in ESXi host memory, say a green page and a red page with hash values 'A' and 'B' respectively.
  • We will focus on the red memory page for this example. Let's say VM2 already has the red memory page stored in ESXi host memory with hash value 'B'.
  • Now assume that VM1 is trying to load a memory page with content identical to that of VM2's memory page.
  • At this time, the ESXi host calculates the hash value for the content of VM1's memory page, and since the content is identical the hash value will also be 'B'. This hash value is already present in the hash table for VM2's page.
  • Since the hash values of VM1's and VM2's memory pages are the same, a bit-by-bit comparison of the page contents is performed.
  • If the content of VM1's page matches bit-by-bit with the content of VM2's existing page, VM1, instead of getting another host memory page, is pointed to VM2's existing page. So basically, the memory page is shared.
  • This remapping is invisible to the virtual machine and inaccessible to the guest operating system. Because of this invisibility, sensitive information cannot be leaked from one virtual machine to another.
  • Any attempt to write to the shared pages will generate a minor page fault. In the page fault handler, the hypervisor will transparently create a private copy of the page for the virtual machine and remap the affected guest physical page to this private copy. A standard copy-on-write (CoW) technique is used to handle writes to the shared host physical pages.
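The hash-lookup, bit-by-bit verification, and copy-on-write steps above can be sketched as a toy model in Python. This is only an illustration of the technique, not ESXi's implementation; all class and method names here are invented:

```python
import hashlib

PAGE_SIZE = 4096  # small page size used by TPS (4 KB)

class PageSharer:
    """Toy model of TPS: a hash table keyed by page-content hash,
    bit-by-bit verification on a hash hit, and copy-on-write on writes."""

    def __init__(self):
        self.hash_table = {}   # content hash -> host page id
        self.host_pages = {}   # host page id -> page content (bytes)
        self.refcount = {}     # host page id -> number of guest mappings
        self.next_id = 0

    def map_page(self, content: bytes) -> int:
        """Back a guest page with host memory, sharing an existing page
        when its content is identical."""
        h = hashlib.sha1(content).hexdigest()
        candidate = self.hash_table.get(h)
        # Hash hit: confirm with a full comparison to rule out false matches.
        if candidate is not None and self.host_pages[candidate] == content:
            self.refcount[candidate] += 1
            return candidate            # share the existing host page
        # No match: allocate a new host page and record its hash.
        page_id = self.next_id
        self.next_id += 1
        self.host_pages[page_id] = content
        self.refcount[page_id] = 1
        self.hash_table[h] = page_id
        return page_id

    def write_page(self, page_id: int, new_content: bytes) -> int:
        """A write to a shared page triggers copy-on-write: the writer gets
        a private copy while other mappings keep the original page."""
        if self.refcount[page_id] > 1:
            self.refcount[page_id] -= 1
            return self.map_page(new_content)
        self.host_pages[page_id] = new_content
        return page_id
```

For example, two VMs mapping an identical zero-filled page end up pointing at the same host page, and a later write by one of them silently diverges into a private copy.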

I used an inter-VM page sharing example here; however, a similar process applies to intra-VM sharing as well.

NUMA and Transparent Page Sharing

Transparent Page Sharing is optimised for the NUMA architecture of the ESXi host. On a NUMA-enabled ESXi host, page sharing is performed within NUMA nodes, so each NUMA node has its own set of shared memory pages. VMs that share memory pages therefore do not access remote memory on another NUMA node.
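As a rough sketch of this idea (hypothetical code, not ESXi internals), per-node sharing can be modelled by giving each NUMA node its own hash table, so identical pages on different nodes are never backed by the same copy:

```python
import hashlib

class NumaPageSharer:
    """Toy model: one content-hash table per NUMA node, so page sharing
    never crosses a node boundary (avoiding remote memory access)."""

    def __init__(self, num_nodes: int):
        self.tables = [{} for _ in range(num_nodes)]  # per-node hash tables

    def map_page(self, node: int, content: bytes):
        """Return (node, key) identifying the backing page; sharing only
        happens against pages already present on the same node."""
        h = hashlib.sha1(content).hexdigest()
        table = self.tables[node]
        if h not in table:
            # First copy on this node becomes the backing page.
            table[h] = content
        return (node, h)
```

With this model, the same content mapped on two different nodes produces two separate backing copies, one per node.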

What is Salting in TPS?

Salting allows more granular control over which virtual machines participate in TPS. The host configuration option Mem.ShareForceSalting enables or disables salting. With salting in effect, virtual machines can share pages only if both the salt value and the contents of the pages are identical.

By default, Mem.ShareForceSalting = 2 is configured on the ESXi host and each virtual machine has a different salt. This means page sharing does not occur across virtual machines (inter-VM TPS) and only happens inside a virtual machine (intra-VM TPS).

When salting is enabled (Mem.ShareForceSalting = 1 or 2), in order to share a page between two virtual machines both the salt and the content of the page must be the same. The salt value is a configurable VMX option for each virtual machine: you can manually specify it in the virtual machine's .vmx file with the option sched.mem.pshare.salt. If this option is not present in the .vmx file, the value of the vc.uuid VMX option is taken as the default. Since vc.uuid is unique to each virtual machine, by default TPS happens only among the pages belonging to a particular virtual machine (intra-VM).

The following table shows how the different TPS settings are used together to determine how TPS operates for individual VMs:

| Mem.ShareForceSalting (host setting) | sched.mem.pshare.salt (per-VM setting) | vc.uuid (per-VM setting) | Salt value of VM | TPS between VMs (inter-VM) | TPS within a VM (intra-VM) |
|---|---|---|---|---|---|
| 0 | Ignored | Ignored | 0 | Yes, among all VMs on the host | Yes |
| 1 | Present | Ignored | sched.mem.pshare.salt | Only among VMs with the same salt | Yes |
| 1 | Not present | Ignored | 0 | Yes, among all VMs | Yes |
| 2 | Present | Ignored | sched.mem.pshare.salt | Only among VMs with the same salt | Yes |
| 2 | Not present | Present (default) | vc.uuid | No inter-VM TPS | Yes |
| 2 | Not present | Not present | Random number | No inter-VM TPS | Yes |
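The salt-selection rules in the table can be expressed as a small Python function. This is a sketch of the documented behaviour, not actual ESXi code; the function names are invented:

```python
import random
from typing import Optional

def effective_salt(share_force_salting: int,
                   pshare_salt: Optional[str],
                   vc_uuid: Optional[str]) -> str:
    """Determine a VM's effective salt value per the TPS salting rules.
    Two VMs may share pages only if their effective salts (and the page
    contents) match."""
    if share_force_salting == 0:
        return "0"                      # salting off: all VMs can share
    if share_force_salting == 1:
        # Explicit salt wins; otherwise salt is 0 (all VMs can share).
        return pshare_salt if pshare_salt is not None else "0"
    # share_force_salting == 2 (default)
    if pshare_salt is not None:
        return pshare_salt              # explicit salt in the .vmx file
    if vc_uuid is not None:
        return vc_uuid                  # unique per VM -> intra-VM only
    return str(random.random())         # random salt -> intra-VM only

def can_share_inter_vm(salt_a: str, salt_b: str) -> bool:
    """Inter-VM sharing requires identical effective salts."""
    return salt_a == salt_b
```

For instance, two VMs given the same explicit sched.mem.pshare.salt under the default setting of 2 would again be eligible for inter-VM sharing.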

To determine the effectiveness of memory sharing for a workload, use esxtop to observe the actual savings. The information is in the PSHARE field of the memory screen in esxtop's interactive mode.


  • Use the Mem.ShareScanTime and Mem.ShareScanGHz advanced settings to control the rate at which the system scans memory to identify opportunities for sharing memory.
  • The same salt value can be specified on multiple virtual machines to enable page sharing across those virtual machines.
  • You can also configure sharing for individual virtual machines by setting the sched.mem.pshare.enable option.

Wrapping Up:

So in this post, we discussed what TPS is and how it works. TPS is not enabled by default for inter-VM sharing, but it can be enabled with salting. As discussed in the previous post, TPS runs in all memory states of an ESXi host; however, in the High memory state, TPS runs only on small memory pages (4 KB).

Large memory pages (2 MB) are not shared directly. From the Clear memory state onwards, large memory pages are broken into small memory pages prior to sharing.

An ESXi host will not swap out large pages either: during host swapping, large pages are broken into small pages (4 KB) so that pre-generated hashes can be used to share the small pages before they are swapped out.
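The splitting-and-hashing step can be illustrated with a short sketch (hypothetical code; SHA-1 is used here purely for illustration, as ESXi's actual hash function is internal):

```python
import hashlib

LARGE_PAGE = 2 * 1024 * 1024   # 2 MB large page
SMALL_PAGE = 4 * 1024          # 4 KB small page

def pregenerate_hashes(large_page: bytes) -> list:
    """Break a 2 MB large page into 512 small 4 KB pages and hash each
    one, modelling the pre-generated hashes TPS can use before the small
    pages are shared or swapped out."""
    assert len(large_page) == LARGE_PAGE
    return [hashlib.sha1(large_page[off:off + SMALL_PAGE]).hexdigest()
            for off in range(0, LARGE_PAGE, SMALL_PAGE)]
```

A zero-filled large page, for example, yields 512 identical small-page hashes, which is why such pages are prime sharing candidates.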

In hardware-assisted memory virtualization (Intel EPT and AMD RVI) systems, ESXi will not share large pages because:

  • The probability of finding two large pages having identical contents is low
  • The overhead of doing a bit-by-bit comparison for a 2 MB page is much larger than for a 4KB page

When using EPT or RVI systems, the esxtop tool might show zero or few shared pages, because TPS uses small pages (4 KB) and EPT and RVI use large pages (2 MB).

That is all for this post. See you folks in the next one, on Ballooning.
