This will be the last post in this series. In this post, we discuss hypervisor swapping, the method ESXi turns to after compression. ESXi employs hypervisor swapping to reclaim memory when the other reclamation techniques, ballooning, transparent page sharing, and memory compression, are not sufficient.
The rate at which Transparent Page Sharing (TPS) reclaims memory depends on how many pages can actually be shared, and ballooning depends on the guest operating system responding to memory allocation requests. Unlike these techniques, hypervisor swapping is guaranteed to reclaim a specific amount of memory within a specific amount of time.
When a virtual machine is powered on, the hypervisor creates a swap file for it (vm_name.vswp) inside the virtual machine's folder by default, unless the swap file location has been changed. The hypervisor uses this file to swap virtual machine physical memory out to disk directly, which frees host physical memory so it can be used by other virtual machines.
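As a quick sketch, you can see the swap file by listing the VM's folder on the datastore from the ESXi shell. The datastore and VM names below are examples, not from the original post:

```shell
# List a VM's folder on its datastore (datastore1/web01 are example names).
# The .vswp file is created at power-on and sized to the VM's configured
# memory minus its memory reservation.
ls -lh /vmfs/volumes/datastore1/web01/
# web01.vswp should appear alongside the .vmx and .vmdk files.
```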
However, hypervisor swapping is used only as a last resort, because it can significantly degrade virtual machine performance due to the known issues listed below.
- High swap-in latency
- Poor page selection, since the hypervisor has no visibility into which guest OS pages are active.
To reduce the impact of these limitations, an ESXi host can use a host swap cache, an optional memory reclamation feature that caches a VM's swapped memory pages on local flash storage. Because the pages stay on local flash, the VM avoids the latency of the storage network that would otherwise be traversed when swapping pages to its .vswp file. Host swap cache must be configured explicitly, however.
Memory is reclaimed by moving data out of memory and writing it to backing storage. Because accessing data from backing storage is much slower than accessing memory, it is important to choose carefully where the swapped data is stored.
If a flash storage device such as an SSD is installed in the host, we can configure a host cache on it.
SSD read latency, normally a few hundred microseconds, is much lower than typical disk access latency, so this optimization significantly reduces swap-in latency and greatly improves application performance in high memory overcommitment scenarios.
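The system swap and host cache settings can be inspected and changed from the ESXi command line with `esxcli sched swap system`. The host cache itself is carved out of an SSD-backed datastore in the vSphere Client (host > Configure > Host Cache Configuration); this is just a sketch of the CLI side:

```shell
# Show the current system swap configuration, including whether
# host cache is enabled as a swap location
esxcli sched swap system get

# Allow swapped pages to be placed on the configured host cache
esxcli sched swap system set --hostcache-enabled true
```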
How does system swap work?
Using swap to host cache does not mean placing VMs' swap files on the host cache. Even with swap to host cache enabled, the host still needs to create the regular swap files.
System swap requires at least 1 GB of space. If you allocate more than 1 GB, the space is carved into 1 GB .vswp file chunks inside the system swap location.
As shown in the image above, a 2 GB flash datastore contains two 1 GB xxxx.vswp files. These files can be seen by browsing the datastore.
These xxxx.vswp files are not the per-VM swap files: each VM still has its own regular .vswp file inside its VM folder. The .vswp chunks inside the system swap, by contrast, are shared by virtual machines whenever swapping is needed.
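The shared chunks can also be seen from the ESXi shell by listing the datastore configured for system swap. The datastore name below is an example:

```shell
# Browse the flash datastore used for system swap (example name).
# With 2 GB allocated, two 1 GB .vswp chunk files are present,
# in addition to any VM folders the datastore may hold.
ls -lh /vmfs/volumes/ssd_datastore/
```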
ESXi uses the host cache first to store swapped-out pages instead of writing them directly to the regular VM swap file (VM_Name.vswp). On the next access to a page in the host cache, the page is read back into guest memory and removed from the host cache.
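Swap activity, including pages served by the host cache, can be observed with esxtop. The counter names below are from esxtop's memory view; treat the exact set available as version-dependent:

```shell
# Start esxtop and press 'm' to switch to the memory view
esxtop
# Useful per-VM columns in the memory view:
#   SWCUR            - current swap usage (MB)
#   SWR/s, SWW/s     - swap-in / swap-out rates for the regular .vswp
#   LLSWR/s, LLSWW/s - swap reads/writes served by the host cache
```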
Memory reclamation techniques are used by the ESXi host to accommodate memory overcommitment and, to a great extent, avoid the performance issues associated with swapping. They allow ESXi to take inactive or unused host physical memory away from VMs and give it to other VMs that will actively use it.
Configure the settings for these methods, such as salting for TPS and balloon driver deployment, as required.
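For example, TPS salting is controlled by the Mem.ShareForceSalting advanced setting, which can be read and changed with esxcli:

```shell
# Show the current TPS salting setting.
# 2 (the default) restricts page sharing to within each VM;
# 0 restores inter-VM page sharing as in older ESXi releases.
esxcli system settings advanced list -o /Mem/ShareForceSalting

# Re-enable inter-VM page sharing (weigh the security trade-off first)
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
```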
That is all in this series of posts on Memory reclamation techniques. I hope it was informative.