Sunday, 22 May 2016

VMware Memory Reclamation: Hypervisor Swapping PART6

ESXi employs hypervisor swapping to reclaim memory when the other memory reclamation techniques, such as ballooning, transparent page sharing, and memory compression, are not sufficient.

The speed of Transparent Page Sharing (TPS) depends on how many memory pages can actually be shared, and ballooning depends on the guest operating system responding to the balloon driver's memory allocation requests. Because of this, these techniques may take time to reclaim memory.

Unlike these techniques, hypervisor swapping is guaranteed to reclaim a specific amount of memory within a specific amount of time.

At virtual machine power-on, the hypervisor creates a separate swap file (.vswp) for the virtual machine, by default inside the virtual machine's folder unless the swap file location has been changed. The hypervisor uses this file to swap out virtual machine physical memory directly to disk. This frees host physical memory, which can then be used by other virtual machines.
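The regular .vswp file is sized to cover all of the VM's memory that is not protected by a reservation, since reserved memory never needs to be swapped out. A minimal sketch of that sizing rule (the function name is mine; sizes are in MB):

```python
def vswp_size_mb(configured_mem_mb: int, reservation_mb: int) -> int:
    """Size of the per-VM .vswp file created at power-on.

    The hypervisor must be able to swap out every page that is not
    backed by a reservation, so the swap file covers the difference
    between configured memory and the memory reservation.
    """
    if reservation_mb > configured_mem_mb:
        raise ValueError("reservation cannot exceed configured memory")
    return configured_mem_mb - reservation_mb

# A VM configured with 4096 MB and a 1024 MB reservation
# gets a 3072 MB swap file at power-on.
print(vswp_size_mb(4096, 1024))  # 3072
```

This is also why setting a full memory reservation results in a zero-byte swap file.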

However, hypervisor swapping is used only as a last resort to reclaim memory from a virtual machine, because it can significantly impact virtual machine performance due to the known issues listed below:

  • High swap-in latency
  • Page selection problems, because the hypervisor has no visibility into how the guest OS uses its memory pages
  • Double paging problems
ESXi employs the following methods to address these limitations and improve hypervisor swapping performance:
  • Memory compression: Reduces the number of pages that need to be swapped out while reclaiming the same amount of host memory. For more details on how compression works, see my earlier article in this series (PART5).
  • SSD swapping: If an SSD device is installed in the host, we can choose to configure a host SSD cache. Using swap to host cache does not mean placing regular swap files on SSD-backed datastores: even with swap to host cache enabled, the host still creates the regular swap files. ESXi uses the host cache (SSD) to store swapped-out pages first, instead of putting them directly into the regular hypervisor swap file (.vswp). On the next access to a page in the host cache, the page is pushed back into guest memory and removed from the cache. Since SSD read latency, normally around a few hundred microseconds, is much lower than typical disk access latency, this optimization significantly reduces swap-in latency and hence greatly improves application performance in high memory overcommitment scenarios.
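The host-cache behaviour described above can be sketched as a two-tier lookup, where the SSD cache sits in front of the regular .vswp file. This is a conceptual model only, not ESXi's actual data structures; the class, the dictionaries, and the 10-page cache limit are all illustrative assumptions:

```python
class SwapTier:
    """Conceptual two-tier swap store: fast SSD host cache + slow .vswp."""

    def __init__(self, cache_capacity=10):
        self.host_cache = {}    # page number -> data, SSD-backed (fast)
        self.regular_swap = {}  # page number -> data, .vswp on disk (slow)
        self.cache_capacity = cache_capacity

    def swap_out(self, page, data):
        # Swapped-out pages land in the SSD host cache first; they go
        # to the regular swap file only when the cache is full.
        if len(self.host_cache) < self.cache_capacity:
            self.host_cache[page] = data
        else:
            self.regular_swap[page] = data

    def swap_in(self, page):
        # A page found in the host cache is returned to guest memory
        # and removed from the cache; otherwise it is read from the
        # regular swap file.
        if page in self.host_cache:
            return self.host_cache.pop(page)
        return self.regular_swap.pop(page)

tier = SwapTier()
tier.swap_out(7, b"page-data")
assert tier.swap_in(7) == b"page-data"
assert 7 not in tier.host_cache  # removed from the cache after swap-in
```

The key point the model captures is that a swap-in from the host cache both satisfies the access quickly and frees that cache slot for future swap-outs.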

How does SSD swap work?

The host cache on the SSD is carved into 1GB .vswp file chunks. As shown in the figure below, a 10GB SSD cache has ten .vswp files created inside it. These files can be seen by browsing the datastore. Unlike the regular .vswp files on shared storage, these chunks are not specific to any VM: each VM still has its own regular .vswp file inside its VM folder on shared storage, while the .vswp chunks inside the SSD cache are shared by all virtual machines whenever swapping is needed.
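The chunk layout above is simple arithmetic: the cache is allocated as whole 1GB files. A small sketch of that rule (the function name and the floor-to-whole-chunks behaviour are my assumptions based on the figure):

```python
import math

CHUNK_GB = 1  # the host cache is backed by 1 GB .vswp chunk files

def host_cache_chunks(cache_size_gb: float) -> int:
    """Number of whole 1 GB .vswp chunk files backing the host cache."""
    return math.floor(cache_size_gb / CHUNK_GB)

# A 10 GB SSD cache is backed by ten 1 GB .vswp files,
# matching the figure above.
print(host_cache_chunks(10))  # 10
```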

Below is the list of articles in this series for further reading.

PART1: Run cycle of reclamation techniques
PART2: Mem.minfreepct and sliding scale method 
PART3: Transparent Page Sharing 
PART4: VMware Ballooning 
PART5: VMware Memory Compression
