ESXi provides a memory compression cache to improve virtual machine performance when you use memory overcommitment. If the virtual machine's memory usage approaches the level at which host-level swapping would be required, ESXi uses memory compression to reduce the number of memory pages it needs to swap out. Because decompression latency is much smaller than swap-in-from-disk latency, compressing memory pages has significantly less impact on performance than swapping out those pages.
Let's see how compression helps improve the performance of virtual machines running on an over-committed ESXi host. The video below from VMware was made for an older version of ESXi, but it still gives a good idea of memory compression's impact on VM performance.
How Memory Compression Works:
Memory compression is enabled by default. You can disable it through the advanced configuration settings (Mem.MemZipEnable), and you can also set the maximum size of the compression cache through the Advanced Settings.
Two types of memory pages, listed below, are candidates for compression:
Large Pages (2MB)
Small Pages (4KB)
Note 1: ESXi does not compress 2MB large pages directly. Instead, a 2MB large page is first broken into 4KB pages, which are then compressed into 2KB pages.
Note 2: If a page's compression ratio is greater than 75%, ESXi stores the compressed page in a 1KB quarter-page slot.
A page must meet both of the following conditions to be considered for compression:
It is already marked for swapping out to disk, AND
It can be compressed by at least 50%.
Any page that does not meet both criteria is swapped out to disk.
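As a sketch, the placement rules above (the 50% compression threshold and the 75% quarter-page rule) can be expressed in Python. The helper name `placement` is mine for illustration, not an ESXi API:

```python
PAGE_SIZE = 4096  # a small guest page (4 KB)

def placement(compressed_size: int) -> str:
    """Decide where a 4 KB swap-candidate page ends up, given its
    compressed size in bytes (illustrative only, not an ESXi API)."""
    saved = 1 - compressed_size / PAGE_SIZE  # fraction of space saved
    if saved > 0.75:
        return "1KB quarter-page slot"   # compresses better than 75%
    if saved >= 0.50:
        return "2KB half-page slot"      # meets the 50% threshold
    return "swapped to disk"             # compresses poorly

print(placement(900))    # saves ~78% -> "1KB quarter-page slot"
print(placement(1800))   # saves ~56% -> "2KB half-page slot"
print(placement(3000))   # saves ~27% -> "swapped to disk"
```

Note that a page compressed to exactly 2048 bytes saves exactly 50%, so it still qualifies for the half-page slot.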
Let’s understand how compression works with an example.
Let's assume that ESXi needs to reclaim 8KB of physical memory (two 4KB pages) from a virtual machine. With host swapping, the two swap candidate pages, A and B, are swapped directly to disk (Image A).
With compression, a swap candidate page is compressed and stored in a 2KB slot in a per-VM compression cache. Each compressed page therefore yields 2KB of memory for ESXi to reclaim, so to reclaim 8KB of physical memory, four swap candidate pages need to be compressed (Image B).
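The arithmetic in this example can be checked with a few lines of Python. This is just the reclaim math from the text, assuming 4KB pages and 2KB compressed slots:

```python
PAGE = 4096  # size of a 4 KB swap-candidate page
SLOT = 2048  # space a compressed page still occupies in the cache

def pages_needed(target_bytes: int, use_compression: bool) -> int:
    """Swap-candidate pages that must be processed to reclaim target_bytes."""
    # Swapping frees the whole page; compression frees only PAGE - SLOT.
    freed_per_page = (PAGE - SLOT) if use_compression else PAGE
    return -(-target_bytes // freed_per_page)  # ceiling division

print(pages_needed(8192, use_compression=False))  # 2 pages swapped out (Image A)
print(pages_needed(8192, use_compression=True))   # 4 pages compressed (Image B)
```

The trade-off is visible in the numbers: compression has to touch twice as many pages to free the same amount of memory, but each of those pages can later be recovered by a fast decompression instead of a slow disk read.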
If a memory request comes in for a compressed page, the page is decompressed and pushed back into guest memory. The page is then removed from the compression cache.
What is the Per-VM Compression Cache:
The memory for the compression cache is not allocated separately as extra overhead memory. The compression cache starts at zero size while host memory is undercommitted and grows as virtual machine memory starts to be swapped out.
If the compression cache is full, one compressed page must be replaced to make room for a new compressed page. The page that has not been accessed for the longest time is decompressed and swapped out; ESXi never swaps out compressed pages.
If the pages belonging to compression cache need to be swapped out under severe memory pressure, the compression cache size is reduced and the affected compressed pages are decompressed and swapped out.
The maximum compression cache size is important for maintaining good VM performance. Since the compression cache counts toward the VM's guest memory usage, a very large compression cache may waste VM memory and create unnecessary host memory pressure.
In vSphere 5.0, the default maximum compression cache size is conservatively set to 10% of the configured VM memory size. This value can be changed through Advanced Settings by changing the value of Mem.MemZipMaxPct.
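As a quick worked example of that default, the upper bound on the cache is just a percentage of the VM's configured memory. The helper name below is mine, for illustration only:

```python
def max_cache_mb(vm_memory_mb: int, mem_zip_max_pct: int = 10) -> float:
    """Maximum compression cache size in MB, given the Mem.MemZipMaxPct
    setting (vSphere 5.0 default: 10%). Illustrative helper, not an API."""
    return vm_memory_mb * mem_zip_max_pct / 100

print(max_cache_mb(4096))      # a 4 GB VM may use up to 409.6 MB of cache
print(max_cache_mb(4096, 5))   # lowering the setting to 5% halves that
```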