Saturday, 21 May 2016

Supported coexistence deployments for Exchange 2016 and the list of supported Outlook clients

Exchange Server 2007 and earlier versions:

Coexistence with Exchange 2007 or earlier is not supported.

Exchange Server 2010:

Supported with Update Rollup 11 for Exchange 2010 SP3 or later on all Exchange 2010 servers in the organization, including Edge Transport servers.

Exchange Server 2013:

Supported with Exchange 2013 Cumulative Update 10 or later on all Exchange 2013 servers in the organization, including Edge Transport servers.

Mixed Exchange 2010 and Exchange 2013 organization:

Supported with the following minimum versions of Exchange:
  • Update Rollup 11 for Exchange 2010 SP3 or later on all Exchange 2010 servers in the organization, including Edge Transport servers.
  • Exchange 2013 Cumulative Update 10 or later on all Exchange 2013 servers in the organization, including Edge Transport servers.

Hybrid Deployment:

Exchange 2016 supports hybrid deployments with Office 365 tenants that have been upgraded to the latest version of Office 365.

Supported Clients:

Exchange 2016 and Exchange Online support the following versions of Outlook:
  • Outlook 2016
  • Outlook 2013
  • Outlook 2010 with KB2965295
  • Outlook for Mac for Office 365
  • Outlook for Mac 2011
  

Thursday, 19 May 2016

VMware Memory Reclamation: Memory Compression Explained PART5

Do check my previous articles on TPS and ballooning in this series on VMware memory reclamation; compression kicks in only after TPS and ballooning have run.

ESXi provides a memory compression cache to improve virtual machine performance when you use memory overcommitment.

If a virtual machine's memory usage reaches the level at which host-level swapping would be required, ESXi uses memory compression to reduce the number of memory pages it needs to swap out. Because decompression latency is much smaller than the latency of swapping a page back in from disk, compressing memory pages has significantly less impact on performance than swapping those pages out.
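As a rough back-of-the-envelope illustration of why this matters, assume a disk swap-in costs a few milliseconds while in-memory decompression costs tens of microseconds (assumed orders of magnitude, not measured ESXi figures):

# Illustrative, assumed latencies -- not measured ESXi figures.
DISK_SWAP_IN_US = 5_000   # ~5 ms to page in from spinning disk
DECOMPRESS_US = 50        # ~50 microseconds to decompress a 4KB page in RAM

faults = 1_000            # faults that touch reclaimed pages
print(f"swap-in cost:    {faults * DISK_SWAP_IN_US / 1e6:.2f} s")   # 5.00 s
print(f"decompress cost: {faults * DECOMPRESS_US / 1e6:.3f} s")     # 0.050 s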

Let's see how compression helps improve the performance of virtual machines running on an over-committed ESXi host. The video below from VMware covers an older version of ESXi, but it still gives a good idea of memory compression's impact on VM performance.




How Memory Compression Works:


Memory compression is enabled by default. You can disable it through the advanced configuration settings (Mem.MemZipEnable), and you can also set the maximum size of the compression cache through the same Advanced Settings.

Two types of memory pages are candidates for compression:
  • Large Pages (2MB)
  • Small Pages (4KB)
Note 1: ESXi does not compress 2MB large pages directly; large pages are first broken down into 4KB pages, which are then compressed to 2KB each.

Note 2: If a page's compression ratio is larger than 75%, ESXi stores the compressed page in a 1KB quarter-page slot instead.


A page must meet both of the following conditions to be considered for compression (see the sketch after this list):
  1. The page is already marked for swapping out to disk, AND
  2. The page can be compressed by at least 50%.
Any page that does not meet both criteria is swapped out to disk.
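To make these rules concrete, here is a minimal Python sketch of the decision logic, using zlib as a stand-in for ESXi's real compressor; the slot sizes follow the notes above, and everything else (names, test data) is illustrative.

import os
import zlib

PAGE_SIZE = 4096      # small page (4KB)
HALF_PAGE = 2048      # 2KB slot: compression ratio of at least 50%
QUARTER_PAGE = 1024   # 1KB quarter-page slot: ratio larger than 75%

def classify_swap_candidate(page: bytes) -> str:
    """Decide the fate of a page that is already marked for swap-out."""
    compressed = zlib.compress(page)   # zlib stands in for ESXi's compressor
    if len(compressed) <= QUARTER_PAGE:
        return "compress into 1KB quarter-page slot"
    if len(compressed) <= HALF_PAGE:
        return "compress into 2KB slot"
    return "swap out to disk"          # cannot reach 50%, not worth caching

print(classify_swap_candidate(bytes(PAGE_SIZE)))       # zero-filled page: quarter-page slot
print(classify_swap_candidate(os.urandom(PAGE_SIZE)))  # random data: swapped out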

Let's understand how compression works with an example.

 
Image: VMware
Let's assume that ESXi needs to reclaim 8KB of physical memory (two 4KB pages) from a virtual machine. With host swapping, the two swap candidate pages, A and B, are swapped directly to disk (Image A).

With compression, a swap candidate page is compressed and stored using 2KB of space in a per-VM compression cache, so each compressed page yields 2KB of memory for ESXi to reclaim. To reclaim 8KB of physical memory, four swap candidate pages therefore need to be compressed (Image B).

If a memory request comes in to access a compressed page, the page is decompressed and pushed back into guest memory, and it is then removed from the compression cache.
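The arithmetic behind the figure, as a quick sketch (sizes taken from the example above):

PAGE_KB = 4                # size of one swap candidate page
COMPRESSED_KB = 2          # space a compressed page occupies in the cache
TARGET_KB = 8              # amount ESXi wants to reclaim

pages_swapped = TARGET_KB // PAGE_KB                       # 2 pages (A and B)
freed_per_compressed_page = PAGE_KB - COMPRESSED_KB        # each compressed page yields only 2KB
pages_compressed = TARGET_KB // freed_per_compressed_page  # 4 pages

print(pages_swapped, pages_compressed)   # -> 2 4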
What is Per-VM Compression Cache:
The memory for the compression cache is not allocated separately as extra overhead memory. The compression cache size starts at zero when host memory is undercommitted and grows as virtual machine memory starts to be swapped out.

If the compression cache is full, one compressed page must be replaced to make room for a new compressed page. The page that has not been accessed for the longest time is decompressed and then swapped out; ESXi never swaps out pages in their compressed form.

If the pages belonging to the compression cache need to be swapped out under severe memory pressure, the compression cache size is reduced and the affected compressed pages are decompressed and swapped out.

The maximum compression cache size is important for maintaining good VM performance. Since the compression cache is accounted against the VM's guest memory usage, a very large compression cache may waste VM memory and unnecessarily create host memory pressure.

In vSphere 5.0, the default maximum compression cache size is conservatively set to 10% of the configured VM memory size. This value can be changed through Advanced Settings by changing the value of Mem.MemZipMaxPct.
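Putting the cache behavior together, here is a minimal sketch of such a per-VM compression cache, assuming a least-recently-used replacement policy as described above; it is a conceptual model, not ESXi's actual data structure.

import zlib
from collections import OrderedDict

class CompressionCache:
    """Conceptual per-VM compression cache (illustrative only)."""

    def __init__(self, vm_memory_kb: int, max_pct: int = 10):
        # Default cap mirrors Mem.MemZipMaxPct: 10% of configured VM memory.
        self.capacity_kb = vm_memory_kb * max_pct // 100
        self.used_kb = 0
        self.pages = OrderedDict()   # page_number -> compressed bytes, oldest first

    def insert(self, page_number: int, compressed: bytes, swap_out) -> None:
        size_kb = max(1, len(compressed) // 1024)
        while self.used_kb + size_kb > self.capacity_kb and self.pages:
            # Evict the page unaccessed for the longest time:
            # decompress it, then swap it out (never swap compressed pages).
            old_pn, old_data = self.pages.popitem(last=False)
            self.used_kb -= max(1, len(old_data) // 1024)
            swap_out(old_pn, zlib.decompress(old_data))
        self.pages[page_number] = compressed
        self.used_kb += size_kb

    def access(self, page_number: int) -> bytes:
        # On access the page is decompressed, returned to guest memory,
        # and removed from the cache.
        data = self.pages.pop(page_number)
        self.used_kb -= max(1, len(data) // 1024)
        return zlib.decompress(data)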

Below is the list of articles in this series for further reading.

PART1: Run cycle of reclamation techniques
PART2: Mem.minfreepct and sliding scale method 
PART3: Transparent Page Sharing
PART4: VMware Ballooning 

PART6: Hypervisor Swapping and Host SSD Swap



 

Wednesday, 18 May 2016

Exchange Server 2013 Transport Services

Front End Transport service

This service runs on the Client Access server and acts as a stateless proxy for all inbound and outbound SMTP traffic that is external to the Exchange organization.

The service accepts SMTP connections from other SMTP servers on the Internet, receives messages, and initiates SMTP connections for sending messages. However, this service cannot queue messages. It can filter based on IP connections, domains, senders, or recipients.

Internally, this service communicates only with the Hub Transport service that resides on the Mailbox server role.

Transport service

This service is similar to the Hub Transport server role in Exchange Server 2007 and Exchange Server 2010. It runs on all of the Mailbox servers in an Exchange Server 2013 organization. 

This service handles all internal SMTP flow, and performs message categorization and content inspection. The most important difference between this service and the Hub Transport server role in previous Exchange versions is that the Hub Transport service, in Exchange Server 2013, never communicates directly with the mailbox databases. The Transport service routes messages between the Front End Transport service and the Mailbox Transport service. The Mailbox Transport service, in turn, communicates with the mailbox database.

Mailbox Transport service

The Mailbox Transport service runs on the Mailbox server role. It consists of the components listed below:
  • Mailbox Transport Delivery: 
This service receives SMTP messages from the Hub Transport service and then establishes the Remote Procedure Call (RPC) connection to the mailbox database to deliver the message to the appropriate mailbox. 
  • Mailbox Transport Submission: 
This service works in the opposite direction of the Mailbox Transport Delivery service. While it also connects to the mailbox database over RPC, its purpose is to retrieve messages for sending rather than to deliver them. It then submits the retrieved messages to the Hub Transport service using SMTP. Unlike the Hub Transport service, the Mailbox Transport service cannot perform local message queuing.

Message Flow:
 
  1. Messages coming from the Internet enter the Exchange transport pipeline through a Receive connector on the Front End Transport service on a Client Access server. 
  2. The messages are then routed to the Hub Transport service on a Mailbox server. Messages originating inside the organization reach the Hub Transport service on a Mailbox server directly, through a Receive connector, the Mailbox Transport service, or agent submission.

If you have an Edge Transport server deployed in your perimeter network, Internet mail flow occurs directly between the Hub Transport service on the Mailbox server and the Edge Transport server, without passing through the Front End Transport service on the Client Access server.
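To summarize the flow, here is a small Python sketch of the routing rules described above; the service names are plain strings and the function is a conceptual model, not anything from Exchange itself.

def inbound_path(from_internet: bool, edge_deployed: bool) -> list:
    """Return the ordered list of services an inbound message passes through."""
    path = []
    if from_internet:
        if edge_deployed:
            # The Edge Transport server talks to the Hub Transport service
            # directly, bypassing Front End Transport on the Client Access server.
            path.append("Edge Transport (perimeter network)")
        else:
            path.append("Front End Transport (Client Access server)")
    path.append("Hub Transport service (Mailbox server)")         # categorization, inspection
    path.append("Mailbox Transport Delivery (Mailbox server)")    # RPC into the mailbox database
    return path

print(" -> ".join(inbound_path(from_internet=True, edge_deployed=False)))
print(" -> ".join(inbound_path(from_internet=True, edge_deployed=True)))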

Tuesday, 17 May 2016

Memory Reclamation-Ballooning PART4


Do check my previous article on TPS in this series on VMware memory reclamation; ballooning starts only after TPS has run.


Ballooning, in simple terms, is a process where the hypervisor reclaims memory back from the virtual machines. Ballooning is initiated when the ESXi host is running low on physical memory, that is, when the combined memory demand of the virtual machines is too high for the host to satisfy.

But before I describe ballooning, it is a good idea to understand why we need to reclaim memory from a virtual machine in the first place.

To understand why reclamation is needed, let's first look at how an operating system manages memory allocation in a physical system. The diagram below shows how memory pages are handled by the operating system.



For example, when I open MS Outlook for the first time on my computer, it takes some amount of time to load all the pages of that program. Now let's say I close Outlook but try to re-open it a couple of minutes later; this time I don't have to wait as long, in fact it is noticeably quicker. So what happened in the background?

Well, when I started the application the first time, it loaded all the required pages of that program into memory; these are the active pages, or MRU. When I closed the application, the memory pages that were loaded into MRU were not deleted from memory; instead, the operating system moved them to the LRU, or idle, pages, on the assumption that the application may need them again if it is relaunched, exactly as in my example.
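A toy sketch of this behavior, assuming a simple two-list model (real operating systems use far more elaborate page replacement, so treat every name here as hypothetical):

from collections import OrderedDict

free_list = set(range(100))   # frames not backing anything
mru = OrderedDict()           # active pages: app -> frame
lru = OrderedDict()           # idle pages kept around after an app closes

def launch(app):
    if app in lru:                    # warm start: frame still parked in LRU
        mru[app] = lru.pop(app)
        return "fast (reused idle page)"
    mru[app] = free_list.pop()        # cold start: load from disk into a free frame
    return "slow (loaded from disk)"

def close(app):
    lru[app] = mru.pop(app)           # not freed -- parked on the idle list

print(launch("outlook"))   # slow (loaded from disk)
close("outlook")
print(launch("outlook"))   # fast (reused idle page)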

Keeping pages in LRU like this is a really good approach to managing memory pages and preserving performance, and it works well on physical systems. The challenges this approach creates for virtual machines are listed below.

  • The hypervisor has no visibility into the free list, LRU, and MRU memory pages that are managed by the operating system of a virtual machine. 
  • So if multiple VMs demand memory resources and later keep memory pages in LRU even after the workload is gone, the result is unnecessary consumption of ESXi host memory, which can cause memory contention when multiple VMs put high demand on memory resources. 
  • On the other hand, the operating system of a virtual machine is also not aware that the ESXi server is under memory contention, because the guest operating system has no visibility into ESXi memory consumption and cannot detect the host's memory shortage.
To overcome host memory contention caused by the issues above, we use the ballooning reclamation technique. The balloon driver (vmmemctl) is loaded into the guest operating system when we install VMware Tools.





In Figure (A), four guest physical pages are mapped in the host physical memory. Two of the pages are used by the guest application and the other two pages (marked by stars) are in the guest operating system free list. Note that since the hypervisor cannot identify the two pages in the guest free list, it cannot reclaim the host physical pages that are backing them. Assuming the hypervisor needs to reclaim two pages from the virtual machine, it will set the target balloon size to two pages. 


After obtaining the target balloon size, the balloon driver allocates two guest physical pages inside the virtual machine and pins them, as shown in Figure (B). Here, “pinning” is achieved through the guest operating system interface, which ensures that the pinned pages cannot be paged out to disk under any circumstances.

Once the memory is allocated, the balloon driver notifies the hypervisor of the page numbers of the pinned guest physical memory so that the hypervisor can reclaim the host physical pages backing them. In Figure (B), these pages are shown in RED and GREEN.

The hypervisor can safely reclaim this host physical memory because neither the balloon driver nor the guest operating system relies on the contents of these pages.

If any of these pages are re-accessed by the virtual machine for some reason, the hypervisor will treat it as normal virtual machine memory allocation and allocate a new host physical page for the virtual machine.

OK. The description above follows the VMware documentation. To understand the same process in simpler terms, let's walk through it step by step.

  • When the ESXi host is under memory contention, it sets a target for the balloon driver. 
  • To meet that target, the balloon driver inside the virtual machine poses as just another application and demands memory from the guest operating system. 
  • Treating this as a normal application request, the guest operating system starts allocating memory pages to the balloon driver from the free list and LRU pages, and from MRU pages as well if that is what it takes to satisfy the demand. 
  • As the balloon driver receives memory pages from the guest operating system, it inflates from its initial size, just like an actual balloon when we pump air into it. 
  • Memory pages consumed by the balloon driver are pinned (the red and green pages in the figure above) so that they are not swapped out. 
  • The balloon driver communicates with the hypervisor through a private channel and informs the hypervisor about the pinned pages. 
  • The hypervisor then reclaims these pages. Later, setting a lower target causes the balloon driver to deflate back to its initial state, just like an actual balloon when we let the air out. 
  • The image below describes this process graphically, and a code sketch follows it.
Image: VMware
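Here is that sketch, a minimal Python model of the inflate/deflate cycle; the class and method names are illustrative, not real vmmemctl internals.

class GuestOS:
    def __init__(self, total_pages):
        self.free_list = set(range(total_pages))
        self.pinned = set()

    def allocate_and_pin(self, n):
        # The guest OS treats the balloon driver like any other application
        # and hands out pages from its free/LRU lists; pinned pages can
        # never be paged out to disk.
        pages = {self.free_list.pop() for _ in range(n)}
        self.pinned |= pages
        return pages

    def release(self, pages):
        self.pinned -= pages
        self.free_list |= pages

class BalloonDriver:
    def __init__(self, guest):
        self.guest = guest
        self.balloon = set()

    def set_target(self, target_pages, reclaim):
        if target_pages > len(self.balloon):          # inflate
            new = self.guest.allocate_and_pin(target_pages - len(self.balloon))
            self.balloon |= new
            reclaim(new)   # private channel: "these pages are safe to take back"
        else:                                         # deflate
            excess = set(list(self.balloon)[:len(self.balloon) - target_pages])
            self.balloon -= excess
            self.guest.release(excess)

guest = GuestOS(total_pages=8)
driver = BalloonDriver(guest)
driver.set_target(2, reclaim=lambda p: print("hypervisor reclaims pages", sorted(p)))
driver.set_target(0, reclaim=lambda p: None)   # pressure gone: balloon deflates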


The ESXi host will try to reclaim memory from virtual machines according to the target it has computed. How much memory is reclaimed from each VM is determined with the help of the idle memory tax (Mem.IdleTax).


Just as you pay more tax when you earn more money, any VM holding a larger amount of idle memory is charged (taxed) more. :P
 
If a virtual machine is not actively using all of its currently allocated memory, ESXi charges more for idle memory than for memory that is in use. 
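How the tax translates into reclamation decisions is captured by the adjusted shares-per-page ratio from VMware's memory resource management paper: rho = S / (P * (f + k*(1 - f))), where f is the fraction of pages that are active and k = 1/(1 - tax_rate); the default tax rate (Mem.IdleTax) is 75%, giving k = 4. A quick sketch, with the two example VMs as assumptions:

def shares_per_page(shares, pages, active_fraction, tax_rate=0.75):
    """Adjusted shares-per-page ratio: rho = S / (P * (f + k*(1 - f))).

    With the default 75% tax rate, k = 4, so an idle page "costs" four
    times as much as an active one when picking reclamation victims.
    """
    k = 1.0 / (1.0 - tax_rate)
    return shares / (pages * (active_fraction + k * (1.0 - active_fraction)))

# Two VMs with equal shares and equal memory; VM B is mostly idle.
print(shares_per_page(1000, 4096, active_fraction=0.9))  # VM A: higher ratio, keeps its memory
print(shares_per_page(1000, 4096, active_fraction=0.2))  # VM B: lower ratio, reclaimed first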



I hope this clarifies the mystery around ballooning. 


Below is the list of articles in this series for further reading.

PART1: Run cycle of reclamation techniques
PART2: Mem.minfreepct and sliding scale method 
PART3: Transparent Page Sharing

PART5: VMware Memory Compression
PART6: Hypervisor Swapping and Host SSD Swap










