Tuesday, February 18, 2014

Multi-NIC vMotion on ESXi 5.5

What is Multi-NIC vMotion?

Multi-NIC vMotion allows you to send multiple vMotion streams (even when migrating a single virtual machine) and, if configured properly, provides higher overall throughput for the vMotion process. The configuration is straightforward: the vMotion service is enabled on multiple VMkernel adapters, and each VMkernel adapter is bound to a single physical interface. With multiple gigabit interfaces, vMotion throughput scales roughly linearly with the number of adapters used; with 10G you will very likely hit other performance barriers once more than two 10G interfaces are involved.

This feature was added in ESXi 5.0 and has remained mostly unchanged since. In my experience it is not a heavily used feature, although I depend on it in our production environment and have been running it since early 2012, first on 5.0 and now on 5.5.

Why do I need it?

There are two scenarios where the available throughput for vMotion comes into play. The first is simple: hosts with a large amount of RAM and a very large number of VMs. The second can be a little more complicated: virtual machines under heavy load that are dirtying memory pages very rapidly.

The benefit of multi-NIC vMotion when dealing with large hosts with many virtual machines is obvious. We have many hosts with 1TB of RAM, and a few hosts with 2TB of RAM. Placing these hosts into maintenance mode would take FOREVER over a single gig interface, and quite some time over a single 10G interface.

The second use case is migrating a single large VM under heavy load. During vMotion the guest's memory is copied, then a delta is copied containing the pages that were changed or "dirtied" during the first copy. If the guest is dirtying memory pages faster than they can be copied, Stun During Page Send (SDPS) kicks in and slows the rate at which the guest OS can dirty memory pages so the copy can finish. This stun has caused us some problems with big database servers, and the only way to avoid it is to throw more bandwidth at vMotion.
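To put rough numbers on the second case (the dirty rate here is an illustrative assumption, not a measurement):

one 10GbE vMotion link   ≈ 1.25 GB/s of copy bandwidth
guest dirtying pages at  ≈ 1.5 GB/s   -> the delta never shrinks, so SDPS stuns the guest
two 10GbE vMotion links  ≈ 2.5 GB/s   -> the copy outruns the dirty rate and the migration converges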

Dive into configuration after the break.

How do I turn it on?

For the "how to" I'll cover the VMware distributed switch. If you want to run multi-NIC vMotion on the VMware standard switch, it should be simple to adapt from the distributed switch instructions.

The first step to enable multi-NIC vMotion is to create additional VMkernel interfaces, and enable the vMotion service on them. These additional interfaces should be placed in the same IP subnet as the existing VMkernel vMotion interfaces. In my case I already have an existing VMkernel interface for vMotion in the vMotion-A port group with an IP address of 192.168.2.186. We will be adding a second port group and VMkernel interface for Multi-NIC vMotion.
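If you want a quick host-side sanity check of the existing vMotion interface and its subnet before adding the second one, the ESXi shell will show it (nothing here is specific to multi-NIC vMotion):

# list all VMkernel interfaces, then their IPv4 settings
esxcli network ip interface list
esxcli network ip interface ipv4 get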

1. Create a second vMotion port group

From the Networking tab, click the "Create a New Distributed Port Group" icon and create a new port group named vMotion-B, using the same VLAN as the vMotion-A port group.


I'll create the port group with default settings, simply setting the name to vMotion-B and the VLAN ID. I will come back later and modify the failover settings.

2. Create an additional vMotion VMkernel adapter. 


Return to the Hosts and Clusters tab and navigate to the VMkernel adapter settings for the host. Click Add Host Networking to add the second VMkernel interface.


Select VMkernel Network Adapter and click Next. Click Browse and select the newly created vMotion-B port group.


Check the box to enable vMotion on the VMkernel adapter and click Next.


Provide an IP address on the same IP subnet as the existing vMotion VMkernel adapter, click Next, and then click Finish to create the adapter.

This host now has two VMkernel adapters with vMotion enabled, each on a separate port group. At this point multi-NIC vMotion is technically enabled, but an important step is missing: the port groups must be configured to use separate physical NICs by editing their failover settings. If the switch is left to pin the port groups to physical NICs automatically, all vMotion traffic could still be routed over the same physical NIC, so each port group must be pinned to a specific uplink.
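Incidentally, the VMkernel adapter part of this (step 2) can also be done from the ESXi shell instead of the Web Client. Treat this as a sketch: the DVS name, the dvPort ID, and the .187 address are placeholders from my environment, the dvPort must be a free port inside the vMotion-B port group, and the vMotion tag namespace is only in esxcli on 5.1 and later:

# attach a new VMkernel interface to a free dvPort in the vMotion-B port group
esxcli network ip interface add --interface-name=vmk2 --dvs-name=dvSwitch0 --dvport-id=130

# give it a static address in the same subnet as the existing vMotion interface
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.2.187 --netmask=255.255.255.0 --type=static

# tag the new interface for vMotion traffic
esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion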

3. Configure Active / Standby Physical NICs for each port group.

From the Networking tab, select the vMotion-A port group and click Edit.

Select "Teaming and failover" on the left. Change the load balancing method to "Use explicit failover order" and set Uplink 1 as active with all other uplinks as standby. Click OK.


Repeat the process for the vMotion-B port group, but set Uplink 2 as active with all other uplinks as standby.

The host now has two VMkernel interfaces, each in a separate port group, and each port group is mapped to a specific physical NIC, ensuring vMotion traffic uses both NICs. As hosts are added to the cluster, the same vMotion-A and vMotion-B port groups can be used for each host on the distributed switch.
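To confirm the pinning actually took effect on a host, esxtop is handy: press "n" for the network view and check which physical NIC each vmk port is currently using (the TEAM-PNIC column on the 5.x builds I've used). vmk1 and vmk2 should show different vmnics.

# network view, then look at the TEAM-PNIC column for the vmk rows
esxtop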

Monitoring vMotion Throughput

SSH into your host and tail the vmkernel.log file (tail -f /var/log/vmkernel.log) to monitor and troubleshoot vMotion. You will get throughput numbers and see which VMkernel adapters are paired on each host. The adapter pairings can be very useful when troubleshooting Multi-NIC vMotion.
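If the full log is too noisy, filtering it down to migration-related lines makes the throughput and pairing messages easier to spot:

tail -f /var/log/vmkernel.log | grep -iE 'migrate|vmotion'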

Next week I'll grab some performance metrics from our converged networking environment with four 10G vMotion adapters (shared with other traffic) and see where the "speed limit" is on 5.5. On 5.0 we were hitting 18Gbps to 22Gbps for a vMotion migration with jumbo frames off.

3 comments:

'Binding' each portgroup to a vmnic makes sense. However, what about the 'binding' from kernel to portgroup?
Let's say I want to vmotion (2) vms. The kernel chooses the portgroup to carry out my request. But can't the kernel simply choose vmotion-A twice?
