Friday, February 28, 2014

Building a Nested ESXi Lab on VMware Workstation

If you are studying for the VCAP5-DCA you definitely need a lab. If you are studying for the VCP you probably need a lab too, unless you are in vCenter all day at work. Nested virtualization runs one hypervisor on top of another; a nested ESXi lab runs the ESXi hypervisor on another hypervisor like VMware Workstation. So why build a nested lab instead of a physical lab?

Flexibility - A nested lab on Workstation provides more flexibility than a physical lab. I have both, and I love being able to create another ESXi host in minutes by cloning it from a template. I can also shut down the 5.0 lab I am using to study for the VCAP5-DCA, spin up my 5.5 lab, and show a coworker a new feature.

Cost - A nested lab can be cheaper than a physical lab, especially if you have a box that you can simply upgrade the RAM in. If you build a computer to run a nested lab, the cost could be similar to (or more than) buying used servers from eBay, but the power consumption should be much less. A low-power physical solution like Intel NUCs or Mac Minis combined with a Synology will cost more than building a nested lab.

Portability - A small nested lab can run on a laptop allowing you to study on the road.

What do I need to build a nested lab?

Computer - One that supports VT-x (or AMD-V, the AMD equivalent). Preferably one that also supports EPT; without EPT you will be limited to running 32-bit guest virtual machines inside your nested ESXi instances. The 32-bit restriction isn't a big deal, but it is nice not to have to deal with it. If you are unsure of the virtualization features of your processor you can look them up on the Intel or AMD site. Be aware that these features may not be enabled by default; you may need to turn them on in the BIOS.
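If you happen to have a Linux box handy (or can boot one from a live USB), the CPU flags will tell you what the chip supports before you go digging through spec sheets. A minimal Python sketch, assuming Linux exposes the flags in /proc/cpuinfo as usual:

# Quick check of hardware virtualization support via /proc/cpuinfo.
# 'vmx' = Intel VT-x, 'svm' = AMD-V, 'ept' = Intel Extended Page Tables,
# 'npt' = AMD's equivalent of EPT (Rapid Virtualization Indexing).
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

print("VT-x (vmx):", "vmx" in flags)
print("AMD-V (svm):", "svm" in flags)
print("EPT (ept):", "ept" in flags)
print("NPT (npt):", "npt" in flags)

Keep in mind that a present flag only means the silicon supports the feature; it can still be switched off in the BIOS.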

RAM - Lots of RAM. Did I mention RAM? Can you afford any more RAM? With ESXi 5.0, 8GB of RAM would let you run two ESXi hosts, vCenter, and an Openfiler. ESXi 5.5 brings higher minimum RAM requirements along with all of the new features, so 16GB really becomes the new minimum for two hosts, a vCenter, and an Openfiler. If you want to lab larger scenarios like SRM or NSX you will need 32GB and up.
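To make the sizing concrete, here is the kind of back-of-the-napkin budget I mean. The per-VM allocations are my own rough assumptions, not official figures:

# Rough RAM budget for a two-host ESXi 5.5 nested lab.
# The per-VM numbers are illustrative assumptions, not official minimums.
lab_gb = {
    "esxi-host-1": 4,   # ESXi 5.5 wants at least 4GB to install
    "esxi-host-2": 4,
    "vcenter": 8,       # vCenter Server Appliance, small inventory
    "openfiler": 1,     # shared storage VM
}
print("Allocated to lab VMs:", sum(lab_gb.values()), "GB")   # 17GB

That already overshoots 16GB of physical RAM before the host OS takes its cut, which is where Workstation's memory overcommitment (more on that below) earns its keep.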

I have a Dell Precision T7500 Workstation with 48GB of RAM that I jumped on when an engineer from our HPC group upgraded to a newer model. It has an older processor, the Intel Xeon E5507, but it is quad core and supports VT-x with EPT, so it meets my needs.

VMware Workstation - Fusion will work as well, but I like the interface and memory overcommitment of Workstation. VMware was providing Workstation license keys upon passing the VCP, though I'm not 100% sure if they still are. If you are a VMUG Advantage subscriber, one of the benefits is a discount on the Workstation license.

Dive into configuration after the break.


Tuesday, February 18, 2014

Multi-NIC vMotion on ESXi 5.5

What is Multi-NIC vMotion?

Multi-NIC vMotion allows you to send multiple vMotion streams (even when migrating a single virtual machine), and if configured properly it provides higher overall throughput for the vMotion process. The configuration is straightforward: the vMotion service is enabled on multiple VMkernel adapters, and each VMkernel adapter is associated with a single physical interface. With multiple gigabit interfaces, vMotion throughput scales linearly with the number of adapters used; with 10G you will very likely hit other performance barriers once more than two 10G interfaces are involved.
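To give an idea of what this looks like outside the GUI, here is a minimal pyVmomi sketch that adds a second vMotion VMkernel port on a host and tags it for the vMotion service. The vCenter address, credentials, portgroup name, and IP are made up for illustration, and each vMotion portgroup is assumed to already be pinned to a single active uplink in its teaming policy:

# Minimal pyVmomi sketch: add a second VMkernel port and enable vMotion
# on it. All names and addresses below are hypothetical lab values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()    # lab only; skips cert checks
si = SmartConnect(host="vcenter.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first ESXi host in the inventory (good enough for a lab).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]

# Create a VMkernel adapter on the second vMotion portgroup.
spec = vim.host.VirtualNic.Specification()
spec.ip = vim.host.IpConfig(dhcp=False,
                            ipAddress="192.168.50.12",
                            subnetMask="255.255.255.0")
vmk = host.configManager.networkSystem.AddVirtualNic("vMotion-02", spec)

# Tag the new adapter for the vMotion service.
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)

Disconnect(si)

Repeat per host, once for each additional vMotion interface.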

This feature was added in ESXi 5.0 and remains mostly unchanged. In my experience it is not a heavily used feature, although I depend on it in our production environment and have been running it since early 2012, first on 5.0 and now on 5.5.

Why do I need it?

There are two scenarios where the available throughput for vMotion comes into play. The first is simple: hosts with a large amount of RAM and a very large number of VMs. The second is a little more complicated: virtual machines under heavy load that are dirtying memory pages very rapidly.

The benefit of multi-NIC vMotion when dealing with large hosts with many virtual machines is obvious. We have many hosts with 1TB of RAM, and a few hosts with 2TB. Placing these hosts into maintenance mode would take FOREVER over a single gigabit interface, and quite some time over a single 10G interface.
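To put a number on "FOREVER", here is a quick best-case calculation, assuming a fully saturated link and ignoring dirty-page re-copies and protocol overhead:

# Best-case time to move a host's worth of RAM over a saturated link.
def transfer_hours(ram_tb, gbps):
    bits = ram_tb * 1e12 * 8            # TB -> bits
    return bits / (gbps * 1e9) / 3600   # seconds -> hours

for ram_tb in (1, 2):
    for gbps in (1, 10, 20):            # 20 ~= two 10G NICs with multi-NIC vMotion
        print(f"{ram_tb}TB over {gbps}G: ~{transfer_hours(ram_tb, gbps):.1f} hours")

A 1TB host over a single gigabit link works out to roughly 2.2 hours of pure copy time, and a 2TB host to about 4.4 hours; a single 10G link cuts that to around 13 and 27 minutes respectively.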

The second use case is migrating a single large VM under heavy load. During vMotion the guest's memory is copied, then a delta is copied containing the pages that were changed, or "dirtied", during the first copy. If the guest dirties memory pages faster than they can be copied, Stun During Page Send (SDPS) kicks in and slows the rate at which the guest OS can dirty pages so the copy can finish. This stun has caused us some problems with big database servers, and the only way to avoid it is to throw more bandwidth at vMotion.
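The convergence problem is easy to model: every pre-copy pass has to resend whatever was dirtied during the previous pass, so the copy only converges when the dirty rate is below the wire rate. A toy model with made-up numbers:

# Toy model of iterative pre-copy: each pass resends the memory dirtied
# during the previous pass. If the dirty rate meets or exceeds the wire
# rate the deltas never shrink, and SDPS must stun the guest to finish.
def precopy_passes(ram_gb, dirty_gbps, wire_gbps, max_passes=30):
    remaining = ram_gb * 8                  # GB -> gigabits to send
    for p in range(1, max_passes + 1):
        secs = remaining / wire_gbps        # time to send this pass
        remaining = dirty_gbps * secs       # dirtied while we were sending
        if remaining < 1:                   # small enough to switch over
            return p
    return None                             # never converges on its own

print(precopy_passes(256, dirty_gbps=0.5, wire_gbps=1))    # 12 passes
print(precopy_passes(256, dirty_gbps=2.0, wire_gbps=1))    # None: SDPS territory
print(precopy_passes(256, dirty_gbps=2.0, wire_gbps=20))   # 4 passes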

Dive into configuration after the break.

Wednesday, February 5, 2014

Removing the Nexus 1000v

The other day I needed to remove three Nexus 1000v distributed switches from one of our lab vCenter environments in order to prepare for NSX testing. Removing the Nexus 1000v should be a fairly straightforward process. In my case it seemed like the supervisor modules had become self-aware and knew I was trying to kill them.

The first step is to use the migration tool and migrate all virtual machine and VMkernel networking to a standard or VMware distributed switch.

Once all VM and VMkernel networking is migrated, the next task is to remove the hosts from the distributed switch object. Click Inventory, then Networking, and select the 1000v you wish to remove from the list of distributed switch objects. Click on the Hosts tab, right-click on the host, and select Remove from vSphere Distributed Switch. Repeat for each host.
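With a lot of hosts this step can also be scripted. Here is a pyVmomi sketch, with hypothetical switch and host names (the 1000v appears in vCenter as a regular distributed switch object):

# pyVmomi sketch: remove one host from a distributed switch.
# vCenter address, credentials, and object names are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()    # lab only; skips cert checks
si = SmartConnect(host="vcenter.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

dvs = find_by_name(vim.DistributedVirtualSwitch, "n1kv-01")
host = find_by_name(vim.HostSystem, "esx01.lab.local")

# Reconfigure the switch with a 'remove' operation for the host member.
member = vim.dvs.HostMember.ConfigSpec(operation="remove", host=host)
spec = vim.DistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion, host=[member])
dvs.ReconfigureDvs_Task(spec)

Disconnect(si)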


Sunday, February 2, 2014

Convert an Older Linksys Wireless Router to a Wireless Bridge

I've got a couple of devices at the house that don't have integrated wifi (Dish Hopper and Onkyo Receiver), but it would be nice to get them on the network. I can't fish Cat5 to the location easily and the wifi add-on adapters from Dish and Onkyo are expensive.

My solution to this problem was to take my old Linksys WRT54GL running DD-WRT firmware and convert it from an AP to a wireless bridge that can connect these two wired devices to my home wireless (provided by my Asus RT-NSSU). This solution is free since I have the Linksys sitting around; I recently replaced it with the Asus.


  1. The first thing you need is an old wireless router (the secondary router). I used a WRT54GL from Linksys, an 802.11g box. There are several "aftermarket" firmware versions for these routers, but I know this works with DD-WRT. DD-WRT is easy to install, and there is plenty of information to help you get it on your router if you are still running the stock software.
  2. I needed to make a couple of changes to the primary Asus router to let the secondary router connect to the wireless. On the 2.4GHz wireless settings I changed the authentication method to "WPA-Auto-Personal" and the encryption to "TKIP+AES". This allows less secure, older WPA clients to connect.
  3. Connect to a LAN port on the secondary router and access the admin page.
  4. Reset the secondary router to factory defaults. 
  5. Change the IP address. It will default to 192.168.1.1, which is probably in use by your primary router. 
  6. Navigate to Wireless, then the Basic Settings tab. Change the wireless mode from the default of AP to "Client Bridge" and click Apply Settings. Configure the network name field with the wireless SSID used on your primary router and click Apply Settings again.
  7. Navigate to the Wireless Security tab. Set the security mode to WPA Personal, and the encryption to TKIP+AES. Enter the shared key in use on the wireless network.
  8. The wireless light on the secondary router should turn solid green, and at this point you should be able to pull DHCP from the primary router and access the internet while plugged into the LAN ports on the secondary router. It should be ready to hook up your non-wifi devices. 
NOTE: I had trouble using WPA2 with AES; I had to set the primary router to WPA Auto and the secondary router to WPA to get this working. This does expose the network to having the key compromised. I don't consider this a risk in my environment, but it is something you should be aware of.