Tuesday, December 22, 2015

Slow Deploy from Template

I have run into this same issue at a couple of different customers, and it doesn't get much press considering how broad its impact is. Normally it manifests as slow deploy-from-template operations at customers with larger vSphere environments, and the root cause is a vCenter bug relating to how deploy-from-template operations are performed.


If we look at this KB article we can see that the way vCenter performs the clone operation changed with 5.1u2 and 5.5. In versions prior to these, vCenter told the DESTINATION ESX host to perform the clone operation; starting in 5.1u2 and 5.5, vCenter began telling the SOURCE ESX host to perform the clone operation.

This doesn't cause issues in small environments where all ESX hosts have access to all datastores, but what happens in larger environments? There, performance depends on where the template VM is registered. Many times the template VMs are registered in management clusters that can't see the production storage. If vCenter tells the source ESX host (where the template is registered) to execute the copy and that host can't see the destination datastore, it copies the VMDK to the destination host over the MGMT VMkernel interface. This leads to slow clone times, and to timeouts during multiple clone operations, since the ESX host where the template is registered is doing every single copy.

To resolve the issue it is recommended to upgrade to the 5.5u2d or 6.0 release of vCenter. This returns the behavior to the destination ESX host performing the copy operation. As long as all hosts have access to the MGMT datastore where the templates are registered, the destination ESX hosts can copy them directly over the storage network, many times with VAAI acceleration. During multiple deployments DRS assigns the new VMs to multiple ESX hosts, so the process doesn't overwhelm the single host where the templates are registered.

So, to resolve:

1. Create a MGMT datastore available to all ESX hosts at the site (a quick way to verify datastore visibility is sketched after this list).
2. Place all templates on the MGMT datastore.
3. Make sure vCenter is at 5.5u2d or greater.
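
Once the shared datastore is in place, a quick check from each ESX host confirms that it is actually mounted everywhere. This is just a sketch run from the ESXi Shell or SSH; the datastore name MGMT-Templates is a placeholder for whatever yours is called:

# run on each ESX host at the site
esxcli storage filesystem list | grep -i mgmt-templates

Any host where the datastore doesn't show up as mounted is a candidate for the slow, network-based copy path.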

Monday, December 7, 2015

Static Routes on vCenter Server Appliance

Sometimes in the lab I have to make some interesting choices about how I make things work that might not be best practice. I ran into one of these decisions today, as I needed to create static routes on the vCenter Server Appliance to get connectivity to a vRA install in the lab. The vRA install was behind an NSX Edge router, while vCenter was on the same VLAN as the uplink of the NSX Edge.

In this case I'm using the 5.5 vCenter Server Appliance, but this would probably work on other versions of the appliance.

I logged into the appliance with SSH and checked the routing table (route -n). In this case my vCenter appliance had the IP address 192.168.1.120, and the only route I had was to the default gateway of 192.168.1.1. I need to create a specific route for the 192.168.110.0/24 network and point it to the upstream interface of the NSX Edge at 192.168.1.124.

I could add the route with the route command, but it would need to be re-entered after a reboot. Since I need this to be sticky, I'll create the route with the ifroute-eth0 config file, located at /etc/sysconfig/network/ifroute-eth0. I'll echo the config I need into the file to route all traffic headed for 192.168.110.0/24 to 192.168.1.124, then restart networking.
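
For reference, the non-persistent version with the route command looks something like this (a sketch using the same network and gateway as above; it takes effect immediately but is lost on reboot):

route add -net 192.168.110.0 netmask 255.255.255.0 gw 192.168.1.124 dev eth0

The persistent version goes into the config file instead: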

echo 192.168.110.0 192.168.1.124 255.255.255.0 eth0 > /etc/sysconfig/network/ifroute-eth0

Restart networking:

service network restart

And make sure the route took by checking the routing table with route -n.

Simple, and it helped me get some things running in the lab that would have otherwise slowed down a project.

Sunday, November 22, 2015

Licensing VMware Horizon View with VSPEX Blue (EVO:RAIL)

I've been getting questions lately on the most effective way to license Horizon View with VSPEX Blue, and the answer, like many things in this world, is "it depends". Luckily the VMware licensing guide for View does a good job of laying it out.


VSPEX Blue is a great EUC platform, and VMware is easing the licensing confusion.

The Horizon View Licensing FAQ has a good illustration of licensing combinations of Horizon with EVO:RAIL and the loyalty program. I found this really useful while working on a "start with 100 desktops" design and wanted to pass it on. They even have EVO:RAIL add-on-specific SKUs for Horizon now, helping with the licensing overlap (it still exists with VSAN on Advanced and Enterprise, but it's getting better).

Friday, May 8, 2015

Recoverpoint for VMs - Pre-Flight Checklist

As I've worked with customers deploying Recoverpoint for VMs pilots in their environments, I've put together a small "pre-flight" checklist to help people jumpstart their deployments. Several customers got stalled in the process, not because the product is difficult to install, but because they weren't aware of all of the IP address requirements and needed to go back to the network team.

If you aren't aware, Recoverpoint for Virtual Machines is a storage-agnostic version of EMC's Recoverpoint. Recoverpoint provides the ability to rewind a system to a specific point in time, as well as remote replication of the changes. Both Recoverpoint (for storage arrays) and Recoverpoint for VMs leverage a journal volume for all writes to provide this point-in-time recovery ability.

The key difference between Recoverpoint for VMs and the array-based products is the location of the write splitter. In the array-based product, writes are split to the journal volume in the array, meaning that individual LUNs become the level of granularity. With Recoverpoint for VMs, writes are split to the journal volume in the ESXi host, meaning that granularity is moved all the way to the VM level. This allows you to replicate some VMs on a datastore without having to replicate all of them, and, by the way, it is completely storage agnostic, working on any array or even local disk.

Did I mention it's even free? Recoverpoint for VMs can be freely downloaded from EMC, and all features can be used without time limits. Test to your heart's content, then buy support if you want to take it into production.

You can download Recoverpoint for VMs here.

Dive into deployment architectures and the pre-flight checklist after the break.


Monday, February 16, 2015

Running a nested ESXi lab on vCloud Air OnDemand

People looked at me like I was crazy when I said I wanted to run ESXi nested in vCloud Air OnDemand. Everyone's first question was "Why?" I have some specific use cases where I need to test integrations between layers of a VMware-based Cloud Management Platform and push the scalability beyond what I could run in my lab. I needed this test platform to be long-lived (which ruled out using EMC vLab), but I didn't need it to be powered up all the time. Combine that variability of demand with the fact that I only have 96GB of RAM available in my home lab and you have the perfect public cloud use case. The icing on the cake for me is that I can create geographically distributed vCenters to simulate a large F500 enterprise environment.

The catch is that the virtual switches in vCloud Air aren't configured to allow promiscuous mode, so any guest VMs that you run on your nested hosts won't have network access. Not a problem for me, since this will be part of a geographically distributed scaling exercise and I don't need them to do anything.
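
For comparison, in a home lab where you control the outer ESXi host you would normally work around this by allowing promiscuous mode (and usually forged transmits) on the vSwitch or port group carrying the nested hosts. A rough sketch from the outer host's shell, assuming a standard vSwitch named vSwitch0 (double-check the exact option names with --help on your build):

# show the current security policy on the outer vSwitch
esxcli network vswitch standard policy security get -v vSwitch0
# allow promiscuous mode and forged transmits so nested guest traffic can pass
esxcli network vswitch standard policy security set -v vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true

None of that is an option on vCloud Air's switches, which is why the nested guests stay isolated.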

So, why would anyone else want to run ESXi in vCloud Air OnDemand? The number one reason is a VCP/VCAP lab-as-a-service on vCloud Air.

If you have lab needs beyond the VMware Hands-on Labs but don't have the equipment at home, vCloud Air OnDemand could give you a lab with persistence while you prep for the VCAP-DCA or another lab-based exam. When your instances are powered off you only carry the storage costs. You could get by without a public IP by simply running a small lab and accessing your Windows-based vCenter through the vCloud Air console; removing the public IP reservation removes the majority of the powered-off costs.

As a test run I built a small vCenter and two ESXi hosts on vCloud Air OnDemand before building out my larger environment. Deep dive after the break.