Monday, February 16, 2015

Running a nested ESXi lab on vCloud Air OnDemand

People looked at me like I was crazy when I said I wanted to run ESXi nested in vCloud Air OnDemand. Everyone's first question was: why? I have some specific use cases where I need to test integrations between layers of a VMware-based Cloud Management Platform and push the scalability beyond what I can run in my lab. I needed this test platform to be long lived (which ruled out using EMC vLab), but I didn't need it to be powered up all the time. Combine that variability of demand with the fact that I only have 96GB of RAM available in my home lab and you have the perfect public cloud use case. The icing on the cake for me is that I can create geographically distributed vCenters to simulate a large F500 enterprise environment.

The catch is that the virtual switches in vCloud Air aren't configured for promiscuous mode, so any guest VMs that you run on your nested hosts won't have network access. Not a problem for me, since this will be part of a geographically distributed scaling exercise and I don't need them to do anything.
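For reference, here's what the missing setting looks like in a lab where you do control the outer host. This is only a minimal pyVmomi sketch, assuming a standalone ESXi host with placeholder hostname and credentials, that flips promiscuous mode (and forged transmits) on the standard vSwitches so nested guest traffic can pass; as a vCloud Air tenant you simply don't get this knob.

# A minimal pyVmomi sketch, assuming an outer ESXi host you control.
# "outer-esxi.lab.local" and the credentials are placeholders. It enables
# promiscuous mode and forged transmits on every standard vSwitch so that
# nested guest traffic can pass. vCloud Air tenants can't make this change.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only: skip cert checks
si = SmartConnect(host="outer-esxi.lab.local",
                  user="root", pwd="password", sslContext=ctx)

host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
net_sys = host.configManager.networkSystem

for vswitch in net_sys.networkInfo.vswitch:
    spec = vswitch.spec
    if spec.policy is None:
        spec.policy = vim.host.NetworkPolicy()
    spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy(
        allowPromiscuous=True,                     # nested guests see frames
        forgedTransmits=True,
        macChanges=True)
    net_sys.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)

Disconnect(si)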

So, why would anyone else want to run ESXi in vCloud Air OnDemand? The number one reason is a VCP / VCAP lab as a service on vCloud Air.

If you have lab needs beyond the VMware Hands-on Labs, but don't have the equipment at home, vCloud Air OnDemand could give you a persistent lab while you prep for the VCAP-DCA or another lab-based exam. When your instances are powered off you only carry the storage costs. You could get by without a public IP if you're running a small lab and accessing your Windows-based vCenter through the vCloud Air console, and removing the public IP reservation removes the majority of the powered-off costs.

As a test run I built a small vCenter and two ESXi hosts on vCloud Air OnDemand before building out my larger environment. Deep dive after the break.




Register and activate a vCloud Air OnDemand account

The first step is to register and activate a vCloud Air OnDemand account using your MyVMware account. Once you're registered and your payment method is confirmed, you should receive a couple of emails, including one to set your first password. Then simply log in at https://vca.vmware.com and you are ready to start consuming on-demand resources.


Click on "Virtual Private Cloud OnDemand", then select the region for your first VPC.


Upload ESXi and vCenter Media

Before we begin the install we need to upload the required media. For this test I went with two nested ESXi hosts, a Windows 2012 vCenter, and a FreeNAS VM to provide NFS. Before I can start provisioning these machines I need to upload the media using the vCloud Director interface: the ESXi ISO, the FreeNAS ISO, and the vCenter install ISO for Windows.

To get started, click on the "Create your first Virtual Machine" link.


Then click on the "Create my Virtual Machine from Scratch" link to launch the vCloud Director interface.


From the Catalogs heading, navigate to "My Organization's Catalogs" and then "Media and Other". From this interface you can upload the custom media that you will then use to create your virtual machines.


Click on the green plus to open the upload interface. Once you have uploaded the ESXi ISO, repeat the process for the rest of the required media.


Create a "Lab" vApp


Once the media is uploaded, click on "My Cloud" and then on the "Build New vApp" link. This kicks off the wizard to build a new vApp. You need to provide the name, location, and lease properties. There are two leases, the runtime lease and the storage lease. I set the runtime lease to 8 hours so that if I forget about the lab it will be powered off, helping to ensure I don't get a surprise on my bill.



The next step is to add virtual machines to your vApp. You can add them from a public catalog, add them from your custom catalog, or manually create them. I simply clicked on New Virtual Machine to manually create them.


The New Virtual Machine dialog lets you manually create a VM with no operating system, allowing you to mount the ESXi ISO and install later. For ESXi I created a VM with 2 vCPUs and 4GB of RAM, making sure to check the "Expose hardware assisted virtualization" box.
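For what it's worth, that checkbox corresponds to the nestedHVEnabled flag in the vSphere API. You can't script it against the vCloud Air backend, but if you ever build nested hosts on a vCenter you manage yourself, a short pyVmomi call does the same thing; the sketch below assumes a placeholder vCenter address, credentials, and VM name.

# A hedged pyVmomi sketch, assuming a vCenter you manage directly
# ("vcenter.lab.local") and a powered-off nested host VM named
# "nested-esxi-01"; all names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Locate the VM by name using a container view.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "nested-esxi-01")
view.Destroy()

# nestedHVEnabled is the API equivalent of the
# "Expose hardware assisted virtualization" checkbox.
spec = vim.vm.ConfigSpec(nestedHVEnabled=True)
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)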


Repeat the process to create another ESXi host and a storage appliance. For the storage appliance I used 1 vCPU and 4GB of RAM, as well as two 16GB disks, one for the OS and one for the datastore volume. Set the IP addressing to "Static - Manual" and apply the static IPs manually as you install the operating systems. Once you have the three machines added, finish the "New vApp" wizard.


The next step is to provision a VM for vCenter. While I could use the appliance, I opted for a Windows 2012 VM delivered by vCloud Air OnDemand. This lets me run the vSphere client locally on the server and use it as a jump host to easily access the other boxes in the virtual data center.

To deploy a Windows 2012 R2 VM, return to the vCloud Air OnDemand interface and select a Windows 2012 R2 machine from the Windows catalog. This will deploy a server into a new vApp. You can power the server down and select Move from the action menu to move it from the new vApp into the same vApp as the other three servers.


At this point you have a vApp with 4 servers, three of which don't have an operating system. From here on out this is just like building any other nested ESXi lab.

Complete the Software Install

Using the vCloud Director interface it's easy to mount the ESXi ISOs to the servers that don't have an operating system. The install follows the same process as building any other nested lab.

  1. Install ESXi - Mount the ISO, hit F11 and Enter until the install is complete.
  2. Install FreeNAS or other virtual storage appliance.
  3. Install vCenter - Mount the vCenter ISO to the Windows 2012R2 box and run the simple install.
Note - In order for the simple install to complete, the server needs internet access to download the .NET 3.5 framework. Follow the guide here to assign a public IP to the edge gateway and provide internet access to the vCenter server. Once the install is finished you can remove the public IP from the edge gateway to reduce the cost.

Note - In order to make the orchestrated shutdown of the ESXi hosts work you should install VMware Tools into ESXi. Instructions here.

The last steps are to configure a volume and an NFS share on the virtual storage appliance, and to allow the ESXi hosts' IP range access to the share.
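If you'd rather script the datastore mounts than click through the vSphere client, a small pyVmomi loop run against the nested vCenter will attach the export to both hosts. This is just a sketch with assumed values; the vCenter address, filer IP, export path, and datastore name are placeholders for whatever you configured.

# A sketch against the nested vCenter, assuming the NFS export is already
# configured. The vCenter address, filer IP, export path, and datastore
# name are placeholders for whatever you set up.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

nfs_spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.109.20",        # storage appliance IP (placeholder)
    remotePath="/mnt/datastore1",       # export path (placeholder)
    localPath="nfs-datastore1",         # datastore name as seen in vCenter
    accessMode="readWrite")

# Mount the export on every host the nested vCenter manages.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for esx in view.view:
    esx.configManager.datastoreSystem.CreateNasDatastore(spec=nfs_spec)
view.Destroy()

Disconnect(si)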

Once things are running it's a good idea to set the startup and shutdown priorities and delays so the environment comes up in the right order. If you have VMware Tools installed on everything you can also change the stop action from "power down" to "shutdown".


I ended up with a working two host lab with NFS storage that can easily run Damn Small Linux instances.


What does it cost?

Once I had everything running I removed the public IP from the edge gateway to reduce costs. From the vCloud Air interface navigate to Gateways, remove the IP from the NAT and firewall rules, and then click on the X next to the public IP to remove it. This stops the ongoing charge for the public IP, but you can add it back from the same interface.



I'll update standby and running costs in a bit, once I have it sorted out. Right now my running costs for the two nested hosts and the Windows vCenter look to be about $0.46 an hour. The majority of the powered-off cost is for the public IP, which I think is an impact of the bug I mentioned. I'm going to open a support SR, since based on the documentation that charge should drop off once the SNAT rules are removed; I'll update the powered-off cost when that is resolved.

I also deleted the media I used for the install to reduce the storage costs.

Update: Here are my configuration and costs.

EDIT: I pulled FreeNAS out and now just run NFS on the Windows vCenter server.

After a few days FreeNAS blew up. Instead of spinning up another virtual filer I took the Windows box from 4GB to 6GB of RAM, added a second NIC, and added a second HDD. I installed the "Server for NFS" feature of Windows and mapped the share with anonymous access and root permissions. Here are the edited startup / shutdown priorities when using the Windows VM for NFS storage.

NFS on Windows for shared storage is working fine in this environment.



Update on costs:

Here is my resource configuration once I started running storage on the Windows box:



My powered-off costs are $0.009 per hour ($0.216 per day) for storage. So if I keep this simple lab powered off for an entire 30-day month I'm looking at $6.48 as the "carrying cost". Powered up, my costs are $0.40 per hour. So if I spent 20 hours working in the lab in a month, my total cost would be $14.48; that is pretty cost effective when you think about the cost to acquire and power gear at your house.
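The math is simple enough to drop into a few lines of Python if you want to play with the numbers; the two rates are the ones quoted above, and the hours of use are whatever you expect for the month.

# A worked example of the estimate above; the two rates are the ones quoted
# in this post, and hours_used is an assumption you can change.
POWERED_OFF_RATE = 0.009    # $/hour for storage while the lab is powered off
RUNNING_RATE = 0.40         # $/hour while the lab is running
HOURS_IN_MONTH = 30 * 24

def monthly_cost(hours_used):
    """Storage carried all month plus the hours the lab is powered on."""
    return HOURS_IN_MONTH * POWERED_OFF_RATE + hours_used * RUNNING_RATE

print(round(monthly_cost(20), 2))   # 14.48 for 20 hours of lab time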














6 comments:

Hi Heath Reynolds,
Thanks for this article, which will be useful for many people who struggle to set up home gear at high cost.
Appreciate your time sharing this, and keep it up.
Best Regards,
R.S.Sundar
www.linkedin.com/in/sundarrs
Twitter - @sundarrs1

No problem. Are you running a lab on vCloud Air OnDemand? Mine is still up there, but I've finished the vCAC testing I needed to do, so I will delete it soon.

Really Helpful!! Thanks a lot!!

Thanks for the article and for sharing the information. Many people here don't share their skills and knowledge; they think knowledge is their own property. Very good writing.
Kicks Lab

I don't know if they have changed something, but I've followed all the steps and every time I try to start the ESXi host I get:

Error loading /k.b00
Fatal error: 10 (Out of resources)

I found a fix that works for my vSphere 6.0 AutoLab build.
These labs are great. I'm really appreciative of the work that has gone into these and I'm hoping to get quite a bit of use out of the labs. However, while trying to set up AutoLab 2.6 to run vSphere 6.0, I ran into the same error reported by many others here:
VMware vSphere Update Manager ; Error 25085.Setup failed to register VMware vSphere Update Manager extension to VMware vCenter Server: vc.lab.local
I tried numerous fixes reported here and on other sites with no luck. Then I realized that this lab worked when built, and was probably built using the original vSphere 6.0.0 files. So I cleared and rebuilt my lab using the original files (as closely as I could find them) and it worked. No errors or issues.

I strongly suggest that whether you are building vSphere 5.0, 5.5 or 6.0 labs, you stick to the most original versions of the ISO and EXE files you can find.

Here are the files I used to successfully build my vSphere 6.0 lab:

- AutoLab 2_6-Workstation.zip

ESXi ISO image (Includes VMware Tools 10.0.0) - 351.05MB /
- VMware-VMvisor-Installer-6.0.0-2494585.x86_64.iso

VMware vCenter Server 6.0 Update1b and Modules for windows - 2.683GB /
- VMware-VIMSetup-all-6.0.0-2656757.iso

VMware vSphere PowerCLI 6.3.0 R1 Patch 1 - Installer - 79.66MB /
- VMware-PowerCLI-6.0.0-3205540.exe

VMware vSphere CLI 6.0.0 - 93.34MB
- VMware-vSphere-CLI-6.0.0-2503617.exe

Microsoft Windows Server 2012 R2 - Evaluation , 180 Days - 4.230GB /
- 9600.17050.WINBLUE_REFRESH.140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9.ISO

VMware Workstation VMware Tools Windows.iso: located at C:\Program Files (x86)\VMware\VMware Workstation\ /
- windows.iso (75,072KB, 7/2/2014)

Microsoft Windows Server 2003 R2 - Evaluation, 180 Days - 701.11MB /
- SW_DVD5_Windows_Svr_Ent_2003_R2_64Bit_IA64_English_IA_64_DVD_MLF_X13-50179.ISO

I think the issues we've all been experiencing are a result of changes VMware makes when publishing updates to the original ISO and EXE files. These seem to break the Autolab script files.

Please try this suggestion when first populating your Autolab or when trying to figure out why you're getting strange errors when building the lab for the first time.

Hope this suggestion works for you as it did for me.

Shaun.
