Saturday, July 26, 2014

Homelab Build: Dell DCS6005 / C6105 FreeNAS and ESXi Lab

As I prepare to change jobs, one of the things I will miss about my current employer is the LAB (capitalized because a lab this awesome deserves it). Nexus 7k, 6k, and 5k switches, two UCS fabric interconnects, UCS blades, and UCS C460s, all backed by a VNX plus whatever storage they are beta testing for EMC.


I'll probably never be able to replicate the level of lab I had access to outside of VCE or Cisco, but I need some type of home lab to continue my work with vCAC, Log Insight, and NSX, and to keep preparing for my VCDX defense. Two 16GB Mac minis with a Synology would make a nice, quiet, cool, power-efficient homelab with a high wife acceptance factor. Unfortunately I need more RAM than that, and the cost can get quite high.

I looked into Intel NUCs and white boxes, but the best value for me turned out to be older Dell "cloud systems" boxes that are wholesaled on eBay. These boxes are a 2U chassis designed to house four individual servers, allowing shared web-hosting companies to drive server density. They don't have the intelligence, IO flexibility, or blade removal capability of a real blade chassis; they are simply designed to provide cheap density.


There are two main flavors of these boxes: the Intel-powered C6100, and the DCS6005 / C6105, which is powered by the six-core AMD Opteron 2419 EE. The Intel-powered option has gone up in price, but there is currently a flood of the AMD-powered boxes selling on eBay at good prices. For $479 I got a chassis with three dual-socket servers, each with two six-core Opterons and 32GB of RAM. There are twelve 3.5in drive bays on the front, and each server is wired to four drive bays.







Dive into the detailed build-out after the break.




The box shows up one day and I excitedly open it up and plug the server in. Holy crap is this thing LOUD. At startup it screams like a jet at takeoff, and once the fans idle down a bit it just sounds like a jet on the tarmac. It has four 80x80x38mm fans behind the drive bays, and they are screamers rated at 140 CFM of high-static-pressure airflow at 70 dB. The idle speed on these things seems to be 5k RPM and the max is 9k.

I ordered a set of Sanyo Denki San Ace 80 9g0812p1f031 fans off of eBay for $6 each, shipped. There are other fans out there from companies like Evercool, but these fans, harvested from Dell desktops, were the cheapest option. The San Ace 80s move around 60 CFM and are rated at 45 dB. They idle at 2,100 RPM and max out at 5k RPM. I'm comfortable replacing the stock fans with much lower CFM fans rated at a lower static pressure since I will only have six drives in the front and no cards installed in the back blocking the airflow. My servers will also be lightly loaded.


In the above picture you can see the fans in the lower portion of the image. The fans are an easy swap, but not plug and play: I had to splice the long fan header cables onto the new fans and install the blue rubber mounts. The job took about an hour and a half, mostly because of the splicing. The noise was reduced to about a third of the stock configuration, and I can now easily sit in the room with the server and have a phone conversation, which wasn't possible with the original fans.

While I had the case open I decided to shuffle the RAM around so I had 16GB, 32GB, and 48GB servers. I also installed a small USB drive in the onboard USB header on each since I'm going to install ESXi and FreeNAS directly to USB. The small SanDisk Cruzer Fit is low enough to fit in the vertical USB port without hitting the blade or lid above. You can see the USB port at the top of the image above.

Next I decided to shuffle my drive assignments. Each blade has six SATA ports and is wired to four drive bays. Since I am using one blade for storage and two as ESXi nodes that I plan to run diskless from USB, I pulled the bundle of SATA cables from blade two up to blade one so I could connect all six ports to drive bays. You can easily shuffle which drive bays in the front you want to use by moving the cables, and both ends of the SATA cables are labeled. When I was finished I had the servers configured as:
Blade 1 | 16GB RAM | USB Drive Boot | 4x 2TB HDD | 2x 256GB SSD
Blade 2 | 48GB RAM | USB Drive Boot
Blade 3 | 32GB RAM | USB Drive Boot

Once I fired the much quieter server back up, I used the Supermicro IPMI viewer to discover the IPMI interface for each server. The IPMI piggybacks a second MAC address on the GigE ports on the back of the box. The default credentials are root / root. Some people have had trouble with this IPMI; it's not the best, but I'm able to remotely manage power, access the console, and mount an ISO.
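
If you'd rather work from a shell than the Java viewer, ipmitool will talk to the same BMC. A quick sketch, assuming the BMC picked up 192.168.1.51 and still has the default root / root credentials (substitute your own values):

# Show the BMC's LAN configuration (IP and MAC)
ipmitool -I lanplus -H 192.168.1.51 -U root -P root lan print 1
# Check chassis power remotely
ipmitool -I lanplus -H 192.168.1.51 -U root -P root chassis power status
# Open a serial-over-LAN console session
ipmitool -I lanplus -H 192.168.1.51 -U root -P root sol activate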

I installed FreeNAS to the 4GB USB drive on blade one by mounting the ISO to the virtual CD through the remote console. Once it was up I configured a ZFS RAID-Z2 pool from the 4x 2TB drives, with one SSD dedicated as L2ARC and one as a ZIL device. I know I'm giving up write performance by going with RAID-Z2, but data integrity is important since, in addition to the lab, I'll have 1TB of baby pictures and home movies of the kids growing up. I may rebuild as a RAID 10 if I don't like the RAID-Z2 performance.
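
For reference, the pool I built through the FreeNAS GUI is roughly equivalent to the following from the shell. This is just a sketch: the pool name tank and the ada0-ada5 device names are placeholders for whatever FreeBSD assigns your disks.

# Four 2TB drives in RAID-Z2, one SSD as L2ARC cache, one SSD as the log (ZIL) device
zpool create tank raidz2 ada0 ada1 ada2 ada3 cache ada4 log ada5
# Verify the layout
zpool status tank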

Once I had my ZFS volume I created and exported an NFS dataset as well as an iSCSI zvol, so I have both available.
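
The FreeNAS GUI handles the actual sharing, but underneath it comes down to datasets like these. Another sketch; the dataset names and the 500G zvol size are placeholders rather than the exact values I used.

# Filesystem dataset to export over NFS
zfs create tank/nfs_datastore
# Fixed-size zvol to back an iSCSI extent
zfs create -V 500G tank/iscsi_datastore
# Confirm both exist
zfs list -t all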

Installing ESXi is straightforward, but I did have a problem where the installer hung at "Relocating modules and starting up the kernel". The boot option "ignoreHeadless=TRUE" needs to be set at the boot options prompt by pressing Shift+O and adding it to the boot string. This will need to be set for each boot until you get the box up and can access the shell, where you can use the command "esxcfg-advcfg --set-kernel "TRUE" ignoreHeadless". Detailed instructions can be found here.
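
In practice it looks like this. A sketch of what I entered; the default boot string the installer shows may differ slightly on your build.

# At the boot menu, press Shift+O and append the option to the line already shown:
runweasel ignoreHeadless=TRUE
# Once ESXi is installed and you can reach the shell, make it permanent:
esxcfg-advcfg --set-kernel "TRUE" ignoreHeadless
# Check that the setting took:
esxcfg-advcfg --get-kernel ignoreHeadless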

Once I had ESXi running it was straightforward to connect to my storage and spin up vCenter. My two hosts will be the services cluster for my virtual data center, running vCenter, NSX Manager, the NSX controllers, vCAC, vCOps, and Log Insight, as well as the nested ESXi hosts that will run the NSX distributed firewall and logical router and serve as the target for vCAC to provision virtual machines. Lots of fun lab work coming up.
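
Mounting the FreeNAS NFS export on each host is a single esxcli command. Sketch only: the FreeNAS IP, export path, and datastore name below are placeholders for your own values.

# Mount the FreeNAS NFS export as a datastore
esxcli storage nfs add --host=192.168.1.50 --share=/mnt/tank/nfs_datastore --volume-name=freenas_nfs
# Confirm it mounted
esxcli storage nfs list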

I was able to get Distributed Power Management working with the DCS6005 / C6105 and vCenter 5.5. I provided the IPMI username, password, IP, and MAC address in the host options area of the cluster settings and verified vCenter was able to bring a host out of standby.
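
Before handing the credentials to vCenter, it's worth confirming the BMC responds to the kind of power commands DPM relies on. Same placeholder IP and default credentials as the ipmitool sketch above; this just powers a node off and back on out-of-band.

# Graceful shutdown, then power back on, mimicking standby and wake
ipmitool -I lanplus -H 192.168.1.51 -U root -P root chassis power soft
ipmitool -I lanplus -H 192.168.1.51 -U root -P root chassis power on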

Some resources at servethehome.com that can be useful for these servers:


Comments:

Heath, nice write-up. I just bought two of the DCS6005 models (quite by accident, as the eBay listing described them as 6105s). I took a gander inside and threw one of the drives in the same row as another, but never saw an opportunity to define an array. Does that mean these probably do not have a RAID controller? I think I am going to follow suit with FreeNAS on a USB stick, at least for one blade.

Also, do you happen to know if the onboard HDD controller can support 3TB drives?


Does this work okay without a dedicated RAID controller? I'm hoping to get one, but I hear that if each 'node' doesn't have a RAID controller it won't work with ESXi (version 6)?

Cheers!
