Building the lab 5: Storage I

This part of my homelab rebuild touches on something interesting… storage options. Knowing that my lab is going to involve a couple levels of nesting, I wanted to look at the different options that are out there.

For years, I have owned and used a Synology DS 1512+ storage array. This thing has been running like gangbusters. It serves multiple purposes, from backups and file shares to NFS storage for the virtual environment.

Over the years, I have upgraded the drives from 1TB to 2TB to 3TB. Because of this, I have a few drives lying around not in use. I thought that maybe I could spin up a FreeNAS or OpenFiler box for iSCSI within the environment. By creating different storage arrays, I could introduce Storage Policies within the environment for potential feature consumption down the road.

As I explored the various options out there, I discovered many simulators from various vendors: 3PAR, EMC, NetApp. In addition to these, you have the free options mentioned above: OpenFiler, FreeNAS, etc. But I also stumbled across this jewel… XPEnology.

I’m sure you are wondering: what is XPEnology?
XPEnology is a bootloader for Synology’s operating system, DSM (DiskStation Manager), which is what runs on their NAS devices. DSM runs on a custom Linux distribution developed by Synology, optimized for a NAS server with all of the features you typically need in a NAS device. XPEnology makes it possible to run Synology’s DSM on any x86 device, like a PC or a self-built NAS.

You read that right: it is possible to run XPEnology on bare metal. XPEnology runs as a faux Synology NAS.

Now, before you continue, you should know this. XPEnology is not supported or owned by Synology, micronauts, or anyone in their right mind. The links provided are ONLY for experimentation purposes and should not be used in any production or development environment. It is very possible you could lose all data and put yourself in jeopardy. If you need reliable, dependable storage, then buy a Synology NAS.

PROCEED AT YOUR OWN RISK!

Alex Lopez at iThinkVirtual has a good write-up on how to create a Synology storage VM:
https://ithinkvirtual.com/2016/04/30/create-a-synology-vm-with-xpenology/

Alex’s write-up was based on Erik Bussink’s build, found here:
https://www.bussink.ch/?p=1672

And here is the original XPEnology website’s walkthrough on how to create a Synology storage VM:
http://xpenology.me/installing-dsm-5-1-vmware-esxi5-5u1/

The original XPEnology website has become a ghost town. I’m not sure if it is being maintained, or if the original creator(s) just don’t have the time to update it any longer. The last updates came out around DSM 5.2-5644.5 (so a while ago). However, the XPEnology forums provide all kinds of glorious information from the wealth of knowledge within the community.

Additionally, you can get more information from this newer XPEnology info site. They also have a pretty good walk-through for a storage VM. The video tutorial even covers how to set up ESXi 5.1 (http://xpenology.org/installation/).

I chose to build on bare metal.
While having a storage VM is great, I think having XPEnology on bare metal is even better. As you read and research how to do this, you are going to discover that it involves grabbing files stashed all over the internet, ranging from the bootloader to PAT files. Make sure that you read EVERYTHING. I reused some hardware and some of my old Synology drives and built an XPEnology server on bare metal.

I bookmarked this site (https://github.com/XPEnology/JunLoader/wiki/Installing-DSM-6.0-Bare-Metal) as it provides a pretty good walkthrough on how to create a bootable USB drive for the XPEnology OS. I also found this one (http://blog.pztop.com/2017/07/24/Make-Xpenology-boot-loader-1.02b-for-DSM-6.1-on-Ubuntu/). For those of you, like myself, who are on a Mac… you may need this nugget (https://xpenology.com/forum/topic/1753-create-a-bootable-usb-on-os-x/).
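If you would rather script the image-writing step than type dd by hand, here is a minimal Python sketch of the same idea: copy the loader image byte-for-byte onto the raw USB device. The file and device names are placeholders (Jun's loader image is commonly named synoboot.img, but yours may differ), and you will need root privileges to open the raw device.

```python
import sys

# Hypothetical paths -- adjust for your system. DEVICE must be the raw
# USB device (e.g. /dev/sdb on Linux, /dev/rdisk2 on macOS), NOT a
# partition. Writing to the wrong device destroys data, so triple-check.
IMAGE = "synoboot.img"
DEVICE = "/dev/sdX"  # placeholder; run this script as root

def write_image(image_path: str, device_path: str, chunk: int = 1 << 20) -> None:
    """Copy the loader image byte-for-byte onto the USB device."""
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        total = 0
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
            total += len(buf)
    print(f"Wrote {total} bytes to {device_path}")

if __name__ == "__main__":
    if input(f"Overwrite {DEVICE}? Type 'yes': ") != "yes":
        sys.exit("Aborted.")
    write_image(IMAGE, DEVICE)
```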

Again, I would like to say: if you need reliable, dependable storage, go purchase a real storage array.

Building the lab 4: Stand up vCloud Director

First, I would like to declare: “vCloud Director is NOT dead!” I can say emphatically that this product did not die, has not died, and I don’t believe it is going to die! It is still actively being developed by VMware.

With this clarified, let’s move on to getting vCD stood up. Again, I followed along with the wonderful guide from SysAdminTutorials.

This guide has a very good walk-through for standing up vCloud Director 8.0 for a proof of concept (it also works well for 9.0). There are multiple steps that break out each milestone of the installation/deployment. You can follow along with each part, as I did. Along the way, I will point out the various things that I did or changed for my environment.

Part One is self-explanatory. The walkthrough shows you how to set up a SQL database. Yes, MS SQL is still supported with vCD 9.0. While you may want to migrate or move to a PostgreSQL database, this guide sets you up for MS SQL. (I will cover how to set up PostgreSQL and migrate the database sometime in the future. You may need or want this down the road when you get ready to upgrade.)
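If you like to script your database prep, here is a hedged Python/pyodbc sketch of the kind of statements Part One walks through. The server name, login, and password are all placeholders; the two ALTER DATABASE snapshot-isolation settings are the ones VMware's vCD install documentation calls out for MS SQL, but check the guide for the exact statements it uses.

```python
import pyodbc

# Hypothetical connection details -- point these at your SQL server.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql.lab.local;UID=sa;PWD=YourSAPassword"
)

# Database and login names are placeholders for a lab setup.
STATEMENTS = [
    "CREATE DATABASE vcloud",
    "ALTER DATABASE vcloud SET ALLOW_SNAPSHOT_ISOLATION ON",
    "ALTER DATABASE vcloud SET READ_COMMITTED_SNAPSHOT ON WITH NO_WAIT",
    "CREATE LOGIN vcloud WITH PASSWORD = 'VCloudPass123!'",
    "USE vcloud; CREATE USER vcloud FOR LOGIN vcloud",
    "USE vcloud; EXEC sp_addrolemember 'db_owner', 'vcloud'",
]

conn = pyodbc.connect(CONN_STR, autocommit=True)  # CREATE DATABASE needs autocommit
cur = conn.cursor()
for stmt in STATEMENTS:
    print(f"Executing: {stmt}")
    cur.execute(stmt)
conn.close()
```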

Part Two, setting up a RabbitMQ server, I skipped. Why, you ask? Well, the answer is selfish. My environment is small and is designed for one thing: quick deployment and stand-up of an SDDC environment for play and discovery. Unlike many vCD environments found in the wild, I will not be interfacing or integrating with any outside services, nor will I be standing up multiple cells. So I have no need of a RabbitMQ server at this time. You and your environment may very well need one.

Part Three of this guide is very good. I like how they dig into certificate creation and the details of what to do with the certificates. This portion of the walkthrough also covers how to create the cert with a Microsoft CA server. These are details I would like to see VMware include in their documentation; certificates always seem to be problematic, they plague many installations, and a good walkthrough goes a long way.
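For contrast with the CA-signed procedure the guide digs into, here is a minimal sketch of the self-signed shortcut, driven from Python. It assumes keytool (from the vCD cell's Java runtime) is on your PATH; the keystore password and subject are made-up values, while the two aliases (http and consoleproxy) and the JCEKS store type are what the vCD configuration script expects.

```python
import subprocess

# Placeholder values for a lab self-signed setup -- a real deployment
# should use the CA-signed procedure from the guide instead.
KEYSTORE = "certificates.ks"
STOREPASS = "changeme"
DNAME = "CN=vcd.lab.local, O=HomeLab"  # hypothetical subject

# vCloud Director expects two key pairs in a single JCEKS keystore:
# one for the HTTP service and one for the console proxy.
for alias in ("http", "consoleproxy"):
    subprocess.run(
        [
            "keytool", "-genkeypair",
            "-keystore", KEYSTORE,
            "-storetype", "JCEKS",
            "-storepass", STOREPASS,
            "-keypass", STOREPASS,
            "-keyalg", "RSA",
            "-keysize", "2048",
            "-validity", "365",
            "-alias", alias,
            "-dname", DNAME,
        ],
        check=True,  # stop immediately if keytool reports an error
    )
print(f"Created self-signed 'http' and 'consoleproxy' key pairs in {KEYSTORE}")
```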

Once you complete these steps, you are ready to configure vCloud Director for consumption. As with all VMware products, you should have a good idea of what you want to do with it. Setting this up to play with is one thing. But if you are trying to utilize it beyond “how do I install it?”, then you need to have an idea of what you are trying to accomplish. If you haven’t taken the time to do this, you should.

For me, as I said previously, I want to stand up vCloud Director as a mechanism where I can quickly deploy full SDDC environments to manipulate and play with. I want to utilize these environments to learn, discover, and grow my skillset. I do not want to destroy and rebuild my lab environment every time I have a different scenario to test. My goal is to ‘mimic’ the Hands On Lab environment. Ambitious? Yes.

I’m going to stop here, as the next part of the SysAdminTutorials walkthrough was already covered when I stood up NSX in “Building the lab 3: NSX”. Before I continue with the SysAdminTutorials guide and kick off Part 5, I want to set up more storage.

Change…

The only thing constant is change. Change is the backbone of any IT organization. New widgets, software, and hardware seem to come out daily. Our job as IT professionals is to try to stay aware of these new products. However, while we try to stay ‘cutting-edge’ and ahead of all this change, we always seem to fall behind at some point. What we ought to do, though, is not fall so far behind that we lose sight of the pack and become obsolete and expendable.

Recently, I went to a vCloud Director 9.x Design Workshop. Yes, my friends, vCloud Director is not DEAD. While the software is primarily for service providers, it is still a mighty tool that gives many IT groups the ability to rapidly deploy internal, isolated “pods”. This training got me thinking: why am I not using vCD in my lab?

That’s why, once again, I am updating my homelab. Over the last few years, I’ve torn down and rebuilt my lab numerous times. Each rebuild has wound up taking weeks or months to set back up, just to test something, and more often than not the rebuild wastes most of that time. This time around, I’m going to explore rebuilding my lab around vCloud Director 9.x.

Home Lab

Over the years, I have scaled down from a full 42U rack of Dell PowerEdge servers that consumed massive amounts of power, cooling, and my personal manpower to maintain. That rack hurt my wallet (as well as my time) a lot, which also caused numerous problems with finance (aka: the wife). A while ago, I replaced the Dell PowerEdge servers with a Supermicro Super Server, and this has been working out great for me. As a matter of fact, this past year I made a few hardware modifications to the lab. I wound up running out of space and had to upgrade the hard drives in my Synology box from five 2TB drives to five 3TB drives. To expand the capabilities, additional hardware was acquired: a new Intel NUC was added as a payload target, and another Supermicro Super Server was obtained at the end of the year (Merry Christmas, right?).

Further blog posts will detail my rebuild journey. I fully intend to share what I learn.

VMware Hands On Labs

Pablo Roesch welcomes CEO Pat Gelsinger to HOL 2017
For the last three years, I have had the honor of being a VMware Lab Captain. What does this mean? Well, it means that I volunteer about 300-400 hours of my time to help write, develop, support, encourage, and assist event-goers with taking a product-focused lab at our VMworld, vFocus, and VMUG events.

For the last two years, I have been a co-captain creating the vRealize Automation (vRA) Challenge Lab (HOL-1890). It has been such a fun way to introduce common vRA problems that cloud administrators will come across, and to show those administrators not only how to troubleshoot and identify the problems but how to resolve them. In my day job within Professional Services (PSO), I use this lab to supplement my vRA knowledge transfer.

Now that VMworld 2017 has completed, the labs we worked so hard on are being released to the public. Over the next short while, all 81 labs will be released for you to take on your own time (and at home!). For now, here’s Round 1.

https://blogs.vmware.com/hol/2017/09/vmworld-2017-hands-on-labs-released.html

C.R.I.B.: The Logical Layout of My HomeLab

C.R.I.B. stands for Computer Room In a Box. This is the name I have given my homelab. I’ve used my C.R.I.B. to educate myself, experiment with things, and demo products to my customers.

As you’ve read in previous posts, my homelab has evolved over the years. Currently, I run one physical Supermicro Super Server, which I am going to call pESX, attached to a Synology DS 1512+ array and connected to a gigabit switch. Many of my friends and co-workers have asked how I run everything on one physical box. I’ve tried to explain it and draw it out on a whiteboard. However, unless you’re familiar with nested virtualization, the explanation gets confusing until you’ve seen it drawn out.

I utilize nested virtualization to expand the capabilities of the C.R.I.B. without additional physical hardware. If you are unfamiliar with nested virtualization, it is the ability to run ESX as a virtual machine, which I will call vESX. (William Lam has written numerous articles on how to do it. Just Google “William Lam” and “nested virtualization” if you want more info.) The entire C.R.I.B. is accessible from my home network, which is a lifesaver, as I do not have to work in my office. I can access it via VPN or from the couch.
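If you are curious what the one critical knob looks like in code, here is a minimal pyVmomi sketch that exposes hardware-assisted virtualization (VT-x/AMD-V) to a powered-off VM, the same setting you would tick in the vSphere UI to let a VM run ESXi. The host name, credentials, and VM name are placeholders for my lab.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder lab credentials -- point these at your vCenter or host.
HOST, USER, PWD = "vcsa6.lab.local", "administrator@vsphere.local", "VMware1!"
VM_NAME = "vesx01"  # the nested ESXi VM (must be powered off to reconfigure)

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host=HOST, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    )
    vm = next(v for v in view.view if v.name == VM_NAME)
    view.Destroy()

    # Expose hardware-assisted virtualization to the guest so the VM can
    # itself run ESXi (the "vhv.enable" setting in the vSphere UI).
    spec = vim.vm.ConfigSpec(nestedHVEnabled=True)
    task = vm.ReconfigVM_Task(spec)
    print(f"Reconfigure task started for {VM_NAME}: {task.info.key}")
finally:
    Disconnect(si)
```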

Computer Room In a Box
A pfSense virtual machine (GATEWAY) acts as the firewall and router for the entire virtualized environment, including the nested layer. The pESX host sits on the base network, with a VMkernel port attached to the pfSense virtual machine’s network to allow for manipulation and modification of the firewall rules and network configuration. All traffic in and out of the virtual environment passes through the pfSense VM, whose firewall provides isolation as well as communication between the various networks.

All of the infrastructure virtual machines sit on the first virtualization layer; this is considered the “Management Cluster”. However, this cluster is made up of only the one physical ESX host (pESX). Normally, we would want multiple hosts for HA redundancy. (But this is a lab, and I’m on a budget.) The vESX virtual machines sit in this layer as well, with a direct connection to the base network for access to the iSCSI storage array, and they make up the ESX compute resources of the “Payload Cluster”. These two clusters, the Management Cluster and the Payload Cluster, follow a VMware architectural design best practice.

The infrastructure is made up of your basic VMs: a Domain Controller (DC), a SQL Server (SQL), the Management vCenter (vCSA6), a Log Insight server (LOG), and a VMware Data Protection VM (VDP) for backups. In addition to these VMs, the vRealize Automation (vRA) VMs and the Payload vCenter (PAYVC01) also sit in the Management Cluster. The vRA self-service portal deploys to the PAYVC01 vCenter endpoint, which controls the compute resources of the Payload Cluster.

The Payload Cluster is made up of three virtual ESX hosts (vESX) that provide the various resources (network, CPU, RAM, and storage) for consumption by vRA or other platform products. An Ultimate Deployment Appliance (UDA) VM provides the ability to deploy scripted ESX images, which makes it possible to quickly rebuild the hosts if needed (a sketch of the kind of kickstart files involved follows).
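As a rough illustration of what the UDA serves up, here is a tiny Python sketch that stamps out one minimal ESXi kickstart file per nested host. The IPs, password, and hostnames are invented for the example; the directives themselves (vmaccepteula, install, rootpw, network, reboot) come from VMware’s scripted-install documentation, and a real UDA template would carry more options.

```python
# Placeholder addressing for the three nested hosts; adjust to your lab.
HOSTS = {
    "vesx01": "192.168.1.51",
    "vesx02": "192.168.1.52",
    "vesx03": "192.168.1.53",
}

# Minimal ESXi kickstart; the backslash keeps the network directive on
# one line in the rendered file, as the installer requires.
KS_TEMPLATE = """vmaccepteula
install --firstdisk --overwritevmfs
rootpw VMware1!
network --bootproto=static --ip={ip} --netmask=255.255.255.0\
 --gateway=192.168.1.1 --nameserver=192.168.1.10 --hostname={name}.lab.local
reboot
"""

for name, ip in HOSTS.items():
    # One kickstart file per nested host, ready to drop onto the UDA.
    with open(f"ks-{name}.cfg", "w") as f:
        f.write(KS_TEMPLATE.format(ip=ip, name=name))
    print(f"Wrote ks-{name}.cfg")
```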

This is just the base. I am in the process of deploying NSX into the environment to provide the ability to deploy multi-machine blueprints within vRA. In addition, I intend to explore SRM integration with vRA.