Virtualization Home Lab Extreme

Most, if not all, labs today run virtual servers. Virtualization has allowed me and many others to run software and services that were never within reach 10 or 15 years ago. I would not say that my VMware cluster is in any way extreme; it is small, but it runs over 50 servers.

We all love our VMware (or Hyper-V, Xen, KVM, etc.) stuff. Creating a Windows Server 2012 R2 server in 10 seconds from a template is great for labs, where you really don't have a lot of time for installing the OS and adding updates and patches. I will not go into great detail about how VMware works; there are plenty of good articles about virtualization available online.
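As a concrete example, here is roughly what a template clone looks like when scripted with pyVmomi (the Python vSphere SDK). The vCenter address, credentials, template name and new VM name are just placeholders for whatever you run in your own lab:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder vCenter address and credentials for a lab setup
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

template = find_by_name([vim.VirtualMachine], "win2012r2-template")  # placeholder names
pool = find_by_name([vim.ResourcePool], "Resources")

spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(pool=pool),
                        powerOn=True, template=False)

# vCenter clones the template asynchronously and returns a task we could wait on
task = template.CloneVM_Task(folder=template.parent, name="lab-dc01", spec=spec)
```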

Building a great cluster

Running VMware in a cluster is great. It allows VMs to fail over to the other node(s) if a server breaks down, or if you just need to do maintenance. In a lab it can also be very useful, as you will change things that you would never touch on a production system. Even though you are running a lab, you probably don't want all your DNS servers to stop working because you made some storage changes and one of your ESX hosts lost all its paths to your storage infrastructure 🙂 It has happened on several occasions that one thing leads to another and you finally end up in a big mess, just because you didn't think things through. Of course (once again) that's the whole purpose of doing labs, but losing your only VMware ESX host as a result of an unexpected, idiotic configuration might not be the best course of action 🙂 One way to limit the damage is to keep redundant VMs, like a pair of DNS servers, on separate hosts, as sketched below.
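A rough pyVmomi sketch of adding a DRS anti-affinity rule for that purpose; the vCenter address, cluster name and VM names are placeholders for my lab:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder vCenter address and credentials
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name([vim.ClusterComputeResource], "LabCluster")   # placeholder
dns_vms = [find_by_name([vim.VirtualMachine], n) for n in ("dns01", "dns02")]

# DRS anti-affinity rule: never place the listed VMs on the same host
rule = vim.cluster.AntiAffinityRuleSpec(name="keep-dns-apart", enabled=True, vm=dns_vms)
spec = vim.cluster.ConfigSpecEx(rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```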

Hardware

So I'm running two hosts, and they both use the same CPUs (you can look at the details in the hardware post), as this simplifies failover within the cluster: you don't have to play around with EVC (the cluster CPU compatibility mode) to mask differences between CPU generations.
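If you are curious whether EVC is enabled (or needed) on a cluster, something like this pyVmomi snippet prints the EVC mode and the CPU model of each host; the vCenter details are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder vCenter address and credentials
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    # currentEVCModeKey is empty when EVC is not enabled on the cluster
    print(cluster.name, "EVC:", cluster.summary.currentEVCModeKey or "disabled")
    for host in cluster.host:
        print("  ", host.name, host.hardware.cpuPkg[0].description)
view.Destroy()
```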

Networking

I'm running a dvSwitch (distributed switch). It does not really give me a lot of advantages since I only have two hosts, but it gives me LACP support, which I used before upgrading to 10 Gigabit when I bundled two 1GbE interfaces (see the networking blog entry for more details). If you have not played around with distributed switches in VMware you can be happy, as they can be a pain in the ass sometimes, especially if you're doing networking labs and need to move port groups around.

A functioning vCenter is also required to work with distributed switches. So let's say you have to do maintenance on your vCenter server for whatever reason and you reboot it. All your VMs will continue to work, as the distributed switch configuration is copied to all your ESX hosts. But if for some reason vCenter does not start and you need to modify an existing VM (perhaps your virtual firewall dies at the same time and you need to quickly configure a new one), that's not possible: you cannot change any of the networking parts without vCenter. I've been in situations where my entire networking stack has exploded in my face because of a misconfiguration on the network side, leaving my vCenter without network connectivity to my ESX hosts, and suddenly I can't modify any of my VMs; the only way back is to restore the hosts' networking configuration.

Generally VMware is quite clever: if you modify a port group or a distributed switch and the ESX host loses connectivity to vCenter as a result of the faulty configuration, it will roll back to the last known good state. However, that does not always work. I generally recommend keeping management on separate NIC(s) that belong to a standard vSwitch. When upgrading to 10G NIC adapters on my ESX hosts I decided to keep two Gigabit ports for management (VMkernel).
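To double-check that the management VMkernel adapters really live on a standard vSwitch and not on the dvSwitch, a quick pyVmomi pass over a host's network config works; the hostname and credentials below are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect

# Connect straight to the ESX host itself; placeholder hostname and credentials
si = SmartConnect(host="esx01.lab.local", user="root", pwd="password",
                  sslContext=ssl._create_unverified_context())
dc = si.content.rootFolder.childEntity[0]            # "ha-datacenter" on a standalone host
host = dc.hostFolder.childEntity[0].host[0]

# Port groups that live on standard vSwitches on this host
standard_pgs = {pg.spec.name for pg in host.config.network.portgroup}

for vnic in host.config.network.vnic:                # every VMkernel adapter
    if vnic.portgroup and vnic.portgroup in standard_pgs:
        backing = "standard vSwitch portgroup '%s'" % vnic.portgroup
    else:
        backing = "distributed switch port"          # managed through vCenter only
    print(vnic.device, vnic.spec.ip.ipAddress, "->", backing)
```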

I have not yet migrated iSCSI to my 10G adapters. This is mainly
