VMware vSphere 4
- 25 May, 2009 12:15
VMware vSphere 4, out today, is a big release, with plenty of new features and changes, but it's not your run-of-the-mill major update. The new features, which range from VM clustering to agentless VM backup, are especially significant in that they may mark the moment when virtualisation shifted from the effort to provide a stable replica of a traditional infrastructure to significantly enhancing the capabilities of a virtual environment.
In short, if you're running a VMware infrastructure, life should get easier. For anyone who's ever tried to provide rock-solid OS-based clustering services, the new VM clustering feature, called Fault Tolerance, should be a vast improvement. Hot Add of CPUs and RAM has never really been an option for most shops, but it suddenly is (with the right OS, of course). These moves show that VMware is still pushing the virtualisation envelope.
Considering the scope of the upgrade, perhaps "VMware Infrastructure" did warrant a new name, but let's hope that VMware stops there. The company has a bad habit of changing the names of its products every few months, and it's getting tiresome trying to explain why VirtualCenter, vCenter, VI3, V3i, ESX, ESXi, and now vSphere are all basically the same product or parts of the same product suite.
Along with new features and improvements, vSphere brings more hardware resources to VMs. You can now add up to eight vCPUs to a single VM; previously, VMs were limited to four. The new RAM limit is 255GB, up from 64GB. The ESX hosts themselves can now support up to 64 cores and 512GB of RAM. Also — though I haven't had a chance to test this — it appears that you can map raw PCI devices to a specific VM.
VMware is also making some noise about performance enhancements for key technologies, such as a claimed 20 percent improvement in Microsoft SQL Server throughput and a 10x performance bump for iSCSI. That last claim may be just a bit exaggerated, as it appears to be based on the support of 10Gig iSCSI interfaces, rather than an improvement in VMware's internal iSCSI software initiator, which has always been a bit sluggish.
Speaking of performance, the performance graphs and data available in vSphere are much improved over the current release, with a more intuitive layout and better overall access to specific information regarding the performance of a VM or a host.
Inside the Sphere
I've had vSphere 4 (otherwise known as ESX 4.0 and vCenter 4.0) running in the lab for a few days now. It comprises the same parts as VI3, with ESX or ESXi running on the hosts, and vCenter running the show. Installation of these components is the same as it's always been, only now you're prevented from installing vCenter on an Active Directory domain controller, which is arguably a good idea. In fact, VMware now recommends running vCenter as a VM.
My early testbed comprised several different boxes, with a mix of Intel- and AMD-based servers, including an HP ProLiant DL580 G5 and a Sun Fire X4600 M2. I installed vCenter as a VM running under Windows Server 2008, alongside a separate domain controller, and built myself a nice little virtual infrastructure.
As I mentioned, the new Fault Tolerance feature has the ability to change lives. In a nutshell, this allows you to run the same VM in tandem across two hardware nodes, but with only one instance actually visible to the network. You can think of it as OS-agnostic clustering. Should a hardware failure take out the primary instance, the secondary instance will assume normal operations instantly, without requiring a VMotion.
The most significant penalty for this capability is that it requires the same VM footprint to run on both hardware nodes, so if it's a VM with 4GB of RAM, you'll be using 4GB of RAM on each hardware node during normal operation. However, that's small potatoes for running mission-critical virtual servers with this level of redundancy.
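The primary/secondary model behind Fault Tolerance can be sketched in a few lines. This is a conceptual illustration only, not the vSphere API; the host and VM names are invented. One instance serves the network while an invisible lockstep copy stands by, and a host failure promotes the copy instantly, with no VMotion and no reboot:

```python
# Conceptual sketch of Fault Tolerance's primary/secondary pairing.
# Not VMware code; host names and the class are illustrative only.

class FTPair:
    def __init__(self, vm_name, primary_host, secondary_host):
        self.vm_name = vm_name
        self.primary = primary_host      # instance visible to the network
        self.secondary = secondary_host  # hidden lockstep copy

    def active_instance(self):
        return self.primary

    def host_failed(self, host):
        # Failover: promote the secondary in place, then run unprotected
        # until a new secondary can be spawned elsewhere.
        if host == self.primary:
            self.primary, self.secondary = self.secondary, None
        elif host == self.secondary:
            self.secondary = None

pair = FTPair("db01", "host-a", "host-b")
pair.host_failed("host-a")
print(pair.active_instance())  # host-b has taken over
```

Note that, as with the real feature, the failed-over VM is briefly unprotected until a replacement secondary is established.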
Host Profiles is also a fantastic addition, if perhaps overdue. Host Profiles allows admins to build a hardware host system and capture the configuration to be applied to subsequent hardware nodes. Rather than having to manually configure new nodes or even resort to scripted modifications to ESX's internal configuration files, you can now take a single hardware node and propagate its settings to other nodes. In addition, you can check for nodes that may not comply with the profile. This makes the creation and distribution of ESX hosts far simpler, once you've waded through the enormous profile management configuration tree.
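The compliance-checking half of Host Profiles boils down to a diff between a reference configuration and each host's actual settings. A minimal sketch of that idea, with entirely invented setting names (the real profile tree is far larger):

```python
# Illustrative sketch of Host Profiles compliance checking -- not the
# vSphere API. Capture one host's settings as the reference profile,
# then flag hosts that drift from it. Setting names are hypothetical.

reference = {"ntp": "pool.ntp.org", "vswitch_mtu": 1500, "syslog": "log01"}

hosts = {
    "esx01": {"ntp": "pool.ntp.org", "vswitch_mtu": 1500, "syslog": "log01"},
    "esx02": {"ntp": "pool.ntp.org", "vswitch_mtu": 9000, "syslog": "log01"},
}

def compliance_drift(profile, config):
    """Return the settings where a host deviates from the profile."""
    return {k: config.get(k) for k, v in profile.items() if config.get(k) != v}

for name, cfg in hosts.items():
    drift = compliance_drift(reference, cfg)
    print(name, "compliant" if not drift else f"non-compliant: {drift}")
```

Applying the profile, rather than merely checking it, would then push the reference values back onto the drifting host.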
While it's hot
Hot Add lets you add not only RAM and CPU but also virtual HBAs and network interface resources to supported VMs on the fly. For instance, you might be able to add another 2GB of RAM and two vCPUs to a Windows Server 2008 instance without even rebooting the box. The operative phrase here is "supported VMs." Hot adds are obviously not supported by most x86 operating systems, but this feature goes a long way toward adapting operating systems to the virtual environment rather than the other way around.
The same can be said for vNetwork Distributed Switch, the new facility to simplify provisioning and administration of VM networks. It allows for the integration of third-party virtual switches, like Cisco's Nexus product, and is a key part of Cisco's Unified Computing initiative.
There's a new format for grouping VM application stacks too, called vApps. For instance, if you have an application that has a few front-end servers and a back-end database server, you can group them under a "vApp" umbrella and manage them as a single entity, even specifying which servers need to be running before others are started, and a few other organisational bits.
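The start-ordering part of a vApp is essentially a dependency-ordered boot. A rough sketch of the idea, using invented server names and plain Python rather than anything VMware ships: the database comes up first, and the front ends that declare a dependency on it start only afterwards.

```python
# Sketch of vApp-style start ordering: each VM lists the VMs it must
# wait for. Names and the algorithm are illustrative, not VMware's.

deps = {
    "web1": ["db"],  # front ends wait for the database
    "web2": ["db"],
    "db": [],
}

def start_order(deps):
    """Return a boot order that satisfies every VM's dependencies."""
    started, order = set(), []
    while len(order) < len(deps):
        progressed = False
        for vm, needs in deps.items():
            if vm not in started and all(n in started for n in needs):
                started.add(vm)
                order.append(vm)
                progressed = True
        if not progressed:
            raise ValueError("circular dependency in vApp")
    return order

print(start_order(deps))  # db first, then web1 and web2
```

Shutdown would simply walk the same order in reverse.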
There's also an older VMware concept updated for the enterprise environment: vStorage Thin Provisioning. Until now, it hasn't been possible to create an ESX VM without allocating all the virtual disk space at VM creation. Thus, a VM with a 200GB virtual disk will consume 200GB of real disk space on the selected storage device. With thin provisioning, that virtual disk can be created at its full size but won't use more physical storage than the VM is actually consuming. Thus, the 200GB virtual disk that's 50 percent full will use only 100GB on the storage device. This capability has been around for a while in VMware's desktop products and has finally made its way to the datacentre product.
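The savings are easy to work out from the example above. A quick back-of-envelope comparison of thick versus thin allocation for that 200GB, half-full disk:

```python
# Back-of-envelope thick vs. thin allocation from the example above:
# a 200GB virtual disk that is 50 percent full.

provisioned_gb = 200
used_fraction = 0.5

thick_gb = provisioned_gb                 # thick: full size reserved up front
thin_gb = provisioned_gb * used_fraction  # thin: only blocks actually written

print(f"thick: {thick_gb}GB on the datastore")
print(f"thin:  {thin_gb:.0f}GB on the datastore")
```

The flip side, of course, is that thin-provisioned datastores can be oversubscribed, so free space needs watching as the VMs fill their disks.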
For smaller shops, the new Data Recovery feature will come in handy; it promises to simplify VM backups while providing backup to disk and recovery at the file and image level.
But how does vSphere feel — how does it drive? I haven't had all that long to work with it, but I've noticed that vSphere's management seems to be snappier than the current version's. Anyone who's worked with VMware vCenter and ESX knows that it's not uncommon for certain tasks to hang in the status centre seemingly forever, maybe stuck at 40 percent complete for 10 minutes before finally lunging ahead. There's very little response from the management console during such events, and error reporting has always been extremely sparse and not easily digested. vSphere seems to improve on these issues, with faster actions, better VM integration, and an overall more fluid feel.
Time will tell if this is actually the case in production. My pre-release version of vSphere had a few problems, like the non-existent ESX host Web UI and a complete failure to successfully PXE boot a new ESX host. Being pre-release code, that's not terribly surprising, but I certainly hope those functions are present in the official release.
On the plus side, I noted improvements in host interaction with storage, such as automatic LUN discovery when adding an iSCSI target, and an overall streamlining of some previous host configuration oddities. There's also an included utility designed to automagically upgrade ESX 3.5 hosts to the new version. I'm certain there's more to discover about vSphere, and I'll detail the ins and outs in an upcoming review.
Until then, I can say that my brief experience with vSphere has been positive, and the features offered are taking things to the next level. They also would seem to highlight just how far behind VMware's competitors really are.