Five tips for no-brainer virtualisation
Virtualisation -- creating logical pools of IT resources not linked to physical devices -- can reduce spending on new server and storage hardware, increase application uptime and simplify IT management. But organisations will only get those benefits if they follow some key steps, according to IT managers, analysts and other industry observers.

1) Understand virtualisation

Some users purchase storage virtualisation for only one purpose without realising the other benefits it can provide them, says Mike Karp, an analyst at Enterprise Management Associates, a research group in Colorado. For example, "It does not necessarily pay to buy file-based virtualisation technology just to do data migration," he says. Storage virtualisation becomes more worthwhile, he says, when it is also used for purposes such as capacity management, load balancing and "information life-cycle management," which moves data to less expensive storage devices as the data becomes less valuable.

Tell that to Thomas Williamson, manager of network operations, who purchased VMware ESX to virtualise 25 servers at the Calvary Chapel in Fort Lauderdale, Florida. He wasn't aware that he could also virtualise the storage for those virtual machines. It was only after purchasing a 4TB Dell/EMC CX300 SAN that he learned from a user's group about DataCore Software's SANmelody storage and virtualisation management software.

Williamson says SANmelody allows him to use regular servers linked to low-cost disk arrays to expand his storage capacity, and to use features such as snapshotting at far less cost than with the Dell/EMC SAN. "The biggest mistake for us was ... our lack of research," he says. "When we found out about DataCore, we [thought] why didn't we wait the extra month; why didn't we possibly research the thought of storage virtualisation?"

Users seeking to virtualise their file-based storage should first consider their objective, says Greg Schulz, founder and analyst at The StorageIO Group, a Minnesota-based industry analysis and consulting firm. If the goal is simply to reduce the number of NAS appliances in use, consolidating them into fewer, larger devices might be easier and less expensive than virtualisation, he says. But if the goal is to get a single view of all the space available on all the NAS appliances in different offices, virtualisation might be the better choice, he says.

Careful research upfront also helps the IT staff set reasonable expectations. According to an advisory note by analyst Andi Mann at Enterprise Management Associates, the most common outcomes of virtualisation are, in order: enabling disaster recovery and business continuity; increasing flexibility and agility; improving server utilisation; reducing downtime; and, only after those objectives have been met, lowering administration and management costs.

2) Create a process

Realising cost savings -- or even just keeping the virtual environment stable and secure -- requires consistent processes for creating, configuring, maintaining and eventually eliminating virtual servers when they are no longer needed.

Because a virtual machine (VM) doesn't require the purchase of new hardware, there is often no formal process to approve its creation, says Stefan Paychere, founder and chief technology officer at Dunes Technologies, a virtualisation-management software vendor in Stamford. That can result in a "sprawl" of virtual servers that are as hard to manage as their physical counterparts.

Botched changes to a physical "host" server are especially dangerous because they can hurt the availability or performance of multiple VMs, says Vick Vaishnavi, director of product marketing at BladeLogic, a datacenter automation software vendor in Lexington. Many IT managers also neglect backup and fail-over plans in the new, less familiar virtualisation environment, he says, even though virtualisation can make it easier for working servers to take over for failed hardware.

Creating sound processes governing the VM life cycle, says Vaishnavi, requires an in-depth understanding of how the physical servers and their computing resources are "divvied up" among VMs and how the VMs "map to" the service-level agreements that dictate the performance and uptime of applications.

Consistent monitoring of the VMs is also necessary to ensure they've been properly configured to comply with corporate, industry or government standards.
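That kind of monitoring can be as simple as comparing each VM's settings against an approved baseline. A minimal sketch, with invented policy fields and VM names purely for illustration:

```python
# Hypothetical corporate baseline every VM is expected to match.
baseline = {"tools_version": "10.3", "vlan": "prod", "backup_enabled": True}

def compliance_gaps(vm_config, baseline):
    """Return the settings where a VM has drifted from the approved baseline."""
    return {k: vm_config.get(k) for k, v in baseline.items()
            if vm_config.get(k) != v}

# Example VM record: correct tools version and backup, but on the wrong VLAN.
vm = {"name": "web01", "tools_version": "10.3", "vlan": "dev",
      "backup_enabled": True}

print(compliance_gaps(vm, baseline))  # {'vlan': 'dev'}
```

Real tools perform the discovery and comparison automatically, but the underlying check is this same baseline diff.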

3) Measure and test

Predicting the hardware required for each VM can be tricky, says Gordon Haff, an analyst at Illuminata, a research and analysis firm in Nashua, New Hampshire. "People look at something like CPU utilization and assume they can put more virtual machines on a piece of hardware than they in fact can."

Paychere says making accurate predictions requires taking five metrics into account for each VM:

-- The number of CPU cycles.

-- The amount of disk space.

-- The level of disk I/O.

-- The amount of memory.

-- The network bandwidth.
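The consolidation ratio is then set by whichever of those five resources runs out first. A rough sketch with hypothetical capacity and demand figures (the numbers and the 80 per cent headroom factor are illustrative assumptions, not vendor guidance):

```python
HEADROOM = 0.80  # assumption: plan to commit only 80% of each physical resource

# Hypothetical physical host capacity across the five metrics.
host = {"cpu_ghz": 32.0, "disk_gb": 2000, "disk_iops": 20000,
        "mem_gb": 128, "net_mbps": 10000}

# Hypothetical per-VM demand across the same five metrics.
vm_demand = {"cpu_ghz": 2.0, "disk_gb": 40, "disk_iops": 800,
             "mem_gb": 8, "net_mbps": 300}

def vms_per_host(host, demand, headroom=HEADROOM):
    """The binding constraint (smallest capacity/demand ratio) sets the VM count."""
    return int(min(host[k] * headroom // demand[k] for k in demand))

print(vms_per_host(host, vm_demand))  # 12 -- CPU and memory bind, not disk
```

Looking only at CPU or disk would suggest far more VMs fit; checking all five metrics exposes the real limit, which is Haff's point.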

Some vendors, such as VMware with its Distributed Resource Scheduler, can dynamically reallocate and balance VMs among physical machines as application needs change, he says. Premigration tools from several vendors also help calculate the best ratio of virtual to physical servers.

4) Take a total systems view

A physical server running multiple virtual machines generates far more network traffic than one running a single application. However, IT decision-makers often don't consider the impact of virtualisation on other parts of the IT infrastructure, says Jay Kramer, vice president of worldwide marketing at iStor Networks, a storage vendor in Irvine, California.

Calvary Chapel's Williamson says he underestimated how server virtualisation would boost his storage needs. "When you're working with server virtualisation, you can throw up a server in a second, and use 15GB to 20GB of storage without even realizing it," he says.

Mann says companies can reduce their appetite for storage by "tightly restricting" the creation of VMs through tools that perform automated discovery, inventory and configuration management of VMs.
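At its core, the discovery step Mann describes is a diff between the VMs actually running and the VMs that went through a formal approval process. A minimal sketch, with invented VM names:

```python
# Hypothetical data: VMs approved through a formal process vs. VMs actually
# discovered on the hosts. The difference is the unmanaged "sprawl".
approved = {"web01", "db01", "mail01"}
discovered = {"web01", "db01", "mail01", "test-jim", "clone-of-web01"}

unapproved = sorted(discovered - approved)
print(unapproved)  # ['clone-of-web01', 'test-jim']
```

Each unapproved VM found this way is storage, licences and network bandwidth being consumed outside the plan.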

Handling the increased traffic to and from VMs might require implementing multipathing, path fail-over and load balancing on the network, so that adequate bandwidth remains available if one network component becomes overloaded or fails.

Maintaining proper security, uptime or redundancy might require creating virtual LANs to keep sensitive traffic from unauthorized eyes or on the fastest network links, says Kramer. The network interface cards that connect servers to the network can also be virtualised to give each guest server its own IP address, and the host bus adapters that link servers to storage arrays can be virtualised to present multiple logical ports to the storage fabric. Such beyond-the-server virtualisation helps preserve the unique configuration elements that ensure security and performance for each VM, Kramer says.

5) Watch the details

Once you have the grand design down, watch out for the implementation details that can unexpectedly increase costs or complexity. One such detail: Different software companies have different licensing policies for VMs, says Eric Foote, chief technical architect at CareTech Solutions in Troy, Michigan, which provides IT services and health information management to health care providers. "If you are on Microsoft's virtualisation software, certain products let you run multiple instances" without any additional licensing charge, says Foote. "If you take the same application package and move it over to a VMware environment, the licensing per instance is counted differently." The result can be significant additional licensing costs that were likely not planned for, he says.
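The arithmetic behind Foote's warning is simple but easy to overlook. A hedged sketch with entirely hypothetical prices, comparing a product licensed per host (unlimited instances) against the same product counted per running instance on another platform:

```python
# Hypothetical figures only -- real licence terms vary by vendor and product.
instances = 10           # VM instances of the application
hosts = 2                # physical hosts they run on
per_host_price = 4000    # assumed: one licence per host covers all its instances
per_instance_price = 1200  # assumed: each running instance licensed separately

per_host_total = hosts * per_host_price              # 8000
per_instance_total = instances * per_instance_price  # 12000

# Moving the same workload to a platform counted per instance adds cost
# that was likely never budgeted.
print(per_instance_total - per_host_total)  # 4000
```

The point is not the specific prices but that the licensing model, not the hardware, can dominate the cost of a migration.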

Migrating applications and servers from physical to virtual machines can also be more complicated than vendors claim, says Foote. In some cases, the operating system or application will need specific patches or updates before being converted to the virtualised environment, he says. Driver and hardware incompatibilities that did not exist in the previous physical environment, but do exist in the new virtualised environment, can also require multiple migration attempts, he says.

Hardware incompatibilities are a particular concern for smaller storage vendors that lack the money or staff to ensure compatibility with popular virtualisation platforms such as VMware's, says Enterprise Management Associates' Karp.

Another detail to keep in mind: IT managers require new skills in the virtualised world, such as an understanding of "the network traffic going in and out of the virtualised environment," Foote says. "You've got virtualised switches and virtualised [network interface cards] and virtualised [local area networks], and you need somebody who understands all of that and can properly implement and support it." The same is true, he says, of storage, where "now you have multiple hosts sharing common volumes" instead of each server having its own dedicated storage.

Less complexity, not more

Virtualisation "should not be adding any more complexity, should not be adding any more management work for you," says The StorageIO Group's Schulz. "It should not be introducing any new bottlenecks; it should not be introducing any new instabilities." By thinking through virtualisation beforehand, he says, IT planners can ensure "you are using the right tool for the right problem at hand."

Robert L. Scheier is a freelance writer based in Boylston, Mass. He can be reached at bob@scheierassociates.com.

