You’ve Heard the Term, Now Understand the Term: Virtualization
Virtualization is a process that allows multiple operating system environments (typically servers) to run on a single hardware platform. It’s comparable to your personal computer running multiple applications at the same time.
In traditional setups, the capacity of your server hardware (CPU, memory, etc.) is often underutilized. You can assign your servers many roles, such as serving email and websites or file and print sharing, but for security or supportability reasons, you often need to keep these roles separate from one another. Traditionally, this means buying more servers, which translates into more power, space, noise, cooling, and maintenance costs. These costs are constant regardless of how much capacity is actually being used, so there’s lots of room for waste. For example, a computer that serves websites may be using only 10% of its CPU, memory, and disk capacity.
Virtualization allows you to tap into this unused hardware capacity and lower your hardware costs and power consumption. Who doesn’t like that proposition?
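To put rough numbers behind that proposition, consider the back-of-the-envelope sketch below. The server count, utilization figure, and yearly cost are hypothetical placeholders chosen for illustration, not measurements from any real environment.

```python
import math

# All figures here are hypothetical, for illustration only.
servers = 5             # physical servers, one role each
avg_utilization = 0.10  # each runs at ~10% of capacity
cost_per_server = 1200  # assumed yearly power/space/cooling cost ($)

used = servers * avg_utilization           # total work = 0.5 servers' worth
headroom = 0.60                            # keep the consolidated host <= 60% busy
hosts_needed = math.ceil(used / headroom)  # one host can carry all five roles

print(f"Capacity actually in use: {used:.1f} of {servers} servers")
print(f"Hosts needed after consolidation: {hosts_needed}")
print(f"Yearly cost: ${servers * cost_per_server:,} -> ${hosts_needed * cost_per_server:,}")
```

Even with generous headroom, five lightly loaded boxes collapse into one in this toy scenario, which is exactly where the power and maintenance savings come from.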
Virtualization can work in one of two ways.
- You can install virtualization software within your computer’s operating system. Popular desktop examples include VMware Fusion, which allows Mac users to run OS X and Windows environments simultaneously, and Microsoft’s “XP Mode,” a feature of the Windows 7 operating system that allows continued use of older software.
- You can use a “hypervisor,” a small-footprint piece of software that is installed directly on the server hardware in place of a full operating system. Popular hypervisor platforms for servers include VMware ESXi and Microsoft Hyper-V.
In both cases, software creates a sort of “hardware sandbox” in which the virtualized computing environment runs. This software translates requests from the virtual environment, or virtual machine (VM), to the physical one, relaying instructions from the encapsulated operating system to the physical CPU(s), memory, disks, and network adapters. A set of parameters determines what share of the available resources each virtual machine receives.
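As a loose sketch of what those allocation parameters look like, the toy model below carves a host’s CPU and memory into percentage shares for each VM. It mimics the bookkeeping idea only; the host specs and share values are invented, and real hypervisors such as ESXi and Hyper-V use far more sophisticated schedulers.

```python
# Toy model of share-based resource allocation (invented numbers).
HOST = {"cpu_ghz": 16.0, "ram_gb": 32.0}  # hypothetical host capacity

# Each VM gets a fixed percentage of the host's resources;
# the unassigned 30% is left as headroom for the hypervisor and spikes.
vm_shares = {
    "mail-server": 0.30,
    "web-server":  0.20,
    "file-server": 0.20,
}

for name, share in vm_shares.items():
    cpu = HOST["cpu_ghz"] * share
    ram = HOST["ram_gb"] * share
    print(f"{name}: {cpu:.1f} GHz CPU, {ram:.1f} GB RAM ({share:.0%} share)")
```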
And the million-dollar question: Why isn’t everyone doing it?
In a word: Cost.
Most businesses have relatively limited IT budgets, and virtualization can come with a hidden price tag. Although Microsoft and VMware each offer a “free” version of their hypervisor, taking full advantage of either requires a paid license and a good bit of professional know-how. These technologies can also be difficult to integrate with existing backup processes, especially for small businesses that have invested in tape-based backup systems.
When power is lost, most small businesses with physical servers rely on signaling from UPS (uninterruptible power supply) systems to alert servers to initiate a graceful shutdown. This becomes an issue with virtual servers: the hypervisor sits between the server operating systems and the hardware, but because it is not a full-featured operating system in itself, it cannot process the shutdown signal from the UPS data cable. The workaround for this problem (buying a network-attached UPS that signals software running within the VMs) also adds to the system’s cost.
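To make that workaround concrete, here is a minimal sketch of the kind of agent that might run inside each VM, listening for an “on battery” message from a network-attached UPS and triggering a clean shutdown. The port number, message format, and shutdown command are hypothetical stand-ins; a real deployment would use the UPS vendor’s own client software.

```python
import socket
import subprocess

# Hypothetical in-guest shutdown agent for a network-attached UPS.
# The port and the "ON_BATTERY" message are invented stand-ins,
# not any real UPS vendor's protocol.
UPS_PORT = 3493

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", UPS_PORT))  # listen for UPS status broadcasts

while True:
    data, _addr = sock.recvfrom(1024)
    if data.strip() == b"ON_BATTERY":
        # Power has failed: shut this VM down gracefully so the
        # host can power off before the UPS battery runs dry.
        subprocess.run(["shutdown", "-h", "+1", "UPS reports power loss"])
        break
```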
The “thin” nature of hypervisors also presents unique problems in a small business environment. Because a hypervisor offers little in the way of a local console, day-to-day administration happens remotely over the network; if one of your virtual servers breaks down, you can no longer walk over to the machine, plug in a monitor and keyboard, and troubleshoot at a physical console the way you could with a traditional server.
Lastly, consolidating your servers through virtualization can be akin to putting all of your eggs in one basket. When a hardware failure occurs, every virtual machine on that host goes down with it. Well, you get the metaphor.