Today, nearly every datacenter is heavily virtualized. According to Gartner, as many as 75% of x86 server workloads in the enterprise datacenter are already virtualized. Yet even with virtual machines growing faster than physical servers industry-wide, most virtual environments are still protected by backup systems designed for physical servers rather than for the virtual infrastructure they run on. Virtualization-focused data protection products do offer better support for virtual workloads, but there are pitfalls in selecting the right approach.

Your production environment has requirements that extend well beyond what you notice during a demo or proof-of-concept. Failing to account for them leads to significant costs in the weeks and months after your initial deployment. Understanding these requirements and their costs will help you evaluate a virtualization-focused data protection system more effectively and achieve better long-term outcomes. This paper discusses five common costs that can remain hidden until after a virtualization backup system has been fully deployed.

ONE: RISING HARDWARE AND NETWORK REQUIREMENTS

It is easy to focus on the software cost of a backup solution, but because the protected data is stored on disk, the storage component becomes a significant factor in the total cost. Backup storage often contains many copies of identical data – think of the standard corporate PowerPoint presentation that everyone keeps a copy of. And since most of your virtual machines are probably built from the same few operating system images, the same data may be stored many times over, needlessly consuming storage. Deduplication technology detects and removes this redundant data, dramatically reducing the amount of disk required and therefore the overall cost of the solution.

Given how widely available deduplication is, not using it at all is an obvious rookie mistake. But there are subtleties in deduplication options that can lead to big costs. Dedicated deduplication appliances are significantly more expensive than commodity disk storage – generally in the range of $2K to $4K per TB. They also achieve their best efficiency when all the data is stored on a single appliance; once the data overflows onto a second appliance, duplication starts to creep back in and the benefit shrinks. This only gets worse as your storage requirements grow.
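To see how those numbers interact, consider the rough comparison below. The prices and deduplication ratios are assumptions chosen only to illustrate the trade-off, not vendor quotes.

    # Rough cost comparison of backup storage options. All prices and
    # deduplication ratios below are illustrative assumptions, not quotes.

    def storage_cost(logical_tb, price_per_tb, dedup_ratio=1.0):
        """Cost to hold `logical_tb` of backup data at a given dedup ratio."""
        physical_tb = logical_tb / dedup_ratio
        return physical_tb * price_per_tb

    logical_tb = 100  # total backup data before deduplication (assumed)

    commodity = storage_cost(logical_tb, price_per_tb=300)                       # raw disk, no dedup
    one_appliance = storage_cost(logical_tb, price_per_tb=3000, dedup_ratio=12)  # single appliance, 12:1
    two_appliances = 2 * storage_cost(logical_tb / 2, 3000, dedup_ratio=8)       # data split, efficiency drops to 8:1

    print(f"Commodity disk, no dedup:       ${commodity:,.0f}")
    print(f"Single dedup appliance:         ${one_appliance:,.0f}")
    print(f"Two appliances, reduced dedup:  ${two_appliances:,.0f}")

Substitute your own figures; the structural point is that the appliance premium only pays off while the deduplication ratio stays high, and splitting data across appliances works against that.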

Likewise, when you move data across the network, WAN acceleration technology can dramatically speed transfers and reduce cost. WAN accelerators work much the same way as storage deduplication appliances: they look for and eliminate redundancies in the data before transmitting it across an expensive, metered network. But, again, external appliances are costly, which drives up the price of the overall solution. And when they are combined with deduplicating storage appliances, you also pay for the inefficiency of multiple systems doing and undoing the same job. Certainly the management console for each of these appliances reports massive savings from its own efforts. Just as certainly, all of that redundant effort is costing you money.
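Conceptually, both technologies rely on the same trick: fingerprint the data and move only what the other side has not already seen. The sketch below illustrates the idea with simple fixed-size block hashing; the block size, hash choice, and toy data are assumptions for illustration only.

    import hashlib

    BLOCK_SIZE = 4096  # illustrative fixed block size

    def fingerprints(data: bytes):
        """Split data into fixed-size blocks and hash each one."""
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            yield hashlib.sha256(block).hexdigest(), block

    def blocks_to_send(data: bytes, remote_hashes: set):
        """Return only the blocks the remote site does not already hold."""
        return [block for digest, block in fingerprints(data) if digest not in remote_hashes]

    # Toy example: the remote site already holds most of this data.
    payload = b"corporate-template" * 10_000
    remote = {digest for digest, _ in fingerprints(payload)}
    changed = payload + b"one new slide"

    to_send = blocks_to_send(changed, remote)
    print(f"{len(changed)} bytes of changed file, {sum(len(b) for b in to_send)} bytes actually transmitted")

When a storage appliance and a WAN appliance each run this kind of process independently, one rehydrates what the other just reduced, which is the duplicated effort described above.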

TWO: INTEGRATED DATABASE SURPRISES

Like most applications that manage large amounts of data, backup and recovery solutions need a database. Fortunately, a database is often bundled with the software, which lowers cost and eases deployment. Yet for many point solutions the database working in the background is Microsoft SQL Server Express. This “Express” edition of Microsoft SQL Server works perfectly well for small deployments and shows well during a product evaluation. But once your deployment grows to support many production systems, many months of retention, and possibly multiple locations, you will need an enterprise-class database. In short, once you move past the proof-of-concept stage, more robust database functionality is required to realize the full value of your data protection implementation.
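For example, SQL Server Express caps each database at 10 GB, and a backup catalog can reach that ceiling faster than you might expect. The back-of-the-envelope estimate below makes the point; the metadata size per restore point is an assumed figure, not a measurement of any particular product.

    # Back-of-the-envelope check of backup catalog growth against SQL Server
    # Express's 10 GB per-database cap. The metadata-per-restore-point figure
    # is an illustrative assumption, not a measurement of any specific product.

    EXPRESS_LIMIT_GB = 10

    def catalog_size_gb(vms, restore_points_per_vm, kb_per_restore_point=64):
        """Estimate catalog size for a given fleet and retention policy."""
        total_kb = vms * restore_points_per_vm * kb_per_restore_point
        return total_kb / (1024 * 1024)

    # 500 VMs, daily backups retained for a year
    size = catalog_size_gb(vms=500, restore_points_per_vm=365)
    print(f"Estimated catalog: {size:.1f} GB "
          f"({'over' if size > EXPRESS_LIMIT_GB else 'within'} the Express limit)")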

Microsoft SQL Server Enterprise is the full-featured counterpart to SQL Server Express, but it carries a much higher price and can quickly change the economics of your software purchase. It is important to know which edition your selected backup solution uses and how it will support the long-term needs of your environment. Two otherwise identical solutions can have dramatically different costs and value depending on the database behind them.

THREE: SLOWING EFFICIENCY AT SCALE

Using an enterprise-class database is not the only thing that changes when you move from proof-of-concept into serious production. Plenty of things that work well in the first demo break down at scale. Just as a deduplicating storage appliance may lose efficiency when it grows beyond a single node, efficiency can also be lost as you scale the number of servers you protect. Most backup systems use dedicated backup servers that connect to your storage devices, letting you scale up performance as your backup storage pool grows. It is hard to tell while you are running only a single backup server, but the performance characteristics of this component vary widely between data protection vendors. A poorly written backup server module wastes CPU cycles and disk I/O bandwidth, forcing you to deploy more backup servers relatively early in your growth. That drives cost in additional software licenses and hardware, as well as in the incremental human effort to manage the extra servers.
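Simple arithmetic shows how quickly this adds up. In the sketch below, the per-server throughput, backup window, and cost figures are all assumed for illustration; the point is that a less efficient backup server module multiplies every one of them.

    # Illustrative arithmetic: how per-server throughput drives total backup
    # infrastructure cost. All throughput and cost figures are assumptions.
    import math

    def backup_server_cost(data_tb, window_hours, tb_per_server_per_hour,
                           license_cost=5_000, hardware_cost=8_000):
        """Servers needed to finish inside the backup window, and what they cost."""
        servers = math.ceil(data_tb / (tb_per_server_per_hour * window_hours))
        return servers, servers * (license_cost + hardware_cost)

    data_tb, window = 200, 8  # 200 TB to protect in an 8-hour window (assumed)

    efficient = backup_server_cost(data_tb, window, tb_per_server_per_hour=5)
    inefficient = backup_server_cost(data_tb, window, tb_per_server_per_hour=2)

    print(f"Efficient module:   {efficient[0]} servers, ${efficient[1]:,}")
    print(f"Inefficient module: {inefficient[0]} servers, ${inefficient[1]:,}")

The human cost of administering the extra servers is not in the formula, but it scales with the server count just the same.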

FOUR: THE “DOUBLE COST” OF MULTIPLE BACKUP SYSTEMS

Virtualization-focused point backup solutions are attractive because virtualization is becoming (or already is) the dominant infrastructure for production applications. But even with this overwhelming preference for the flexibility and efficiency of virtualization, almost every datacenter retains some applications running on dedicated servers. These physical systems need a data protection solution too – and they almost certainly already have one. In fact, in most cases the reason to look at a new backup system is to move the virtual infrastructure onto a data protection platform optimized for virtual servers. That makes absolute sense, but if the new solution cannot also support your physical servers, you end up with a fragmented backup environment. That means double the cost: two sets of backup licenses, two maintenance contracts, two management consoles, two sets of backup hardware, two sets of trained and certified administrators, and two different sets of reports. It also means two sets of protection, retention, and archiving policies, two sets of troubleshooting procedures, two processes for resolving trouble tickets, and two different sets of expectations for your users.
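The arithmetic is blunt but worth writing down. Every annual figure in the sketch below is a placeholder; substitute your own line items and the conclusion rarely changes.

    # Illustrative "double cost" tally for running separate backup systems for
    # physical and virtual servers. Every figure below is an assumed annual cost.

    line_items = {
        "licenses and maintenance": 40_000,
        "backup hardware (amortized)": 25_000,
        "admin training / certification": 10_000,
        "management and reporting effort": 15_000,
    }

    unified = sum(line_items.values())
    fragmented = 2 * unified  # two of everything

    print(f"One platform for physical and virtual: ${unified:,}/year")
    print(f"Two parallel platforms:                ${fragmented:,}/year")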

FIVE: THE HIGH COST OF VENDOR LOCK-IN WITHOUT WORKLOAD PORTABILITY

Virtualization has moved beyond being merely a mainstream architecture in our datacenters; the core of virtualization, the hypervisor, is beginning to commoditize. VMware dominates most datacenters, but Microsoft’s Hyper-V is already owned by everyone with a Windows Server license and plays at least a limited role in most datacenters. Some organizations are running, or actively moving, the bulk of their production on Hyper-V. Hypervisors from Red Hat and Citrix are also realistic options for many organizations. Given all of these choices, financial and operational pressures drive organizations to run multiple hypervisors, and to move some or all workloads from one hypervisor to another. That means an application running on one hypervisor today might be running on a different one tomorrow. This is not to suggest that an application will move back and forth between hypervisors on a daily basis; more likely it will be a one-time, deliberate, and cautious migration. But no matter how well planned that migration is, there will be a period of time when the backups for the application were made on hypervisor X and you might need to recover them on hypervisor Y.

This creates a couple of problems. It goes almost without saying that your data protection solution must cover all the hypervisors you might consider, not just the ones you use today. But to feel confident moving an application from one hypervisor to another, you also want to be able to recover backups made on the old hypervisor directly onto the new one – quickly, and without a manual translation process that could fail or slow you down. Without this capability, you are effectively locked in to your current hypervisor vendor.
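To make that manual translation process concrete: without cross-hypervisor recovery built into the product, moving a restored VMware disk to Hyper-V typically means converting the disk image yourself, for example with a tool such as qemu-img. The wrapper below sketches that manual step (file names are placeholders); it is exactly the slow, failure-prone detour that a portable recovery capability eliminates.

    # Sketch of the manual disk-format translation you are forced into when a
    # backup made on one hypervisor must be recovered onto another. File names
    # are placeholders; a backup product with cross-hypervisor recovery would
    # make this step unnecessary.
    import subprocess

    def convert_vmdk_to_vhdx(src: str, dst: str) -> None:
        """Convert a restored VMware disk (VMDK) into a Hyper-V disk (VHDX)."""
        subprocess.run(
            ["qemu-img", "convert", "-f", "vmdk", "-O", "vhdx", src, dst],
            check=True,
        )

    if __name__ == "__main__":
        convert_vmdk_to_vhdx("restored-app-server.vmdk", "restored-app-server.vhdx")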

LOOK BEYOND THE DEMO

Point virtualization backup solutions look good during the demo, but once you try to integrate them into your enterprise strategy, they create cost, complexity, and risk. When evaluating a data protection solution, consider:

  • The monetary cost of deduplicating storage and its efficiency at scale, as part of the overall solution budget
  • The monetary cost of any additional database licenses that may be required
  • The rate at which hardware components of the backup system scale, and the monetary and human costs of too many backup servers
  • The monetary, human, and operational cost of maintaining separate systems for physical and virtual environments
  • The risk of lock-in to a component that should be a commodity, and the opportunity to build more agility into your core IT operations

Data protection can seem like fairly mature, even boring, technology, but it is the foundation upon which your most important asset – long-term access to your data – rests. Get this wrong, and your financial and people costs will scale unpleasantly with your data growth. Get this right, and you will build a reliable, cost-efficient foundation that enables agility without sacrificing your reputation for reliability.
