When it comes to data protection and business continuity, what is your goal? That’s easy – a solution you know will work every time, protecting everything you have in your data center with absolutely zero downtime and zero data loss.
Backup tools sit in a very strategic location. They touch and manage all corporate data and the majority of applications. With complete access to the lifeblood of a company, backup providers are building ways for corporations to not just protect the data at hand, but to use their reach for capabilities far beyond data protection.
Selecting the right solution is about more than basic backup. Before choosing a backup vendor, understand the wide variety of offerings, what to look for, and the potential gaps in coverage, so you can put your organization in the best position to achieve the lofty goal of total protection all the time.
We will provide guidance on each of these topics. In each section, we have created a checklist for you to use to ensure your solution is the best the market can offer. And, finally, on the last page, we’ve added a convenient chart that can help you create a shortlist of leading features.
SECTION 1: PROTECT EVERYTHING IN YOUR DATACENTER
Your environment is complicated, but protecting it doesn’t have to be. You need to protect everything in your data center, whether it is physical or virtual, deployed on premises, at a remote location, or in the cloud. In addition, new technologies are emerging, such as hyperconverged infrastructures like Nutanix and Cisco UCS. A simple, all-in-one approach to backup, recovery automation, and cloud continuity, built to handle every style of computing, makes IT administrators more productive because they can do more in less time. Today’s leading data protection solutions protect diverse environments and come pre-integrated and optimized to deliver high-speed, error-free performance.
If you were to build your own backup and recovery solution, you would probably have to integrate dozens of different pieces of software and hardware: servers, storage, deduplication, OS, security, analytics, search, monitoring, and more. Unfortunately, many vendors ask you to take that approach by partnering with other suppliers rather than building their own total solution. The amount of time you will have to spend on protection is directly proportional to the number of servers and components you have to install, manage, and maintain. Time is money. Newer vendors are taking the integrated approach specifically to reduce the time and money spent on continuity. Visionaries in IT are deploying single, complete solutions purpose-built for data and application protection – in other words, an appliance.
A purpose-built, all-in-one appliance is easier to install, upgrade, and manage. Today’s leading appliances are able to protect all computing platforms, including virtual systems, physical Windows and Linux systems, legacy systems, and cloud workloads deployed in hyper-scale clouds such as Amazon AWS and Microsoft Azure. A modern, intuitive user experience is a priority: it should always be possible to operate your backup system without referring to a manual so substitutes or managers can stand in when primary admins are unavailable.
SECTION 2: GAIN QUICK RECOVERY TIMES (RTO)
While instant recovery with zero downtime is ideal, putting in place the resources to meet this objective may not be affordable for every application in every organization. Organizations need to inventory their applications and triage them by their importance to the functioning of the business. Invest more backup capability in mission-critical applications than in apps that can be offline for a short while. The following features should be considered to support mission-critical apps.
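The triage step can be sketched as a simple tiering exercise. The application names, RTO numbers, and tier thresholds below are purely illustrative assumptions, not recommendations from any vendor:

```python
# Hypothetical application inventory: each entry maps an app to its
# maximum tolerable downtime (RTO) in minutes, agreed with the business.
APP_INVENTORY = {
    "order-processing": 0,    # mission-critical: needs near-instant recovery
    "email": 60,
    "reporting": 480,         # can be offline for most of a working day
}

def tier_for(rto_minutes: int) -> str:
    """Assign a protection tier based on tolerable downtime."""
    if rto_minutes <= 5:
        return "tier-1: instant recovery / replication"
    if rto_minutes <= 120:
        return "tier-2: frequent backups, fast restore"
    return "tier-3: standard daily backup"

for app, rto in APP_INVENTORY.items():
    print(f"{app}: {tier_for(rto)}")
```

The point of the exercise is not the exact thresholds but the discipline: every application gets an explicit tier, so protection spending follows business importance.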
SECTION 3: ACHIEVE NEAR-ZERO DATA LOSS (RPO)
Once you have separated mission-critical applications from those that don’t need near-instantaneous recovery, you are in a position to set recovery point objectives (RPOs) for all classes of apps. A recovery point objective defines how much data you can afford to lose – in effect, the maximum acceptable gap between the last recoverable copy and the moment of failure. Here are features and functions that can help you define and deliver on your RPOs.
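With scheduled backups, the achievable RPO is bounded by how often you back up: a failure just before the next scheduled job loses everything since the last one. A minimal illustration (the schedule times below are hypothetical):

```python
from datetime import datetime, timedelta

def data_loss_window(last_backup: datetime, failure: datetime) -> timedelta:
    """Data written after the last successful backup is lost in a failure."""
    return failure - last_backup

# If nightly backups run at 01:00 and a failure hits at 17:30,
# everything written in the intervening 16.5 hours is gone:
loss = data_loss_window(datetime(2024, 5, 1, 1, 0), datetime(2024, 5, 1, 17, 30))
print(loss)  # 16:30:00
```

This is why tighter RPOs require more frequent backups, continuous data protection, or replication rather than simply faster restores.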
SECTION 4: CLOUD-BASED DISASTER RECOVERY
Cutting-edge enterprises, as well as organizations doing business at a single location, are increasingly using the cloud as their disaster recovery location. Regularly scheduled backups are stored in the cloud at low cost and are isolated from accidental deletion or ransomware attacks. These cloud-based backup files should serve two purposes: first, they are preserved to meet data compliance mandates; second, they should also be usable for disaster recovery.
SECTION 5: TESTING, REPORTING AND SUPPORT
Now that you have set your RPO and RTO goals, you need to be confident that they can be met. You also need to prove to others – senior management, auditors, and regulatory agencies, to name a few – that you have verifiable plans in place to execute your recovery program. You need confidence that your programs will work in an emergency, and reports that back you up.
- TEST, TEST AND TEST AGAIN
The only way to know whether you can recover in an emergency is to test regularly, and again each time you make a change to your infrastructure. New, intelligent tools can greatly ease your concerns by automatically verifying that all components are in place and capable of recovering, or by telling you what is broken so it can be fixed. Additionally, you get an easy-to-read, formal report certifying that your disaster recovery solutions have been tested and showing the results. These tools automate testing so you know exactly how fast, and to what point, your data and applications are protected, without requiring manual work on your part.
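The idea behind these tools can be sketched as a loop that restores each application’s latest backup into an isolated target, runs a health check, and records a pass/fail result for the report. Everything here – the function names, the stand-in restore and verify steps – is an illustrative assumption, not any vendor’s actual API:

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    app: str
    recovered: bool
    detail: str

def run_recovery_tests(apps, restore_fn, verify_fn):
    """Restore each app's latest backup and verify it responds.
    restore_fn and verify_fn stand in for vendor-specific operations."""
    results = []
    for app in apps:
        try:
            target = restore_fn(app)   # e.g. boot the backup in a sandbox VM
            ok = verify_fn(target)     # e.g. run a service health check
            detail = "healthy" if ok else "health check failed"
            results.append(TestResult(app, ok, detail))
        except Exception as exc:
            results.append(TestResult(app, False, f"restore failed: {exc}"))
    return results

# Illustrative stand-ins for a real restore/verify pipeline:
results = run_recovery_tests(
    ["crm", "billing"],
    restore_fn=lambda app: f"{app}-sandbox",
    verify_fn=lambda target: True,
)
for r in results:
    print(f"{r.app}: {'PASS' if r.recovered else 'FAIL'} ({r.detail})")
```

The value of automating this shape of loop is that a failed restore becomes a recorded result in a report rather than a surprise during an actual disaster.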
- TEST AND DEV ENVIRONMENTS
Using advanced automated provisioning tools, you can test beyond just application recovery. Organizations need to know that new software versions and patches will not cause performance interruptions, by testing them prior to deployment on production servers. Automated provisioning tools can now spin up test sandboxes that exactly match your production environment because they are created from your most recent backups. If problems are found, they can be pinpointed and solved. Once all testing is finished, the entire test environment can easily be torn down.
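The spin-up/tear-down lifecycle described above fits a context-manager pattern: provision from a backup, run tests, and guarantee cleanup even if a test fails. This is only a sketch with placeholder provisioning steps; nothing below is a real provisioning API:

```python
from contextlib import contextmanager

events = []  # records the lifecycle, for illustration only

@contextmanager
def sandbox_from_backup(backup_id: str):
    """Provision an isolated test environment from a backup and
    guarantee teardown even if a test inside it fails."""
    env = f"sandbox-{backup_id}"           # placeholder for a real clone/boot step
    events.append(f"provisioned {env}")
    try:
        yield env
    finally:
        events.append(f"tore down {env}")  # teardown always runs

with sandbox_from_backup("2024-05-01-nightly") as env:
    # run patch or upgrade tests against the clone of production
    events.append(f"tested patch in {env}")
```

The `try`/`finally` is what makes the pattern safe: the sandbox is always torn down, so failed experiments never leave clones of production running.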