Many IT organizations find themselves in a tough situation when it comes to workload recovery requirements: The time it takes them to restore mission-critical data or application services is significantly longer than their SLAs specify. When business users depend on high-volume, high-priority data and essential software being highly available to support real-time analytics, customer experiences and digital transformation, unplanned data center outages and file corruption can be damaging, or even fatal to the business, causing data loss and halting transactions. It's equally dangerous when ransomware attacks unpatched servers and holds production data hostage.
AVAILABILITY FOR COST-SAVINGS, COMPETITIVE EDGE AND COMPLIANCE
When a company doesn't consider always-on, mission-critical data Availability a core foundation for driving revenue and growth, it reduces its ability to compete against businesses with nimbler data defense strategies. Go beyond two hours of critical data downtime, and reputation, customer satisfaction and more all will suffer. Clients will look to work with other businesses that can do the job they need better and with greater consistency.
DETERMINING APPROPRIATE AVAILABILITY ENVIRONMENTS
Getting the house in order to avoid unnecessary losses and costs related to backup and recovery starts with consolidating data silos—mainstream, file-based workloads and cloud-native, object-based applications—into a single hub. Simplification and standardization clear the way for thinking of data Availability as an asset, not another cost-center component.
Certainly, being able to store and rapidly restore large data sets on physical and virtual machines strengthens the capabilities of the always-on enterprise. Since data may reside inside VMs either in the cloud or on-premises, or flow across these environments, there's a need to replace siloed views with single-pane-of-glass views. That said, use cases—daily analytics vs. archival, for instance—will help determine whether the cloud or on-site flash arrays make the most sense for particular backup, restore and retention operations.
Using the cloud to archive long-term data that must be protected against loss and remain accessible in the future may make more sense than using on-premises flash storage from a space, cost and SLA perspective. Using the cloud for other business continuity purposes, however, can be a more costly option because of the way that cloud costs are operationalized. That can greatly impact the bottom line. Recovery times for backed-up data stored in a public cloud also may be slower.
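The trade-off described here can be made concrete with a simple cost model. The sketch below is purely illustrative: the function names, parameters and all rates are hypothetical placeholders, not actual vendor pricing. It contrasts the cloud's operationalized (pay-as-you-go) cost structure—monthly storage fees plus per-terabyte egress charges on restore—with on-premises flash, where the cost is dominated by an up-front capital purchase and retrieval is effectively free.

```python
def cloud_archive_cost(tb_stored, months, rate_per_tb_month,
                       tb_retrieved, egress_per_tb):
    """Operationalized (opex) model: recurring storage fees accrue every
    month, and each terabyte restored incurs an egress charge."""
    return tb_stored * months * rate_per_tb_month + tb_retrieved * egress_per_tb


def on_prem_flash_cost(tb_stored, months, capex_per_tb, monthly_opex):
    """Capitalized (capex) model: hardware is bought up front; running
    costs (power, support) are a flat monthly figure; restores are free."""
    return tb_stored * capex_per_tb + months * monthly_opex


# Illustrative comparison over 3 years for 100 TB (all rates invented):
archive = cloud_archive_cost(100, 36, 5, 0, 90)       # rarely-touched archive
on_prem = on_prem_flash_cost(100, 36, 300, 50)        # on-site flash array
hot_dr = cloud_archive_cost(100, 36, 5, 1200, 90)     # frequent restores

print(archive, on_prem, hot_dr)  # 18000 31800 126000
```

Under these made-up rates, the cloud wins for a cold archive that is seldom retrieved, while frequent restores—the business-continuity use case the text flags—flip the comparison in favor of on-site flash, because egress charges dominate.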
AVAILABILITY IN A HYBRID WORLD
The best approach, then, needs to be a hybrid one that utilizes both the cloud and on-site flash storage with a modern Availability platform. Picking just one solution for recovery and retention means making some compromises on features and functionality.
Still, there may yet be the perception that the cloud alone is perfectly suited to serve as both a data destination and a data backup and recovery environment—that is, that on-premises solutions are no longer needed for the latter functions. The cloud's popularity as a backup medium is evident in a study published by Market Research Future, which projects that the global cloud backup market will grow to $5.6 billion by 2023. So, the focus for IT and virtualization admins must be on delivering simplicity across the entire stack—the same principle promised by the cloud.
IT leaders must present flash storage platforms as the solution to what traditionally has been an inherently and architecturally complex process and infrastructure. Fortunately, these systems don’t have decades of technical debt beneath them to undermine that goal. Users likely will compare whatever on-premises solutions are provided by IT with their cloud experiences, and that comparison must be favorable so that the organization can enjoy the benefits of best-of-breed deployments.
Read this Tech Talk paper to find out how an all-flash solution can stave off disaster by providing:
- Availability and near-constant data access
- Fast and flexible recovery
- Virtual, physical and cloud data management
- And more!