On April 13, 1992, 250 million gallons of water from the Chicago River poured through a crack in an underground freight tunnel and into the basements of dozens of businesses in the city’s financial district. Countless data centers were destroyed in what’s now known as the Great Chicago Flood, shutting down hundreds of businesses and resulting in a loss of around $25 billion in trading.
More than 25 years later, the technology surrounding data management has evolved, but the potential for costly interruption remains the same. Data center outages cost companies an average of $21.8 million in 2017—and with 27 percent of servers experiencing at least one outage per year, unplanned downtime isn’t a matter of if, but when.
There’s also the potential for brand damage. “Sure, you have to get the data center running again,” says Danny Allan, vice president of product strategy at Veeam. “But you have to make sure your customers feel confident you can still deliver.”
Today, businesses must redefine the way they think about data management, changing policies and behavior to ensure their data remains not just available, but also hyper-available—keeping pace with the demands of an always-on marketplace, workforce and customer base. Getting to that point means embarking on a journey, with intelligent data management—the kind that adapts and anticipates as businesses innovate and scale—as the ultimate destination. It’s a path that can be broken down into five stages: backup, aggregation, visibility, orchestration and automation.
1. Backup: Reactive Recovery
This first stage is the bedrock for all that follows, ensuring company data is secure and recoverable in the event of an outage, loss or cyberattack. But according to Allan, around 80 percent of the market still struggles with data recovery.
“The businesses at this stage are typically functioning in a reactive mode,” Allan says. “You’re trying to protect all of your data, but you’re using legacy products rather than relying on the cloud or SaaS. It’s too complex, it’s not reliable and it costs a lot of money.”
2. Aggregation: Protection Across Workloads
Businesses begin to gain insight into their data in this second stage, allowing them to leverage it to drive value for the enterprise. This stage is all about guaranteeing protection across multiple workloads (however disparate), from physical data centers to various forms of the cloud. By centralizing control, businesses can operate more fluidly across varying infrastructures and quickly access data that might otherwise remain siloed.
3. Visibility: From Reactive to Proactive
This advancement allows businesses to enter the visibility stage, where they truly transition from a reactive data management stance to a proactive one. “The early stages are focused on the need to be always-on, the need to be secure or the need to be compliant,” Allan says. “You can tell when someone makes a breakthrough into the third stage or later, because that’s when the question becomes, ‘How can I use data to get ahead of the competition?’ In other words, they’re thinking proactively about how data management informs their broader strategy.”
4. Orchestration: A Seamless Environment
The new data landscape isn’t defined by volume alone. Hyper-sprawl plays an equally important role, with data arriving from sources ranging from multiple forms of the cloud to an array of endpoint devices. In the fourth stage—orchestration—businesses harness the sprawl by seamlessly moving data to the best location across multicloud environments, giving them the ability to move between various infrastructures without sacrificing continuity or security.
5. Automation: Driven by AI
Even with deep visibility into their data, however, businesses at the orchestration stage still rely primarily on manual, human-led processes. That changes when they take a step into the final stage—automation—in which data management is driven by AI and machine learning that automatically backs up, secures and migrates data based on real-time business needs.
Though full automation is still a few years away for most businesses, some are already leveraging new technologies to support their data management strategy. Allan points to one New Zealand-based Veeam customer whose primary data center sits near a fault line. A decade ago, an earthquake might have resulted in the irreversible loss of crucial company data. Now, IoT sensors monitor the data center for any sign of tremors, which triggers data replication to another center in Sydney.
“Setting up for disaster recovery costs money because it’s running on both sides, and so you’re paying for 200 percent capacity. A smarter way of doing that is to leverage intelligence so that you’re not doing that 365 days a year,” Allan says. “Instead, you have an automatic system that kicks in and backs up your data in the event of a natural disaster. Rather than having ongoing, continuous protection, you can use all of these different mechanisms to drive greater intelligence about when, how and where to back your data up.”
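The event-driven approach Allan describes can be sketched in a few lines of pseudocode-style Python. This is purely an illustrative sketch of the idea—replicate only when sensors signal danger, rather than mirroring continuously—and all names here (the threshold value, `plan_action`, the site labels) are hypothetical, not Veeam product APIs.

```python
# Illustrative sketch of event-driven disaster recovery: instead of paying
# for a continuously mirrored standby site (200 percent capacity), cross-site
# replication is triggered only when seismic sensors report danger.

TREMOR_THRESHOLD = 4.0  # hypothetical magnitude above which we replicate


def should_replicate(sensor_readings):
    """Return True if any recent seismic reading meets or exceeds the threshold."""
    return any(magnitude >= TREMOR_THRESHOLD for magnitude in sensor_readings)


def plan_action(sensor_readings):
    """Choose between routine local backup and emergency cross-site replication."""
    if should_replicate(sensor_readings):
        return "replicate-to-secondary-site"  # e.g. a center in another city
    return "local-backup-only"


if __name__ == "__main__":
    print(plan_action([1.2, 0.8]))  # quiet sensors: routine local backup
    print(plan_action([1.2, 4.5]))  # tremor detected: replicate off-site
```

In a real deployment the sensor feed, thresholds and replication jobs would come from monitoring and backup tooling; the point is simply that continuous 200-percent-capacity protection can be replaced by a trigger that fires only when conditions warrant it.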
In today’s evolving data landscape, this means every business must embark on a journey through the five stages of intelligent data management—and take its availability strategy beyond a reactive stance to fully embrace the potential of its data.