As information technology (IT) organizations undergo digital transformation (DX), maintaining high levels of data availability is becoming more important. DX is the profound transformation of business and organizational activities, processes, competencies, and models to fully leverage evolving digital technologies and their impact across society in a strategic manner. The need for increased data availability, coupled with the continued high data growth rates in organizations of all sizes, is among many factors that drive the need for an IT transformation (ITX) during DX projects. ITX is about modernizing IT infrastructure to meet the evolving performance, availability, scalability, agility, and management requirements of the data-centric model that will increasingly come to dominate the business practices of successful companies.

In research published in Trends in Enterprise Storage Availability Management (IDC #US44649819, January 2019), IDC explored issues associated with server and storage availability. In this IDC Perspective, we focus specifically on organizational targets for workload availability and how well enterprises are performing against those goals. 95.8% of surveyed enterprises have at least two explicitly defined “availability tiers” and are managing workloads placed in those tiers to specific uptime requirements.

Recovery point objective (RPO) and recovery time objective (RTO) dominate the availability metrics in use (60.3%), but “wall-time metrics” also ranked high in our survey results. 31.6% of surveyed enterprises defined a tier by the number of “nines” it targets (e.g., 99.999%, or “five-nines,” equates to slightly more than 5 minutes of downtime per year), and 8.1% used a somewhat more reactive approach that took both planned and unplanned (actual) downtime into account.
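As a quick illustration of the “nines” arithmetic, the sketch below converts an availability target into the downtime it allows per year; the tier labels and the 365.25-day year are assumptions for the example, not figures from the survey.

```python
# Converting an availability target ("nines") into allowed downtime per year.
# Assumes a 365.25-day year; the tier labels are illustrative only.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def allowed_downtime_minutes(availability: float) -> float:
    """Maximum minutes of downtime per year permitted by an availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, availability in [("three-nines", 0.999),
                            ("four-nines", 0.9999),
                            ("five-nines", 0.99999)]:
    print(f"{label} ({availability:.3%}): "
          f"{allowed_downtime_minutes(availability):.1f} minutes/year")
```

Under these assumptions, five-nines works out to roughly 5.3 minutes of downtime per year, consistent with the “slightly more than 5 minutes” figure above, while four-nines allows roughly 53 minutes.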

IT organizations used a number of different tools to achieve high availability for their mission-critical tier. Cloud-based backup led the list, with 44.1% using it to provide backup and/or disaster recovery for these workloads, followed by traditional backup software for on-premises targets (41.3%), backup software with application-specific snapshot integration (34.4%), and, tied at 27.7% each, hyperconverged infrastructure with integrated data protection and object storage (as a backup repository). Cloud-based backup targets and traditional backup software also led the list for how organizations protect their business-critical workloads.

Defining Availability Goals

Definitions of downtime varied in the survey. 38.5% of IT organizations defined “downtime” as the failure of an IT resource (e.g., server, storage, network), which could impact but not necessarily remove application access, whereas 29.1% defined it as an entire application failing. When asked about the causes of failure, surveyed organizations indicated that the network was the most frequent culprit (16.2%), followed by servers (15.4%), malware (10.3%), and complexity (10.1%). Storage ranked only fifth as a cause of failures, at 9.8%.

Advice for Technology Buyers

  • In companies where data is becoming an increasingly strategic asset, steps should be taken to ensure that data can be captured, protected, recovered, and safely shared to meet evolving requirements. If the storage infrastructure will be relied on to provide high availability, consider using the many features available in most enterprise-class storage systems to balance workload-specific requirements for performance, security, availability, and cost. Configure a “defense in depth” approach leveraging tools like erasure coding and/or RAID, snapshots, replication, backup, transparent network failover, nondisruptive upgrades, and object and cloud-based backup repositories (a simple sketch of how layered redundancy compounds availability follows this list). Ensure that any new storage platforms being considered have been evaluated in terms of their ability to meet your own evolving data availability requirements.
  • While historically most applications relied on the underlying storage infrastructure to meet availability requirements, most next-generation applications (NGAs) are designed to handle that themselves. NGAs tend to be characterized by scale-out, software-defined designs featuring distributed architectures and running on commodity off-the-shelf (COTS) hardware, and they are often custom built by IT organizations to service mobile computing, social media, big data analytics, artificial intelligence and machine learning, cloud, and other newer workloads. These systems leverage software-based features like erasure coding, clustering, and replication to provide high levels of availability. When relying on an application to ensure availability, it is important to understand exactly what is happening underneath the covers so that there are no surprises, particularly when that application has been designed and is managed by a third party (like a cloud provider). 
  • Most mission-critical workloads are maintained in on-premises infrastructure, where administrators can configure systems and applications to meet higher availability levels than can be guaranteed in public cloud environments. Recent research by IDC notes that the reasons enterprises give for choosing to deploy a workload in an on-premises location can include performance, availability, regulatory or compliance requirements, and IT governance issues. Outside of those reasons, many enterprises are looking to minimize on-premises IT infrastructure where they can by moving workloads (or deploying new ones) to the public cloud. Note, however, that the “four-nines” availability public cloud providers are willing to guarantee would meet 65.3% of what survey respondents defined as “mission-critical.”
  • Although the topic was not explored in this survey, many enterprises undergoing ITX efforts are consciously trying to streamline operations by, at least in part, consolidating workloads onto fewer storage platforms. Increasing workload density can raise concerns about the size of failure domains, so organizations doing this must understand the impact of different types of failures on their ability to deliver application services in the aggregate. This could be another factor driving availability requirements ever higher.
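As a back-of-the-envelope companion to the “defense in depth” advice above, the sketch below shows how component availabilities combine: components in series multiply (pulling availability down), while redundant copies multiply failure probabilities (pushing it up). The 99.9% component figures and the two-path example are hypothetical, not survey data.

```python
# Hypothetical availability math for a layered ("defense in depth") design.
# Components in series must all be up, so their availabilities multiply.
# Redundant copies fail only if every copy fails, so failure probabilities multiply.

def series(*availabilities: float) -> float:
    """Availability of a chain of components that must all be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def redundant(availability: float, copies: int) -> float:
    """Availability of N independent copies where any one surviving copy suffices."""
    return 1.0 - (1.0 - availability) ** copies

# Hypothetical single path: network, server, and storage each at 99.9%.
single_path = series(0.999, 0.999, 0.999)   # ~99.70%
# The same stack duplicated across two independent paths (e.g., replication plus failover).
two_paths = redundant(single_path, 2)       # ~99.999%

print(f"single path: {single_path:.5%}")
print(f"two paths:   {two_paths:.5%}")
```

Under these assumed numbers, a single chain of 99.9% components falls well short of five-nines on its own, while duplicating the path lands at roughly five-nines, which is why layering independent protection mechanisms matters.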

Read this exclusive report by IDC to find out how storage and data availability requirements are a big focus for organizations during DX and how enterprises define and measure data availability.


To read the full report, download the whitepaper:
Digitally Transforming Enterprises Are Managing to Higher Levels of Storage and Data Availability

SEND ME WHITEPAPER