Not long ago, enterprises relied solely on private-line services such as MPLS to connect branch office sites to the data center sites where critical business applications were hosted. Maintaining MPLS connections consumed significant time and capital: reaching every branch office site drove up transport costs and required IT resources to manage multiple regional MPLS vendors. Because provisioning MPLS connections was often time-consuming and costly, many enterprises deployed WAN optimization solutions to minimize bandwidth costs.
Soon, enterprises realized the benefits of server virtualization technology and began consolidating multiple data center sites. Additionally, with the advent of SaaS applications and IaaS workloads, enterprises began migrating many of their in-house applications to public cloud infrastructure such as AWS, Microsoft Azure and Google Cloud Platform (GCP). While all this was happening, the strain on the WAN increased, since traffic from branch office sites to the public clouds was first backhauled to the data center, dramatically increasing the required MPLS bandwidth.
During the past five or six years, bandwidth costs have dropped dramatically for enterprises that have adopted SD-WAN solutions, which enable secure, reliable use of lower-cost broadband services. Enterprises can use a hybrid WAN comprising different transport services such as broadband, 4G/LTE, MPLS and others.
SD-WAN allows enterprises to use broadband internet and 4G/LTE connections for branch-to-data-center and direct-to-internet connectivity while offering all the QoS advantages of a private-line connection. And since these connections are less expensive, adding more bandwidth is less of an issue. However, the distance between the user and the data has also increased with data center consolidation and public cloud adoption. Added network latency degrades performance for certain applications, such as file sharing.
For instance, if a remote employee wants to transfer a file from a branch office site to a company folder in public cloud infrastructure, it can take considerable time because of the distance between the user and the data and the data-receipt acknowledgements required by the file transfer protocol. If several employees make such transfers every day, the total time can easily add up to several minutes or hours. Previously, most applications were hosted close to the user, so latency issues weren't as severe. Today, with most applications in public, private, or hybrid cloud environments, latency must be addressed to continuously meet and exceed application performance SLAs.
Network latency is the time it takes for data to travel from sender to receiver and back. As the distance between locations grows over the WAN, especially for remote international sites with low-speed transport services or long backhauls, application performance degrades. This has less to do with available bandwidth and more to do with the round-trip time of data packets over distance, the acknowledgements some protocols require before sending the next segment of data, and the number of times data must be retransmitted due to packet loss. To counter these challenges, enterprises deploy WAN optimization solutions.
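The effect of round-trip delay on a windowed protocol like TCP can be sketched with a standard back-of-the-envelope formula: a single connection's throughput is capped at roughly the window size divided by the round-trip time, regardless of link bandwidth. The window size and RTT figures below are illustrative assumptions.

```python
# A minimal sketch of how round-trip latency caps TCP throughput,
# independent of link bandwidth. A single TCP flow's ceiling is
# roughly receive_window / round_trip_time.

def tcp_throughput_ceiling(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on throughput (bytes/sec) for one TCP flow."""
    return window_bytes / rtt_seconds

# A default 64 KB window over a 5 ms metro-area path:
local = tcp_throughput_ceiling(64 * 1024, 0.005)    # ~13.1 MB/s

# The same window over a 150 ms intercontinental path:
remote = tcp_throughput_ceiling(64 * 1024, 0.150)   # ~437 KB/s

# A 30x increase in RTT cuts the ceiling by 30x, no matter how much
# bandwidth is provisioned on the circuit.
print(f"local:  {local / 1e6:.2f} MB/s")
print(f"remote: {remote / 1e3:.0f} KB/s")
```

This is why the file-transfer scenario above is slow even on a fat pipe: each acknowledgement round trip, not the link capacity, sets the pace.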
A common misconception persists that SD-WAN reduces or eliminates the need for WAN optimization techniques. The reality is that SD-WAN and WAN optimization solve fundamentally different problems, and they are complementary when deployed in unison. Geographically distributed enterprises with locations worldwide can experience impaired application performance for critical, latency-sensitive TCP/IP applications such as transaction processing or data backup caused by excessive round-trip delays.
How SD-WAN improves application performance
1. Speed and availability – Bandwidth-intensive, lag-sensitive applications like VoIP and Unified Communications increasingly run over the WAN, but many networks aren't sophisticated enough to support these critical applications. SD-WAN, though, provides increased visibility and the flexibility to prioritize voice packets. Hybrid deployments also make it possible to rapidly shift traffic between MPLS and public internet connections. Every enterprise wants to give its network users a seamless, undeterred experience; SD-WAN brings that want to fruition.
2. Effective load balancing – IT networks are filled with applications requiring different levels of service. Some are latency-sensitive, while others work well without straining the network. Instead of letting performance-intensive applications congest the network and degrade it for every application, SD-WAN solutions select flexible pathways based on individual application requirements. They prevent congestion spots from forming by diverting traffic to alternate, less-busy channels. This ensures a reliable flow of data instead of leaving traffic vulnerable to being lost, dropped or blocked.
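The per-application steering described above can be sketched in a few lines. This is a hypothetical illustration of the decision an SD-WAN controller makes, not any vendor's actual API; the link names, health thresholds and metrics are assumptions.

```python
# Hypothetical sketch of per-application path selection in an SD-WAN.
# Link names, metrics and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float
    utilization_pct: float

def pick_path(links: list[Link], latency_sensitive: bool) -> Link:
    """Steer latency-sensitive apps to the lowest-latency healthy link;
    steer bulk traffic to the least-utilized link to avoid congestion."""
    healthy = [l for l in links if l.loss_pct < 1.0 and l.utilization_pct < 90]
    candidates = healthy or links  # fall back if every link is degraded
    if latency_sensitive:
        return min(candidates, key=lambda l: l.latency_ms)
    return min(candidates, key=lambda l: l.utilization_pct)

links = [
    Link("mpls", latency_ms=20, loss_pct=0.1, utilization_pct=85),
    Link("broadband", latency_ms=35, loss_pct=0.3, utilization_pct=40),
]
print(pick_path(links, latency_sensitive=True).name)   # voice -> "mpls"
print(pick_path(links, latency_sensitive=False).name)  # backup -> "broadband"
```

The design point is that the policy is per-application: voice lands on the low-latency circuit while bulk transfers are diverted to the less-busy one, so neither congests the other.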
3. SD-WAN offers affordable flexibility – Traditionally, WAN meant carrier MPLS and expensive, proprietary routers at every site. That approach is costly and difficult to change or expand, especially if your organization has branches all over the globe or a sophisticated IT environment. The essence of SD-WAN is that it lets businesses take full advantage of any available connection type, allowing sites to be turned up rapidly. Off-the-shelf hardware reduces capital costs and eliminates both maintenance contracts and the need to deploy expensive professional services resources to set up the network. SD-WAN automates the configuration and deployment of network characteristics and hardware profiles: with the click of a button and some data entry, new sites are added to the WAN remotely.
Here are the pressing issues when it comes to application performance:
- Inconsistent bandwidth – An office that wants its employees to watch training videos on YouTube may not have the network bandwidth to accommodate the heavy demand of video streaming. Insufficient bandwidth can cause network congestion and packet loss. As cloud-based applications become the mainstay of business, it is essential to acknowledge that every web-based application needs bandwidth and must be supported by a scalable solution. Insufficient bandwidth and high latency degrade the experience of hosted applications, especially when there is a bandwidth mismatch between the sender and the destination network. When data is sent from a higher-bandwidth LAN to a lower-speed WAN, packets queue up waiting to be transmitted. If congestion isn't relieved and packets keep piling up, tail drop can occur, or the router may proactively discard arriving packets at random before the queue fills (random early detection, or RED).
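The tail-drop versus RED behavior mentioned above can be sketched with the classic RED drop curve: below a minimum queue threshold nothing is dropped, between the thresholds the drop probability ramps up linearly, and beyond the maximum the router behaves like tail drop. The threshold and probability values below are illustrative, not defaults of any particular router.

```python
# A minimal sketch of Random Early Detection (RED). As the average
# queue depth grows between a min and max threshold, arriving packets
# are dropped with linearly increasing probability; beyond the max
# threshold the queue behaves like tail drop. Thresholds are
# illustrative assumptions.

def red_drop_probability(avg_queue: float,
                         min_th: float = 20.0,
                         max_th: float = 60.0,
                         max_p: float = 0.1) -> float:
    """Probability that an arriving packet is dropped."""
    if avg_queue < min_th:
        return 0.0                      # queue is short: accept everything
    if avg_queue >= max_th:
        return 1.0                      # queue is full: tail drop
    # linear ramp between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(10))   # 0.0  -> no congestion
print(red_drop_probability(40))   # 0.05 -> early, random drops
print(red_drop_probability(70))   # 1.0  -> tail drop
```

Dropping a few packets early signals TCP senders to slow down before the queue overflows, which is gentler on applications than the burst losses tail drop produces.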
- Poor experience of hosted applications – An overloaded server or overstressed communication device can produce bottlenecks that take a toll on application performance. Usually, poor application performance is identified only after the fact, once users start complaining about app crashes, slow loads or problems carrying out a specific task within a hosted app. That downtime can prove costly: 98% of businesses surveyed by ITIC said that just one hour of downtime costs over $100,000 in revenue. Even if your business isn't that large, slow application performance will still reduce employee productivity and customer satisfaction.
- Lack of offsite backups due to poor bandwidth – Running backup systems during regular working hours, when the network is already strained, can exhaust all available bandwidth and cause important business systems to crash during work time. This effect compounds during the initial offsite backup, which can take anywhere from minutes to days. If not relieved, this congestion can create an upload bottleneck and impact download speeds. Yet offsite backups are critical for businesses whose data has essentially become their pulse, an invaluable asset. Offsite backups are necessary for two reasons. First, cyber threats: an attack such as ransomware can corrupt the entire network, including any backup servers attached to it, which makes keeping backup drives separate from their parent network even more critical. Second, physical damage: it's hard to prepare for a hurricane, flood, fire or unexpected excess humidity, all of which can physically damage onsite backup servers. Since offsite backups are critical, so is ensuring that your network can support the bandwidth they require.
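A quick calculation shows why that initial offsite backup can monopolize a WAN link for days. The data size, uplink speed and usable-fraction figures below are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope sketch of initial offsite backup duration.
# All figures are illustrative assumptions.

def transfer_hours(data_gb: float, uplink_mbps: float,
                   usable_fraction: float = 0.7) -> float:
    """Hours to move data_gb over an uplink_mbps link, assuming only
    usable_fraction of the link can be devoted to backup traffic."""
    bits = data_gb * 8e9                              # GB -> bits
    effective_bps = uplink_mbps * 1e6 * usable_fraction
    return bits / effective_bps / 3600

# A 2 TB initial backup over a 50 Mbps uplink, 70% usable:
print(f"{transfer_hours(2000, 50):.0f} hours")        # ~127 hours (> 5 days)
```

Numbers like these are why initial seeding is often scheduled off-hours or rate-limited, so daytime traffic isn't starved while the first full copy trickles offsite.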