Your users, especially when working outside of the office, no longer need to always connect to the corporate network to get work done. They often connect directly to SaaS apps. And, let’s face it, employees also don’t turn on the VPN if they’re using their work laptop for personal use — which means they’re left with very little security protection.
Plus, many organizations are now using direct internet connections at branch offices, which means employees and guest users don’t get the protection of your traditional security stack. Not only are more offices connecting directly to the internet — it’s estimated that 70% of branch offices already have some direct internet access — but attackers recognize these weak points in their targets and have started exploiting them more.
To solve these new challenges, security controls must also shift to the cloud. This in-depth white paper describes how security must evolve to protect users anywhere they access the internet, why traditional secure web gateway (SWG) solutions cannot address these gaps, and why a new kind of internet gateway represents an entirely new way of thinking about securing your users.
The IT landscape has evolved. Critical infrastructure, applications, and data are moving to the cloud, leveraging either public or private cloud infrastructure. Salesforce.com, Box, G Suite, Office 365, and other software-as-a-service (SaaS) apps, whether sanctioned by IT or not, are commonplace in companies of all sizes and industries — even the most highly regulated ones. Not only does this raise questions about how to protect where sensitive data is going and how it’s being used, but it also changes how employees get their work done.
Looking back: Secure Web Gateways were originally built to control users, not to secure users and data.
SWGs are often used as one way to protect users against threats online. But, is that what they were really built to do? Think back a couple of decades to a time when bandwidth was expensive and there was a concern about employee productivity online. To offset these challenges, web proxy technology was born. Web gateways were designed to control web traffic as a way to manage bandwidth consumption, and they controlled access to inappropriate sites to help you manage productivity. Sure, it required a lot of maintenance and exceptions to work around some problematic web apps and sites, but it seemed worth it back then.
Later, companies became increasingly concerned about users going to malicious sites and their sensitive data leaking on the web. In response to these liability and breach risks, SWG vendors strengthened content filtering and added data loss prevention capabilities to better analyze all web traffic and better control its movement. Since they are typically built on a proxy architecture, SWGs are able to analyze web content and determine if a site presents a security risk.
Today, the priorities for security teams have flipped. Threat protection is now the highest priority because the financial impact of data theft and loss greatly outweighs any productivity or bandwidth loss. And while employees are still unproductive at times, technology is often not the right solution. After all, how do you prevent someone from playing games on their personal phone? And you probably don’t care what they surf or how much they stream when off the network, just as long as they don’t get infected or phished, right? When you consider how IT has changed, SWGs were not architected to provide the capabilities needed to address the security risks of today.
While web proxy functionality is necessary to inspect HTTP/S traffic, SWG solutions are often complex to deploy, appliance-based, and closed, siloed platforms that mainly protect users when they are on the corporate network or connected via VPN. Although most vendors now offer a cloud or hybrid delivery model, that shift adds complexity for administrators, and moving the same SWG technology into the cloud won't magically resolve its maintenance burdens. Plus, it still only gives insight into web-based threats over ports 80 and 443, leaving you blind to command and control (C2) callbacks that use other ports and protocols to exfiltrate or encrypt data.
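The port-visibility gap can be made concrete with a small sketch. The flow records, hostnames, and the `visible_to_swg` helper below are all hypothetical, invented for illustration; they are not any real product's data model. The point is simply that a proxy terminating only ports 80 and 443 never even sees traffic on other ports.

```python
# Illustrative only: simulated flow records, not a real SWG's telemetry.
WEB_PORTS = {80, 443}  # the only ports a web-proxy-based SWG inspects

flows = [
    {"dst": "cdn.example.com",     "port": 443},   # inspected by the SWG
    {"dst": "c2.attacker.test",    "port": 8081},  # C2 callback on a non-web port
    {"dst": "exfil.attacker.test", "port": 53},    # data tunneled over port 53
]

def visible_to_swg(flow):
    """A proxy that only terminates ports 80/443 never sees other traffic."""
    return flow["port"] in WEB_PORTS

# Both malicious flows fall entirely outside the SWG's field of view.
missed = [f["dst"] for f in flows if not visible_to_swg(f)]
print(missed)  # → ['c2.attacker.test', 'exfil.attacker.test']
```

A DNS-layer control, by contrast, observes the lookup that precedes these connections regardless of the destination port, which is the gap the SIG model is meant to close.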
The rules of the game aren’t working any more — it’s time to change the game. We need to reimagine how security can be delivered so it better protects users for the way they work today — and the way they will work in the future. That is the key driver behind a new kind of cloud security platform called the Secure Internet Gateway.
Looking forward: Security must move to the cloud to fully protect data, apps, and users, wherever they go.
A Secure Internet Gateway (SIG) provides safe access to the internet anywhere users go, even when they are off the VPN. Think of it as your secure onramp to the internet. Before you connect to any destination, a SIG provides the first line of defense and inspection. Regardless of where users are located or what they’re trying to connect to, traffic goes through the internet gateway first. Once traffic gets to the SIG platform, there are different types of inspection and policy enforcement that can happen. Here are the capabilities that define a SIG today:
- Visibility and enforcement on and off the corporate network, even when users are off the VPN and without backhauling all traffic to the corporate network
- Protection against threats over all ports and protocols
- Proxy-based inspection of web traffic and file inspection with AV engines and behavioral sandboxing
- Live threat intelligence derived from global internet activity analyzed in real-time, with updates enforced everywhere within minutes
- Open platform with a bidirectional API to integrate with your existing security stack (including security appliances, intelligence platforms/feeds, CASB, etc.) and to extend protection everywhere
- Discovery and control of SaaS applications
As more security controls move to the cloud, a SIG provides a platform that future capabilities can be built upon. Let’s take a deep dive into each of these capabilities.
Visibility and enforcement on and off the corporate network, for all ports and protocols
One of the core tenets of a SIG is visibility into all internet activity, wherever users are located. If you can't see it, you can neither protect it nor learn from it. A SIG must provide a comprehensive yet simple way to get all traffic to the cloud platform for analysis, without requiring complex deployments. Many IT teams don't realize how complex setting up always-on VPNs, full GRE or IPsec tunnels, and custom PAC files has been all these years, until they discover a different, far simpler way: leveraging the Domain Name System (DNS).
DNS is a foundational component of how the internet works: when you click a link or type a URL, a DNS request initiates the process of connecting to the internet. Similar to how you look up a colleague's phone number in your address book, DNS was first developed to map domain names to IP addresses. DNS is used by every device, including laptops, servers, mobile phones, and Internet of Things (IoT) devices, as the first step in nearly every internet connection. DNS is also used by malware; in fact, 91% of all command and control callbacks use DNS. The one exception is when malware skips DNS and connects directly to a hard-coded IP address, and a SIG should be able to cover those scenarios too. By using DNS, you can stop threats over all ports and protocols, not just the web ports 80 and 443 that a SWG inspects.
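Because nearly every connection begins with a name lookup, a policy check at the DNS layer can intercept a threat before any packet reaches the destination. The sketch below is a minimal illustration of that idea; the blocklist, domain names, and `resolve_with_policy` function are hypothetical, and a real SIG would draw its policy from live threat intelligence rather than a hard-coded set.

```python
import socket

# Hypothetical blocklist; a real SIG resolves policy from live threat intelligence.
BLOCKED_DOMAINS = {"malicious-c2.example", "phishing-site.example"}

def resolve_with_policy(domain: str, port: int = 443):
    """Check the domain against policy before resolving, mirroring how
    DNS-layer security intercepts the first step of a connection."""
    base = domain.lower().rstrip(".")
    labels = base.split(".")
    # Block the domain itself and any subdomain of a blocked zone.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKED_DOMAINS:
            raise PermissionError(f"blocked by DNS-layer policy: {domain}")
    # Policy passed: perform the normal lookup. Because enforcement happens
    # at resolution time, it applies to any port and protocol, not just 80/443.
    return socket.getaddrinfo(base, port)

# A blocked lookup fails before any packet is sent to the destination:
try:
    resolve_with_policy("evil.malicious-c2.example")
except PermissionError as err:
    print(err)
```

Note that enforcement happens before the connection exists at all, which is why the same control covers web traffic, C2 callbacks on arbitrary ports, and anything else that starts with a name lookup.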
Proxy-based inspection of web traffic
An internet gateway needs a cloud-delivered proxy to be able to inspect web traffic, especially when the domain has a risky reputation or includes both legitimate and malicious content. But the proxy should be reimagined from the way it was originally developed for the web gateway.
The proxy should also be built on modern technology, such as a microservices architecture. With a multitenant, container-based approach, each service on the proxy runs independently of the others, so the platform can automatically scale out whichever service needs more processing power. Allocating capacity where it is needed keeps proxy performance consistent under load.