Network security has changed significantly during the past several decades. Basic network security started with packet filtering devices unable to perceive the state of each session; each packet was an isolated event. This approach allowed attackers to spoof traffic easily and bypass these simplistic controls, so security researchers began tying the state of the network traffic to the policy controls applied. With stateful filtering devices, we gained insight into legitimate sessions versus deliberate, malicious network traffic patterns. Before long, that wasn’t enough, either. In hindsight, these attacks were often simple examples of traffic and protocol manipulation and became relatively easy to detect and block.

The attack landscape evolved to more application-centric attacks, and our network protection controls needed to advance and improve to keep up, ultimately leading to the creation of intrusion detection and intrusion prevention systems, as well as web application firewalls. As attacks grew more sophisticated, security professionals realized that we were continually failing to prevent many attacks because we couldn’t properly evaluate application and user behaviors within our environments. This realization led to the creation of network behavior monitoring, as well as the “next gen” firewall industry.

Today, we’re still struggling, even with all of these technologies. Attackers are still getting in, and much malicious communication goes completely undetected. Many of today’s attackers have sophisticated ways to “blend in”: they know that huge volumes of traffic flow in and out on TCP port 443, and to many firewalls this looks like ordinary HTTPS traffic to websites, so they use that to their advantage. Many other traditional network controls are equally blind in these scenarios, even when attackers move laterally through the network looking for new systems and data to compromise.

It’s time to rethink the way we’re approaching network security today. Some of the things we need to address include:

  • Treating our entire environment as potentially untrusted or compromised, rather than thinking only in terms of “outside-in” attack vectors; increasingly, the most damaging attack scenarios originate internally, with advanced malware and phishing campaigns compromising end users
  • Better understanding intended application behavior, from the processes running on workloads to the network traffic they generate, and enforcing those approved application behaviors wherever we can (a minimal sketch of this idea follows this list)
  • Focusing on trust relationships and system-to-system relationships throughout the environment; much of the communication we see in enterprise networks today is either unnecessary or irrelevant to the systems and applications the business actually needs
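To make the idea of enforcing approved application behavior more concrete, the minimal sketch below (in Python) models a per-workload behavior profile: the processes a workload is expected to run and the network flows those processes should generate, with everything else flagged for review. The workload names, process names, ports, and the profile format itself are illustrative assumptions and do not describe any specific product.

# Minimal sketch of an application-behavior allowlist (hypothetical data model,
# not tied to any specific product). Each workload declares the processes it is
# expected to run and the outbound flows those processes should generate;
# anything else is flagged for review.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Flow:
    dest: str          # destination workload or network label
    port: int
    protocol: str = "tcp"

@dataclass
class BehaviorProfile:
    workload: str
    allowed_processes: set[str] = field(default_factory=set)
    allowed_flows: set[Flow] = field(default_factory=set)

    def is_approved(self, process: str, flow: Flow) -> bool:
        return process in self.allowed_processes and flow in self.allowed_flows

# Example profile for a hypothetical web tier.
web_profile = BehaviorProfile(
    workload="web-frontend",
    allowed_processes={"nginx", "fluentd"},
    allowed_flows={Flow("app-tier", 8443), Flow("log-collector", 24224)},
)

print(web_profile.is_approved("nginx", Flow("app-tier", 8443)))   # True: expected behavior
print(web_profile.is_approved("nc", Flow("203.0.113.7", 4444)))   # False: investigate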

System and Application Inventory Discovery and Maintenance

While in-depth patching and configuration management discussions are beyond the scope of this paper, both are critical to the first area of hygiene: inventory. Knowing what assets you have at any given time, what state each asset should be in, and what state it is actually in is paramount to building a sound base of cyber hygiene. Three important elements apply to the discovery and inventory management phases of cyber hygiene:

  • Mean time to detect/track. How long it takes to discover and detect compute workloads across the organization can have a major impact on the success of any inventory monitoring and management strategy, especially in highly dynamic cloud and DevOps scenarios.
  • Environment coverage. The breadth of the environment regularly or continuously assessed for inventory changes or updates can affect how current the inventory is, especially with the older generation of scanning and agent-based reporting tools.
  • Asset criticality and grouping. Identification of specific assets within the environment, ideally through some sort of tagging or naming mechanism, is invaluable when evaluating risk (a minimal sketch of these elements follows this list).
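As a simple illustration of these three elements, the sketch below models inventory records that carry tags for grouping, a criticality label, and a last-seen timestamp, along with checks for stale assets (those not re-discovered recently) and for grouping by criticality. The asset names, tag keys, and the 24-hour freshness window are assumptions made for the example.

# A minimal inventory sketch: each record carries tags for grouping, a
# criticality label, and a last-seen timestamp; a coverage check flags assets
# not re-discovered within a freshness window. All names and thresholds here
# are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Asset:
    asset_id: str
    tags: dict[str, str]            # e.g. {"app": "payments", "tier": "db"}
    criticality: str                # e.g. "high", "medium", "low"
    last_seen: datetime             # last time discovery confirmed this asset

FRESHNESS_WINDOW = timedelta(hours=24)   # assumed target for "current" inventory

def stale_assets(inventory: list[Asset], now: datetime) -> list[Asset]:
    """Assets that have not been re-discovered within the freshness window."""
    return [a for a in inventory if now - a.last_seen > FRESHNESS_WINDOW]

def by_criticality(inventory: list[Asset], level: str) -> list[Asset]:
    """Group assets for risk evaluation, e.g. all 'high' criticality workloads."""
    return [a for a in inventory if a.criticality == level]

now = datetime.now(timezone.utc)
inventory = [
    Asset("vm-001", {"app": "payments", "tier": "db"}, "high", now - timedelta(hours=2)),
    Asset("vm-002", {"app": "intranet", "tier": "web"}, "low", now - timedelta(days=3)),
]
print([a.asset_id for a in stale_assets(inventory, now)])       # ['vm-002']
print([a.asset_id for a in by_criticality(inventory, "high")])  # ['vm-001']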

The entire cycle of discovery, asset evaluation, configuration and security posture, and monitoring workload state can be greatly facilitated by using software-defined infrastructure. Once we have inventory in place (and continuously updated), hypervisors and network virtualization tools can help us to enforce the desired state of not only the workloads themselves, but also the interaction between workloads that should be communicating for application environments to function properly. This dynamic, flexible model of micro-segmentation and application control is at the heart of the next generation of software-defined security, which we’ll cover in the next sections.

The Value of Automation: Moving to Adaptive Micro-Segmentation

For security to keep pace with DevOps teams and their deployment models, we need to automate core security tasks by embedding security controls and processes both into deployments and into running production workloads. To successfully implement an adaptive micro-segmentation strategy in a fast-paced DevOps environment, controls need to be defined for applications and workloads according to standards and requirements, and then applied automatically in several places (a simplified sketch follows the list):

  • Within the workload template or image, or container image running within a virtual infrastructure
  • Within configuration templates for “infrastructure as code” tools such as Amazon Web Services CloudFormation or Terraform
  • Within a software-defined network security policy that encapsulates any running workload and allows or denies specified traffic patterns
  • Within a central policy engine/enforcement point to arbitrate network and application traffic between virtual workloads
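One way to picture “define once, apply in several places” is to render a single workload security specification into artifacts each of those layers can consume. The simplified sketch below turns one hypothetical spec into tag metadata that an infrastructure-as-code template could attach to an instance and into allow-list rules a software-defined network policy could enforce; the spec format, names, and rule syntax are assumptions, not CloudFormation, Terraform, or any vendor’s policy language.

# Minimal sketch of "define once, apply in several places": a single workload
# security spec (hypothetical format) is rendered both as tag metadata that an
# infrastructure-as-code template could attach to the instance, and as
# allow-list firewall rules a software-defined network policy could enforce.

WORKLOAD_SPEC = {
    "name": "app-tier",
    "environment": "prod",
    "allowed_inbound": [
        {"from": "web-frontend", "port": 8443, "protocol": "tcp"},
    ],
    "allowed_outbound": [
        {"to": "db-tier", "port": 5432, "protocol": "tcp"},
    ],
}

def render_tags(spec: dict) -> dict:
    """Tag metadata an IaC template could attach to the provisioned workload."""
    return {"Name": spec["name"], "Environment": spec["environment"],
            "SecurityProfile": f"{spec['name']}-profile"}

def render_firewall_rules(spec: dict) -> list[str]:
    """Human-readable allow rules; everything not listed is implicitly denied."""
    rules = [f"allow in  from {r['from']} port {r['port']}/{r['protocol']}"
             for r in spec["allowed_inbound"]]
    rules += [f"allow out to {r['to']} port {r['port']}/{r['protocol']}"
              for r in spec["allowed_outbound"]]
    return rules + ["deny all"]

print(render_tags(WORKLOAD_SPEC))
for rule in render_firewall_rules(WORKLOAD_SPEC):
    print(rule)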

Next-Generation Security in the Software-Defined Data Center

As we shift toward a fully software-defined data center (SDDC), both in-house and across various public cloud environments, a number of things are changing in the realm of information security. Fortunately, all of these changes are positive and will help us finally get a handle on some of the industry’s more pressing and challenging problems.

We’re embracing software-defined security, which includes everything from configuration profiles defined within templates to network policy embedded in the virtual network stack across a hypervisor environment. The use of APIs and virtual appliances from vendors will only grow, eventually replacing many of the hardware-driven platforms we’ve been used to. We’ll see new skills emerging, focused on software-based security definitions and application security mapping and control, along with process updates that rely heavily on automation and orchestration platforms. Development, operations, and information security teams are blending and aligning more closely than ever before. Additionally, we’ve seen new use cases and implementation methods for security arise from the unification of the control and policy planes within the SDDC. Some of these use cases include the following:

  • Adaptive micro-segmentation for application security. This is probably the “killer use case” among them, as we’ve discussed in this paper. A dynamic “zero trust” policy engine that adapts to changing workload applications and traffic patterns in DevOps environments will enable security and networking teams to construct far more effective and sustainable isolation and segmentation strategies that grow with an organization’s cloud strategy (see the sketch following this list). By helping with hygiene (inventory and configuration) as well as access control (isolation and affinity policy between workloads), the SDDC facilitates highly granular application whitelisting models at all layers.
  • Software-defined DMZs. In alignment with the previous use case, software-defined DMZs can be flexibly created and managed to encapsulate certain types of workloads of varying sensitivity. By adding a workload into a defined DMZ, it inherits the DMZ’s policies and can immediately adapt to the environment. Software-based DMZs can also be much more rapidly provisioned and updated compared to traditional physical network segments.
  • Security for virtual desktops. As more organizations embrace Virtual Desktop Infrastructure (VDI), we will start to see the same benefits of micro-segmentation and application control applied in end-user compute environments. Today, many attacks begin with the end user and move laterally from end-user desktops in the early stages of the attack lifecycle. Few network environments are well equipped to detect or control this kind of behavior; VDI with a security layer embedded in the virtual network greatly improves the ability to control which traffic is allowed and to provide early warning of possible malware or attacker compromise.
  • Agentless anti-malware. Offloading workload antimalware processing to a dedicated virtual appliance or other virtualization-compatible engine has been common for some time but will only continue to grow and advance as SDDC and cloud deployments proliferate. Integrating antimalware technology with the virtual control plane via APIs also facilitates more effective detection and response capabilities, such as automatically quarantining a suspicious or infected virtual machine.
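The adaptive micro-segmentation use case can be pictured as a label-driven, default-deny policy engine: rules are written against labels rather than addresses, so a newly provisioned or re-labeled workload inherits the correct policy immediately, much as a workload added to a software-defined DMZ inherits that DMZ’s policies. The labels, rules, and workload names in the sketch below are illustrative assumptions only, not any vendor’s API.

# Minimal sketch of a label-driven, default-deny policy engine (illustrative
# only). Policies are written against labels rather than addresses, so a newly
# provisioned workload inherits the right rules the moment it is labeled --
# the "adaptive" part of adaptive micro-segmentation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_label: str
    dst_label: str
    port: int

# Policy is expressed once per application relationship.
POLICY = {
    Rule("web", "app", 8443),
    Rule("app", "db", 5432),
}

# Label assignments change as workloads are created, moved, or retired.
labels = {"vm-010": "web", "vm-020": "app", "vm-030": "db"}

def is_allowed(src_vm: str, dst_vm: str, port: int) -> bool:
    """Default deny: traffic passes only if a label-to-label rule permits it."""
    rule = Rule(labels.get(src_vm, "unlabeled"), labels.get(dst_vm, "unlabeled"), port)
    return rule in POLICY

print(is_allowed("vm-010", "vm-020", 8443))  # True: web -> app on 8443
print(is_allowed("vm-010", "vm-030", 5432))  # False: web may not reach db directly

# A new app-tier workload simply inherits the existing policy via its label.
labels["vm-021"] = "app"
print(is_allowed("vm-021", "vm-030", 5432))  # True immediately, with no rule rewrite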

There are certain to be many more security-oriented use cases that emerge as the SDDC and DevSecOps technologies and techniques take hold in modern data centers and cloud environments.
