Today, applications are deployed across multiple data centers, and organizations are also running workloads in private and public clouds. Within the data center, workloads run on bare-metal servers, as virtual machines, and in containers. Several different security solutions are deployed to secure the east-west traffic between all these workloads spread across diverse environments, and the lack of a single security paradigm makes security segmentation complex and hard to achieve.

Traditionally, security segmentation is enforced on the network infrastructure, implemented with ACLs, VLANs, zones, firewalls, security groups, and similar constructs. To contain the lateral movement of threat actors who gain a foothold in the environment, organizations are seeking tighter controls around their applications and environments. They are moving toward a zero-trust approach, implementing tighter segmentation policies between the workloads in different applications or environments. This means more filtering rules on the switches. If we implement fine-grained segmentation on the network, the number of rules grows combinatorially, roughly with the square of the number of workloads. Maintaining the filtering rules as workloads change, move, or migrate is not an easy task, and this challenge only gets worse with scale. Most organizations never get security segmentation to the granularity they desire because it is simply not possible to achieve with network constructs and the resources needed to implement and manage them.
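To make the scaling problem concrete, here is a back-of-the-envelope sketch. The workload counts and the assumption of two allowed services per pair are illustrative, not drawn from any specific environment:

```python
# Back-of-the-envelope count of pairwise allow rules for fine-grained
# segmentation enforced with network constructs. Inputs are illustrative.

def pairwise_rules(num_workloads: int, services_per_pair: int = 1) -> int:
    """Each ordered source/destination pair needs a rule per allowed service."""
    return num_workloads * (num_workloads - 1) * services_per_pair

for n in (100, 1_000, 10_000):
    print(f"{n:>6} workloads -> {pairwise_rules(n, services_per_pair=2):,} rules")

# Output:
#    100 workloads -> 19,800 rules
#   1000 workloads -> 1,998,000 rules
#  10000 workloads -> 199,980,000 rules
```

Even a modest environment quickly exceeds what can be written and audited by hand, which is why rule maintenance breaks down at scale.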

Network devices, firewalls, and SDN technologies that ultimately implement segmentation using network constructs do a great job and perform exactly as they are programmed. But when humans program these devices, there is a lot of room for error, and threat actors only need one error to get in. SDN-based technologies are great for network automation, yet when it comes to segmentation they still fall short of implementing it easily and at scale. A few fundamental requirements for easily and safely achieving security segmentation, without breaking application traffic or disturbing the business, are:

1. A live application dependency map
2. A segmentation policy that is not dependent on IP addresses (see the sketch below)
3. An automated way to easily model, test, and enforce a policy to ensure safe segmentation

Almost every segmentation approach using network gear or SDN technologies lacks these three basic requirements, and that failure shows up in the later stages of segmentation projects as the scale goes up.
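As a minimal sketch of the second requirement, a policy can be written against workload labels instead of addresses. The label scheme and the allow-list below are hypothetical examples, not any particular product's policy model:

```python
# Minimal sketch of an IP-independent, label-based segmentation policy.
# Labels and the allow-list are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    app: str    # e.g. "payments"
    env: str    # e.g. "prod"
    role: str   # e.g. "web", "db"

# Policy is expressed over labels, not IPs: (src labels, dst labels, port)
ALLOW = [
    ({"app": "payments", "env": "prod", "role": "web"},
     {"app": "payments", "env": "prod", "role": "db"},
     5432),
]

def is_allowed(src: Workload, dst: Workload, port: int) -> bool:
    """A flow is allowed if some rule's labels match both endpoints."""
    for src_sel, dst_sel, allowed_port in ALLOW:
        if (port == allowed_port
                and all(getattr(src, k) == v for k, v in src_sel.items())
                and all(getattr(dst, k) == v for k, v in dst_sel.items())):
            return True
    return False

web = Workload("web-1", app="payments", env="prod", role="web")
db = Workload("db-1", app="payments", env="prod", role="db")
print(is_allowed(web, db, 5432))  # True, regardless of either workload's IP
print(is_allowed(db, web, 5432))  # False: no rule allows db -> web
```

Because the rule references labels rather than addresses, a workload can move, migrate, or change IP without any policy update; only its labels matter.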

We see more and more organizations adopting the zero-trust approach using host-based segmentation solutions. Unlike other security products such as antivirus software and HIPS, host-based micro-segmentation tools tend to be based on lightweight agents. These agents are not inline, nor do they route traffic from kernel to user space for filtering and inspection. Rather, the agents look at connection tables, collect telemetry, and report on workload behavior to a central management system. Once a policy is established, the local agents are used for policy enforcement and monitoring only.
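As a rough illustration of the telemetry side of this model, the sketch below reads the local connection table with the psutil library and builds a summary a central manager could consume. The report format is an invented example, not any real agent's protocol:

```python
# Minimal sketch of agent-style telemetry collection: snapshot the host's
# connection table and summarize it for a central management system.

import socket
import psutil

def collect_flows():
    """Snapshot established TCP connections from the local connection table."""
    flows = []
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
            flows.append({
                "laddr": f"{conn.laddr.ip}:{conn.laddr.port}",
                "raddr": f"{conn.raddr.ip}:{conn.raddr.port}",
                "pid": conn.pid,
            })
    return flows

report = {"host": socket.gethostname(), "flows": collect_flows()}
print(f"{len(report['flows'])} established flows observed on {report['host']}")
# A real agent would ship `report` to its central manager over an
# authenticated channel instead of printing it.
```

Note that nothing here sits in the data path: the agent observes connections the kernel has already established, which is why this approach avoids the inline-inspection overhead of traditional host security products.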

Security professionals with lingering performance concerns would be best served by conducting their own performance testing on host-based micro-segmentation agents to assess the performance and operational impact for themselves; a simple before-and-after benchmark like the sketch below is a good starting point. Here is a good paper by John Oltsik, Senior Principal Analyst at Enterprise Strategy Group, that discusses host-based micro-segmentation and overcoming historical biases. He points out that the performance issues are obsolete, that other security concerns have been noted and addressed, and that central management keeps the operational overhead manageable. The historical biases about host-based security technologies are certainly valid for some applications but can be thought of as “false knowledge” with regard to micro-segmentation. With that in mind, organizations must understand the tradeoffs associated with different micro-segmentation technologies and make their decisions based on the technologies available in 2019 rather than in 2009.
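For those who want to run that test themselves, here is a minimal sketch of such a benchmark: measure TCP connection-setup latency to a test service with the agent disabled, enable enforcement, and repeat. The target address is a placeholder, and a real test plan would also cover throughput and CPU overhead:

```python
# Minimal before/after micro-benchmark sketch for agent overhead:
# time TCP connection setup to a test service, run once with the agent
# disabled and once with enforcement enabled, then compare the numbers.

import socket
import statistics
import time

TARGET = ("10.0.0.5", 8080)  # placeholder: a test service behind the policy
SAMPLES = 200

def connect_latency_ms() -> float:
    """Time one TCP three-way handshake to the target, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection(TARGET, timeout=2):
        pass
    return (time.perf_counter() - start) * 1000

samples = [connect_latency_ms() for _ in range(SAMPLES)]
print(f"median {statistics.median(samples):.2f} ms, "
      f"p95 {statistics.quantiles(samples, n=20)[18]:.2f} ms")
```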