Network firewalls were created as the primary perimeter defense for most organizations. A firewall is a network device that monitors packets entering and leaving a network and blocks or allows them according to rules that define what traffic is permissible and what is not. Several types of firewalls have evolved over the years (proxy, stateful, web application, and the newest next-generation firewalls), becoming progressively more sophisticated and considering more parameters when deciding whether traffic should be allowed to pass. Firewalls started out as simple packet filters, but the newest do far more.
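The core idea, stripped of everything else, is ordered rule matching. The sketch below is a toy illustration (not a real firewall): hypothetical rules match on source, destination, and port, the first match wins, and unmatched traffic is denied by default.

```python
def evaluate(rules, packet):
    """Return 'allow' or 'deny' for a packet against an ordered rule list."""
    for rule in rules:
        if (rule["src"] in (packet["src"], "any")
                and rule["dst"] in (packet["dst"], "any")
                and rule["port"] in (packet["port"], "any")):
            return rule["action"]  # first matching rule wins
    return "deny"  # default-deny: traffic matching no rule is blocked

# Hypothetical rule set: one specific allow, then a catch-all deny.
rules = [
    {"src": "10.0.1.5", "dst": "10.0.2.9", "port": 443, "action": "allow"},
    {"src": "any", "dst": "any", "port": "any", "action": "deny"},
]

print(evaluate(rules, {"src": "10.0.1.5", "dst": "10.0.2.9", "port": 443}))  # allow
print(evaluate(rules, {"src": "10.0.3.7", "dst": "10.0.2.9", "port": 22}))   # deny
```

Stateful and next-generation firewalls layer connection tracking and application awareness on top of this, but the rule-evaluation core is the same.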

Initially placed at the boundaries between trusted and untrusted networks, firewalls are now also being deployed to protect internal network segments. Is bringing firewalls deep into the data center to create micro-perimeters the best solution? The problem is not new. Segmentation has been around for as long as we have been connecting networks. We started by segmenting with routers placed at the edge, called perimeter firewalls, which separated the "inside" from the "outside". We then progressed to creating DMZs using two-tier firewalls, the "firewall sandwich", which created trusted, semi-trusted, and untrusted environments. Another approach has been to leverage the capabilities of networking infrastructure, using VLANs and ACLs on switches and routers to define segments on the network. But the primary motivation of those devices is network performance, essentially the efficient routing of packets. Slowly the perimeter firewalls moved deeper into the data center. This is manageable with fewer workloads: it is not an issue for the first few hundred operating system instances, virtual or bare metal. The challenges surface when we define more granular policies and when the scale increases, both of which are critical for security segmentation that creates micro-boundaries within data centers and the cloud.
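VLAN/ACL segmentation boils down to mapping addresses to subnets and allow-listing which subnet pairs may talk. A minimal sketch, with made-up segment names and address ranges:

```python
import ipaddress

# Hypothetical segments defined by subnet, as a switch ACL would see them.
SEGMENTS = {
    "web": ipaddress.ip_network("10.0.1.0/24"),
    "db":  ipaddress.ip_network("10.0.2.0/24"),
}

# ACL: which (source, destination) segment pairs may communicate.
ACL = {("web", "db")}

def segment_of(ip):
    """Map an address to its segment, or None if it fits no subnet."""
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def permitted(src_ip, dst_ip):
    return (segment_of(src_ip), segment_of(dst_ip)) in ACL

print(permitted("10.0.1.20", "10.0.2.5"))  # True: web -> db is allow-listed
print(permitted("10.0.2.5", "10.0.1.20"))  # False: db -> web is not
```

The limitation the post describes follows directly: policy is tied to subnets and device placement, so a workload that moves to a new address silently changes segment.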

None of these approaches gives visibility into application flows: how applications talk to one another. Visibility is key to safely building, modeling, and testing policies before enforcing segmentation. The other critical aspect of security segmentation is uniformity: you need a common security construct to define granular policies and automate security irrespective of where your application servers live or move. Today's complex application architectures span multiple data centers and clouds and adopt virtualization, containerization, and microservice models. Firewalls are not the answer for building granular security policies across all these deployment models, and the operational challenge only gets worse with the scale and granularity of the policies.
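One way to picture that common security construct is a policy written in terms of workload roles rather than addresses. A minimal sketch, with hypothetical role names and addresses: the rule "web may reach db on 5432" holds no matter which data center or cloud the workload currently sits in, because enforcement resolves roles to current addresses at evaluation time.

```python
# Hypothetical inventory: role -> current addresses, kept up to date by
# whatever orchestration or discovery system the environment uses.
inventory = {
    "web": {"10.0.1.20", "192.168.5.4"},  # spans data center and cloud
    "db":  {"10.0.2.5"},
}

# Allow-list between roles; everything else is implicitly denied.
policy = [
    {"src": "web", "dst": "db", "port": 5432},
]

def allowed(src_ip, dst_ip, port):
    for rule in policy:
        if (src_ip in inventory[rule["src"]]
                and dst_ip in inventory[rule["dst"]]
                and port == rule["port"]):
            return True
    return False

print(allowed("192.168.5.4", "10.0.2.5", 5432))  # True, even from the cloud IP
print(allowed("10.0.2.5", "192.168.5.4", 5432))  # False: db may not reach web
```

When a web instance moves and gets a new address, only the inventory changes; the policy itself stays the same, which is the uniformity the paragraph above argues for.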

Static policies don’t work with today’s compute: cloud, VMs, containers, bare metal, and who knows what else is coming. Defining granular policies and managing them with firewalls is time-consuming and unmanageable. Firewall rules and ACLs get unwieldy; I personally had to deal with TCAM limits and memory outages on network devices many times.

Remembering the outages we had to work through after the 'permit and pray' phase of a change-control window, I wish we had had more visibility into application traffic flows. Bet on technologies that give you real-time visibility into application flows, not network maps, and not a map obtained by crawling config files. Bet on technologies that don’t create choke points: re-routing your application traffic and re-architecting your network should not even be options to consider. Leverage what you already have rather than rip and replace.

Embrace the modern shift to security segmentation decoupled from network infrastructure, which brings DevOps-centric security to your organization. Implement security segmentation on the application servers themselves, using their native stateful firewalls, without introducing kernel modifications. Read how Brian Chess talks about how Netsuite implemented segmentation to protect their high-value applications.
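The decoupled approach can be sketched as rendering each host's native firewall rules from a central, label-based policy. The snippet below is a minimal illustration, not any vendor's implementation: the policy and inventory are made up, and the iptables-style strings are shown only to suggest what a per-host renderer might emit for a Linux server's native stateful firewall.

```python
# Hypothetical label-based policy and inventory.
policy = [{"src_role": "web", "dst_role": "db", "port": 5432}]
inventory = {"web": ["10.0.1.20"], "db": ["10.0.2.5"]}

def rules_for(host_role):
    """Inbound allow rules for one host's role, plus a default drop."""
    rules = []
    for p in policy:
        if p["dst_role"] == host_role:
            for src in inventory[p["src_role"]]:
                # Illustrative iptables-style rule string; a real agent would
                # apply this via the host's own firewall tooling.
                rules.append(
                    f"-A INPUT -s {src} -p tcp --dport {p['port']} -j ACCEPT"
                )
    rules.append("-A INPUT -j DROP")  # default deny for everything else
    return rules

for r in rules_for("db"):
    print(r)
```

Because each server enforces only its own slice of the policy, there is no central choke point, and a workload that moves simply gets its rules re-rendered wherever it lands.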