Effective Cloud Security: Beyond Microsegmentation
According to research from Risk Based Security, 2019 is on track to be the “worst year on record” for breach activity.
Despite a whole host of security tools and control systems, breaches continue to occur; they are more frequent, and the blast radius is wider. More often than not, they follow a pattern: insider threats, compromised user accounts, or exploitation of zero-day vulnerabilities. The traditional enterprise perimeter-based security model is focused on preventing a perimeter breach, and once the perimeter is breached, little is done to prevent lateral movement of an attacker. The recent Capital One and Uber breaches have shown how attackers with compromised user accounts moved laterally to download the data of millions of users from the enterprises’ S3 buckets.
Microsegmentation is a Micro Perimeter
Enterprises are increasingly embracing Zero Trust security postures to combat threats that cannot be easily identified, or attack vectors hitherto unknown. This security model is strongly rooted in the principle “Never Trust, Always Verify”; hence, Zero Trust. For most enterprises, Zero Trust is built on the concept of microsegmentation. Microsegmentation allows enterprises to segment workloads into groups, with group membership determined by factors such as application type, function type, department, location, and so on, and to secure those groups through group-based policies. Eventually this evolved into label- or tag-based group membership, with security policies inferred from group membership. Thus intent-based policies were born.
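To make this concrete, here is a minimal sketch in Python of how label-based group membership can drive intent-based policies. The workload labels, IPs, and the policy itself are hypothetical examples, not a real product’s schema: policies are expressed over labels, never over addresses.

```python
# Minimal sketch of intent-based, label-driven segmentation.
# Labels, addresses, and the policy below are hypothetical examples.

workloads = {
    "10.0.1.12": {"app": "web", "env": "prod"},
    "10.0.2.7":  {"app": "payments", "env": "prod"},
    "10.0.3.3":  {"app": "payments", "env": "dev"},
}

# Intent: "prod web may talk to prod payments on 443",
# expressed over labels, not IP addresses.
policies = [
    {"from": {"app": "web", "env": "prod"},
     "to":   {"app": "payments", "env": "prod"},
     "port": 443},
]

def matches(labels, selector):
    """A workload belongs to a group if it carries all the selector's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def is_allowed(src_ip, dst_ip, port):
    src, dst = workloads.get(src_ip, {}), workloads.get(dst_ip, {})
    return any(matches(src, p["from"]) and matches(dst, p["to"])
               and port == p["port"] for p in policies)

print(is_allowed("10.0.1.12", "10.0.2.7", 443))  # True: allowed by intent
print(is_allowed("10.0.1.12", "10.0.3.3", 443))  # False: dev was never whitelisted
```

Because membership is inferred from labels, a workload that scales out or moves inherits the right policy automatically; no rule references a specific address.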

Traditional Data Center vs Microsegmented Data Center: Source VMware
Microsegmentation was a key enabler of zero trust security architectures, allowing security teams to whitelist the workloads that can access services. While this provided some degree of isolation and permissioned access, microsegmentation would not have addressed the Capital One breach. A true zero trust architecture must evolve beyond network-level controls.
Traditional security postures built on L3/L4 firewalls, WAFs, IDS, and IPS systems are geared toward a perimeter-based security approach. With microsegmentation, the perimeter has moved closer to the workload, and in the event of a breach, microsegmentation lets security teams contain lateral movement at the network level and limit the fallout. But,
Is network-layer segmentation enough? Did network microsegmentation falter at preventing lateral movement of threats in recent well-publicized breaches?
This harks back to the evolution of next-generation firewalls (NGFW), when perimeter firewalls evolved from Layer 3/Layer 4 rule-based firewalls to deep packet inspection (DPI) based firewalls. That shift was necessitated by the fact that applications eventually moved to HTTP or HTTPS, and deeper insight allowed for fine-grained, per-application policy and control. Similar needs led to the evolution of Cloud Access Security Brokers (CASB) to shed light on shadow IT and to gain insight into the applications enterprise users rely on. Enterprises no longer simply permit or deny all outgoing port 80/443 traffic, but can closely monitor the usage of enterprise-sanctioned applications.
A whitelist approach for a cloud era
We are seeing a similar shift within the data center (VPC), where L3/L4 microsegmentation does not suffice to prevent a breach like the one Capital One experienced. It is key to understand the role of each workload in the data center and its relationship to the overall functioning of the application. Whitelisting application access beyond the application’s network behavior allows security operations to detect abnormal behavior and scrutinize such interactions closely. We can take a cue from the way NGFWs and CASBs evolved, and extend whitelisting from the network layer (whitelisted microsegments) to the application layer (whitelisted applications).
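As an illustration, the Python sketch below layers an application-level whitelist on top of a network-level one. The segment, service, and API names are hypothetical: the point is that an allowed network path is no longer sufficient on its own; the specific API call must also be whitelisted.

```python
# Sketch: application-layer whitelist on top of network-layer segmentation.
# Segment, service, method, and path names are hypothetical examples.

# Network-layer whitelist: (src_segment, dst_segment, port)
network_whitelist = {("frontend", "orders", 443)}

# Application-layer whitelist: (src_service, method, api_path)
app_whitelist = {
    ("frontend", "GET",  "/orders"),
    ("frontend", "POST", "/orders"),
}

def check(src_segment, dst_segment, port, src_service, method, path):
    # Both layers must agree before the interaction is permitted.
    if (src_segment, dst_segment, port) not in network_whitelist:
        return "deny: network"
    if (src_service, method, path) not in app_whitelist:
        return "deny: application"  # e.g. an unexpected bulk-export call
    return "allow"

# A compromised credential on an allowed network path is still blocked
# when it invokes an API that was never part of normal behavior.
print(check("frontend", "orders", 443, "frontend", "GET", "/orders"))         # allow
print(check("frontend", "orders", 443, "frontend", "GET", "/orders/export"))  # deny: application
```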
There are unique challenges to creating this whitelist, and trends such as multi-cloud, hybrid cloud, and microservices significantly affect the ability to create and continuously manage this whitelist blueprint.
Shared responsibility SHIFTS responsibility
With the continued growth of cloud migration, the security of cloud applications is even more in focus. Cloud providers advocate a shared responsibility model in which security responsibilities are split between the cloud provider and the enterprise deploying cloud applications. The shared cloud security model is sometimes aptly described as security-of-the-cloud vs. security-in-the-cloud. Here are two very similar models of shared cloud security from Microsoft Azure and Amazon AWS.

Microsoft Azure and AWS shared responsibility models: Source Microsoft & Amazon
These models lay out very clearly which parts of the security posture the cloud provider is responsible for. The onus is on the enterprise to implement a zero trust paradigm and to ensure that access to assets deployed in the public cloud is appropriately controlled. The cloud infrastructure makes it possible to create old-fashioned microsegmentation, but it is up to the enterprise to build a truly application-aware whitelist infrastructure. Multi-cloud and hybrid cloud further complicate the implementation of an effective security architecture.
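One small, concrete example of the enterprise’s side of this bargain: verifying that no S3 bucket quietly allows public access. The sketch below uses boto3 and assumes AWS credentials are already configured; the reporting logic is illustrative, not a complete audit.

```python
# Sketch: auditing one slice of security-in-the-cloud with boto3.
# Assumes AWS credentials are configured; output handling is illustrative.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name}: public access only partially blocked: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")
        else:
            raise
```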
Microservices complicate security policies
In his book Agile Software Development: Principles, Patterns, and Practices, Robert C. Martin introduced the single responsibility principle, and in a way the microservices architecture embodies that principle. In this paradigm, large monolithic applications are broken down into micro applications developed by smaller teams. Each microservice, or set of microservices, follows its own cadence of development and release.

Monolithic vs Microservices Architectures: Source Atlassian
A plethora of ways to build, deploy, and manage microservices further exacerbates the issue of microsegmentation. In one fell swoop, the problem of network segmentation and network whitelisting became a problem of micro-application whitelisting. The challenges include, but are not limited to:
- Services and components are smaller in form and function, but the number of such components increases significantly.
- Auto scaling, self healing, and service discovery make the infrastructure dynamic and unpredictable (see the sketch after this list).
- Development and release at the pace of Agile means that SecDevOps needs to adapt to that pace.
- There are many orchestration platforms (Kubernetes, OpenShift, Nomad, Mesos, Docker Swarm…) and many different approaches to service discovery.
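To see why static, address-pinned whitelists break down in such an environment, consider this Python sketch using the official kubernetes client. The namespace and the “app” label key are assumptions for illustration: it tracks group membership from labels as pods churn, so downstream policies can reference the group rather than any individual IP.

```python
# Sketch: keeping label-based group membership current as pods churn.
# Uses the official `kubernetes` Python client; the namespace and the
# "app" label key are illustrative assumptions.
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

groups = {}  # label value -> set of pod IPs, e.g. {"payments": {"10.0.2.7"}}

for event in watch.Watch().stream(v1.list_namespaced_pod, namespace="default"):
    pod = event["object"]
    app = (pod.metadata.labels or {}).get("app")
    ip = pod.status.pod_ip
    if not app or not ip:
        continue
    members = groups.setdefault(app, set())
    if event["type"] in ("ADDED", "MODIFIED"):
        members.add(ip)
    elif event["type"] == "DELETED":
        members.discard(ip)
    # Policies reference the group, so no rule needs rewriting when
    # autoscaling or self-healing replaces a pod.
    print(event["type"], app, sorted(members))
```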
Auto-discovery and zero-touch configuration
Given the complexity and dynamic nature of today’s cloud environments, it is impractical to manually coordinate and create the security configuration. The only plausible way is to auto-discover, learn, and auto-create it. The system needs to discover application interactions, the APIs in use, the infrastructure accessed by applications, and the data accessed by applications, and auto-create a blueprint.
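A minimal sketch of the learning step, in Python: the `observed` records below are a hypothetical stand-in for real flow telemetry, and the aggregation shows how observed interactions can be distilled into a blueprint of who calls whom, on which API, touching which data.

```python
# Sketch: auto-creating a whitelist blueprint from observed interactions.
# The `observed` records are a hypothetical stand-in for real telemetry.
from collections import defaultdict

observed = [
    {"src": "frontend", "dst": "orders",   "api": "POST /orders", "data": "orders-db"},
    {"src": "frontend", "dst": "orders",   "api": "GET /orders",  "data": "orders-db"},
    {"src": "orders",   "dst": "payments", "api": "POST /charge", "data": "ledger-db"},
]

def build_blueprint(records):
    """Learn (src, dst) -> {apis, data stores} from observed traffic."""
    blueprint = defaultdict(lambda: {"apis": set(), "data": set()})
    for r in records:
        edge = blueprint[(r["src"], r["dst"])]
        edge["apis"].add(r["api"])
        edge["data"].add(r["data"])
    return blueprint

blueprint = build_blueprint(observed)
for (src, dst), edge in blueprint.items():
    print(f"{src} -> {dst}: apis={sorted(edge['apis'])} data={sorted(edge['data'])}")

# Anything falling outside this learned blueprint becomes an anomaly
# for SecOps to scrutinize.
```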
Taking a black-box approach to application security takes us down a path of guessing the application’s requirements and could lead to imprecise controls. Understanding how applications are developed, how their services are used, how they interact in the ecosystem, the privilege levels they need to access infrastructure, what data they need, when they need it, and at what privilege level will help create an effective security strategy. More importantly, it will enable SecOps to react to security events with speed and precision.