Forces of Fundamental Shift in Application Security
In the course of evolution, some events act as turning points and make such a major impact that they are seen as defining moments and as the start of a new era in their respective spaces. We are at such a juncture in the application security space.
The adoption of cloud and virtualization for developing and deploying applications creates a huge shift and renders traditional security solutions ineffective and inapplicable in the modern cloud era.
This article is the first in a multi-article series examining the underlying factors contributing to the fundamental shift in the application security space. I’ll follow up with a deeper discussion of these factors and the emerging approaches and solutions in subsequent articles. Let’s start!
The Arrival of Address-less Workloads: Where Has My 5-Tuple Gone?
Before the adoption of cloud and virtualization technologies, applications were deployed on physical servers. Each physical server was assigned an IP address that remained more or less static for the life of the server. Thus, the application workloads deployed on those servers had a fixed address for most of their lifespan.
Security teams used the destination address of the workload to refer to the workload and created security policies for it. Security products offered ACLs based on a 5-Tuple (Source IP, Source Port, Destination IP, Destination Port, Protocol) to enable security teams to express policies to secure the workloads.
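To make the dependency on fixed addresses concrete, here is a minimal Python sketch of how such a 5-Tuple ACL might be expressed and evaluated (all names and addresses are hypothetical, not any particular product’s API):

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import List, Optional

@dataclass
class FiveTupleRule:
    src_net: str              # e.g. "10.0.1.0/24"
    src_port: Optional[int]   # None means "any source port"
    dst_net: str              # e.g. "10.0.2.10/32" -- the workload's fixed IP
    dst_port: int
    protocol: str             # "tcp" or "udp"
    action: str               # "allow" or "deny"

def evaluate(rules: List[FiveTupleRule], src_ip: str, src_port: int,
             dst_ip: str, dst_port: int, protocol: str) -> str:
    """Return the action of the first matching rule; default deny."""
    for r in rules:
        if (ip_address(src_ip) in ip_network(r.src_net)
                and (r.src_port is None or r.src_port == src_port)
                and ip_address(dst_ip) in ip_network(r.dst_net)
                and r.dst_port == dst_port
                and r.protocol == protocol):
            return r.action
    return "deny"

# The policy is only expressible because the workload's destination IP
# (10.0.2.10) is known in advance and stays static.
rules = [FiveTupleRule("10.0.1.0/24", None, "10.0.2.10/32", 443, "tcp", "allow")]
print(evaluate(rules, "10.0.1.15", 51000, "10.0.2.10", 443, "tcp"))  # -> "allow"
```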
The use of dedicated physical servers for deploying workloads quickly led to wasted capacity and underutilized servers, high operational overhead, and the cost of maintaining large server farms.
Virtualization saved the day. With virtualization, the available physical hosts were abstracted into an aggregate pool of compute capacity in the data center. Workloads were dynamically instantiated on one of the available physical hosts and could be placed on any of them, which meant a workload’s address became dynamic.
Networking solutions responded to the advent of virtualization with overlays and virtual networks. Thus, while virtualization marks the beginning of address-less workloads, the disruption was masked by overlay networks and network virtualization technologies: security teams and solutions could still use the 5-Tuple available inside the virtual networks.
Recent advancements in containerization and serverless technologies make address-less workloads more prominent. Platforms like K8s instantiate containers dynamically and inherently prevent the assignment of any form of static IP address to the workload. The workload-to-IP-address mapping is managed internally by the underlying platform.
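As a rough illustration using the official Kubernetes Python client (the namespace and the `app=checkout` label are assumptions for the example), the workload is identified by its labels, while its pod IPs are assigned by the platform and change on every reschedule:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config()
# when running inside a pod).
config.load_kube_config()
v1 = client.CoreV1Api()

# Find the workload by its labels -- the stable handle -- rather than by IP.
pods = v1.list_namespaced_pod("default", label_selector="app=checkout")
for pod in pods.items:
    # status.pod_ip is assigned by the platform and is ephemeral; it changes
    # whenever the pod is rescheduled, so it cannot anchor a 5-Tuple policy.
    print(pod.metadata.name, pod.status.pod_ip, pod.metadata.labels)
```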
A serverless platform like AWS Lambda goes a step further and decouples the workload from the underlying infrastructure even more. A workload is instantiated on demand to serve a specific function, and it can be instantiated without any pre-context of the underlying host or network segment.
Workloads becoming address-less is the first major fundamental shift. Because workloads no longer have a fixed network address known to the security team in advance, traditional security solutions whose policies are designed in terms of 5-Tuples no longer work. Security policies can no longer be expressed in terms of networking parameters like destination IP addresses or subnets.
In the cloud era, IP-address-based identification has been replaced with identity- and attribute-based identification of workloads. This leads to newer areas that evolve around securing address-less workloads. We’ll delve more deeply into these topics, such as application micro-segmentation, discovering workloads using service discovery, and workload identity frameworks like the Secure Production Identity Framework for Everyone (SPIFFE).
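In contrast to the 5-Tuple sketch above, here is a minimal, hypothetical example of a policy keyed on SPIFFE-style workload identities instead of addresses (the trust domain and service names are made up for illustration):

```python
# Identities stay stable no matter where, or how often, the workload runs.
ALLOWED_CALLS = {
    # caller SPIFFE ID                          -> callee SPIFFE IDs it may reach
    "spiffe://example.org/ns/prod/sa/frontend": {"spiffe://example.org/ns/prod/sa/checkout"},
    "spiffe://example.org/ns/prod/sa/checkout": {"spiffe://example.org/ns/prod/sa/payments"},
}

def is_allowed(caller_id: str, callee_id: str) -> bool:
    """Allow only explicitly listed identity-to-identity calls; default deny."""
    return callee_id in ALLOWED_CALLS.get(caller_id, set())

print(is_allowed("spiffe://example.org/ns/prod/sa/frontend",
                 "spiffe://example.org/ns/prod/sa/checkout"))  # True
print(is_allowed("spiffe://example.org/ns/prod/sa/frontend",
                 "spiffe://example.org/ns/prod/sa/payments"))  # False
```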
Workloads Become Mobile: Living in a Perimeter-less World
Back in the old days, when workloads were tied to physical hosts, security teams could secure the perimeter around those hosts and feel assured that they were providing a secure environment for the workload.
Since virtualization abstracts the underlying physical hosts into a pool of available compute capacity, workloads are now instantiated on any available host. As usage requirements change, workloads are migrated to the best available physical host. The first iteration of this was moving a workload from one host to another within the same data center. As the technology matured, migration across data centers and across multiple types of environments (public and private cloud) became a reality.
As a continued evolution, containerization platforms make this migration even more seamless by completely abstracting the underlying compute, network, and storage.
Workload mobility is another big shift. The perimeter-based approach to security no longer works. In the cloud era, keeping security policies tied to a specific network segment or a specific host is no longer effective for protecting mobile workloads.
Securing mobile workloads requires a more pervasive and distributed security approach. We’ll go over various approaches to creating distributed security layers, such as RASP, sidecars, and other insertion mechanisms, as well as how these approaches address distributed workload security.
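To give a feel for the sidecar insertion pattern, here is a toy Python sketch (ports, the header name, and the allowed caller list are assumptions, not any specific product’s behavior) of a proxy that runs next to the workload and enforces a check on every inbound request before forwarding it to the local application:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

APP_URL = "http://127.0.0.1:8080"   # the real workload, reachable only via the sidecar

class Sidecar(BaseHTTPRequestHandler):
    def do_GET(self):
        # The enforcement point travels with the workload, wherever it runs.
        if self.headers.get("X-Client-Identity") not in {"frontend", "checkout"}:
            self.send_error(403, "caller identity not allowed")
            return
        # Forward the allowed request to the co-located application.
        with urlopen(Request(APP_URL + self.path)) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # All inbound traffic for the workload is directed at the sidecar's port.
    HTTPServer(("0.0.0.0", 9090), Sidecar).serve_forever()
```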
Ephemeral & Elastic Workloads: Humans as Control Planes Don’t Scale
One of the basic promises of cloud technologies is to enable operational efficiency and reduce cost. Cloud platforms enable the pay-as-you-use model, and various public and private cloud platforms, as well as orchestrators like K8s, dynamically scale workloads up and down according to load and demand.
The total lifetime of a workload has shrunk from months or years to hours. Workloads are short-lived and kept around only as long as they are needed.
This presents a new set of challenges in terms of configuring and managing security infrastructure and policies for dynamic and ephemeral workloads.
Policy configuration needs to adjust dynamically to scale-up and scale-down events. Traditional solutions driven by a configuration file don’t work anymore; configuration needs to be API-driven and automated.
Systems that rely on manual, human-admin-centric configuration cannot be used in cloud environments.
Another challenge is that as workloads scale up and down, the security infrastructure has to scale up and down accordingly, both for reliability and to fully realize the cost savings. This makes traditional security solutions with fixed capacities unfit for dynamic and elastic environments.
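As a rough sketch of what “API-driven and automated” can look like in practice (the `apply_policy_for`/`remove_policy_for` hooks, namespace, and label selector are hypothetical), a small controller can watch scale events and reconcile security policy without a human in the loop:

```python
from kubernetes import client, config, watch

def apply_policy_for(pod):
    # Hypothetical hook: push identity/attribute-based rules for the new instance.
    print("apply policy for", pod.metadata.name, pod.metadata.labels)

def remove_policy_for(pod):
    # Hypothetical hook: garbage-collect rules when the instance scales away.
    print("remove policy for", pod.metadata.name)

config.load_kube_config()
v1 = client.CoreV1Api()

# React to scale-up/scale-down events instead of editing a static config file.
for event in watch.Watch().stream(v1.list_namespaced_pod, "default",
                                  label_selector="app=checkout"):
    if event["type"] == "ADDED":
        apply_policy_for(event["object"])
    elif event["type"] == "DELETED":
        remove_policy_for(event["object"])
```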
Emergence of East-West: APIs Are Everywhere
To maximize the gains from the elasticity available to workloads, it becomes important to segregate different functions of the application into separate workloads (aka services). This provides two main benefits to businesses. First, it allows an individual function to scale up and down more granularly based on its own needs, maximizing operational efficiency and reducing the total cost of running the application. Second, it allows the different functions to be developed, deployed, and debugged separately.
It also creates an increased amount of workload-to-workload communication. Most of this traffic is API-layer traffic, as the different functions of the application communicate using APIs.
Before the cloud era, APIs were seen as a north-south interface, and most application-layer security protection was deployed at the perimeter for north-south services. That shifted in the cloud era: APIs became the primary means of communication among workloads internally.
Deeper application-layer protection became important for internal, east-west traffic in order to secure the data exchanged during those interactions.
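As a toy illustration of pushing application-layer checks into east-west calls (the token, header names, and scopes are made up for the example), each internal API handler can authenticate and authorize the caller instead of trusting traffic simply because it is “internal”:

```python
import hmac

def verify_token(presented: str, expected: str) -> bool:
    # Constant-time comparison of a (hypothetical) per-service credential.
    return hmac.compare_digest(presented, expected)

def handle_get_order(headers: dict, order_id: str):
    """An internal (east-west) API handler that never trusts the caller by default."""
    if not verify_token(headers.get("Authorization", ""), "svc-frontend-token"):
        return 401, {"error": "unauthenticated east-west call"}
    if headers.get("X-Scope") != "orders:read":
        return 403, {"error": "caller not authorized for orders:read"}
    return 200, {"order_id": order_id, "status": "shipped"}

print(handle_get_order({"Authorization": "svc-frontend-token",
                        "X-Scope": "orders:read"}, "A-1001"))   # (200, {...})
print(handle_get_order({}, "A-1001"))                           # (401, {...})
```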
To summarize, this article touched upon the underlying factors creating a paradigm shift in the application security landscape.
Stay tuned for a deeper dive into the various areas related to securing modern apps in the cloud era!