Debunking the Top Cloud Native Security Myths
Myth 1: The same old methods can be used to achieve compliance in a cloud native environment
Many compliance regulators do not include specific guidelines for the processes or artifacts unique to cloud native environments. Ironically, despite this lack of guidance (or possibly as a result of it), teams tend to believe that the classic controls satisfying compliance requirements in other environments will satisfy the same requirements for a cloud native architecture. In reality, the goal is an informed demonstration of “best effort,” aligned with the concepts the regulators lay out.
Cloud native environments generally involve controls for components and layers that are updated independently of one another. For example, where before we might have simply hardened a VM and scanned it for malware, now it’s important to scan container images, scan and harden the VM, and also scan and harden the orchestrator. And we must monitor and log events to show proof that the controls are in place.
For a practical example, PCI DSS guidelines require the separation of PCI from non-PCI systems. The guidelines name firewalls, physical access controls, Multi-Factor Authentication, active monitoring and the restriction of administrative access as methods to “provide reasonable assurance that the out-of-scope system cannot be used to compromise an in-scope system component.”
To achieve the level of separation required by PCI DSS in a multi-tenant Kubernetes cluster, we would need separate registries and pipelines, and we would divide resources and approved administrators across the separated entities via Kubernetes namespaces. Within those namespaces, proper labeling and tagging further enforce segregation, while RBAC controls access to the runtime and firewall policies enforced by the security tooling prevent violations of the desired segmentation. It’s all open to interpretation, but if we do that and show proof that these controls are in place, most auditors would be wholly satisfied.
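The namespace separation described above can be sketched in Kubernetes manifests. This is a minimal, illustrative fragment, not a complete PCI DSS implementation; the names (the “pci” namespace, the “pci-admins” group, the “scope” label) are assumptions chosen for the example.

```yaml
# Hypothetical "pci" namespace, labeled so network policies can select it.
apiVersion: v1
kind: Namespace
metadata:
  name: pci
  labels:
    scope: pci
---
# Restrict traffic: pods in the pci namespace may only talk to pods in
# namespaces carrying the same "scope: pci" label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-pci
  namespace: pci
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              scope: pci
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              scope: pci
---
# RBAC: grant the built-in "admin" role only to approved PCI administrators,
# scoped to this namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pci-admin-binding
  namespace: pci
subjects:
  - kind: Group
    name: pci-admins
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
```

Separate registries, pipelines, and image-scanning controls would sit outside the cluster and complement these manifests.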
Myth 2: The cloud provider will secure both account configurations and what is run in the cloud
It is critical to understand the responsibilities that a cloud provider will – and will not – assume, and where the gray areas exist. The AWS shared responsibility model, for example, describes security ‘of’ the cloud as the cloud provider’s responsibility, and security ‘in’ the cloud as the customer’s. This simplification can obscure two critical responsibilities for customers:
- The cloud provider is not responsible for the safe configuration of its customers’ accounts and services. While cloud providers offer many default security configurations, it is the customer’s responsibility to verify those configurations and add protection as needed for the security context of their applications. A false sense of security here can lead to a drastic underestimation of the time and effort required to properly configure a set of services. Gartner, in its report “How to Respond to the 2020 Threat Landscape,” states that, “Through 2023, at least 99% of cloud security failures will be the customer’s fault.”
Even a simple stack – an EC2 instance, an S3 bucket, Lambda for functions, and CloudTrail for auditing – requires dozens of key configurations to prevent potential data leaks and security breaches. The good news is that these mistakes can easily be prevented with a Cloud Security Posture Management (CSPM) solution, which can be set up by whoever in an organization has access to those cloud accounts. For simple deployments, prevention can be as easy as a quick, free trial.
- There is no single formula for protecting ‘in’ the cloud, and the customer is required to learn and understand the nuances involved. For example, Kubernetes in the cloud could run as managed EKS or as self-managed open source, on AWS’s default Amazon Linux 2 EC2 OS or on the customer’s own Linux OS. The shared responsibility model differs vastly between these options and requires homework on the part of the customer to understand the full set of security responsibilities. In the Linux example above, customers running their own OS must patch the guest OS and applications, whereas AWS would take care of patching when its default OS is used.
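The CSPM-style configuration checks mentioned above boil down to auditing account settings against a known-good baseline. The sketch below illustrates that class of check for S3’s public-access-block settings; the dict shape mirrors what AWS returns for this feature, but the checker itself is a hypothetical example, not any particular product’s implementation.

```python
# Minimal sketch of a CSPM-style check: audit an S3 public-access-block
# configuration against the four flags AWS recommends enabling.
# Illustrative only -- real CSPM tools run hundreds of such checks.

REQUIRED_S3_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def audit_s3_public_access(config: dict) -> list:
    """Return a list of findings for an S3 public-access-block config."""
    return [
        "%s is not enabled" % flag
        for flag in REQUIRED_S3_FLAGS
        if not config.get(flag, False)
    ]

# Example: a bucket that blocks public ACLs but still allows public policies.
findings = audit_s3_public_access(
    {"BlockPublicAcls": True, "IgnorePublicAcls": True}
)
print(findings)  # flags BlockPublicPolicy and RestrictPublicBuckets
```

In practice the input dict would be fetched per bucket from the cloud API, and findings would feed an alerting or remediation pipeline.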
With even a basic understanding of why these commonly held myths do not apply, teams can gain a more accurate sense of how to plan an effective cloud native security strategy, and the cloud native journey overall.
Get the full whitepaper for all seven myths, which includes key concepts your team can refer to as you plan your cloud native journey.