Improving Kubernetes Security: Upgrade Clusters to Avoid Exposure

With the move to cloud native development comes a potentially increased risk of services being exposed to the Internet, where they can easily be discovered by attackers. Combined with the fast pace of change in Kubernetes versions, there's a real risk of being one vulnerability away from a security incident. Recently I carried out some research on Kubernetes systems visible on the Internet that highlighted some potentially concerning trends.

Exposing Kubernetes to the Internet

The first objective of my research on Kubernetes systems was to explore how many clusters are directly connected to the Internet. The searches I conducted turned up over 750,000 systems that appeared to be Kubernetes API servers.

Kubernetes systems are relatively easy to identify on the Internet, as the way they configure TLS certificates for the API server contains known strings. They can also be identified by unauthenticated access to an API endpoint called /version. This endpoint is often made available without authentication in on-premises clusters and, reviewing the data visible on the Internet, over 90,000 clusters also allow it to be queried.
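To make that fingerprinting concrete, here is a minimal sketch in Go of the kind of probe involved. The address is a documentation placeholder (192.0.2.10), not a real cluster:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// API servers usually present certificates signed by a private cluster CA,
	// so verification is skipped for this read-only probe.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Placeholder address; Kubernetes API servers commonly listen on 6443 or 443.
	resp, err := client.Get("https://192.0.2.10:6443/version")
	if err != nil {
		fmt.Println("no response:", err)
		return
	}
	defer resp.Body.Close()

	// An unauthenticated cluster answers with JSON along the lines of
	// {"major":"1","minor":"24","gitVersion":"v1.24.3",...}; a locked-down
	// cluster returns a 401 or 403 instead.
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d\n%s\n", resp.StatusCode, body)
}
```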

What this tells us is that attackers will be able to identify clusters to target relatively easily and, in many cases, establish the version in use, and therefore which CVEs the cluster might be vulnerable to.

The first piece of advice for companies here is to ask: do you really need the Kubernetes API server directly accessible from the Internet? Placing it behind a VPN or jump server, or restricting the source IP address ranges that can reach it, will prevent attackers who are scanning the Internet for clusters to attack from finding your services.
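On managed services, this restriction is usually a provider API call away. The sketch below uses the AWS SDK for Go v2 to limit an EKS cluster's endpoint access; the cluster name and CIDR are placeholders, and the exact field names should be verified against the SDK version you use:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/eks"
	"github.com/aws/aws-sdk-go-v2/service/eks/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := eks.NewFromConfig(cfg)

	// Keep private (in-VPC) access on, and limit the public endpoint to a
	// single office range. "example-cluster" and the CIDR are placeholders.
	_, err = client.UpdateClusterConfig(ctx, &eks.UpdateClusterConfigInput{
		Name: aws.String("example-cluster"),
		ResourcesVpcConfig: &types.VpcConfigRequest{
			EndpointPrivateAccess: aws.Bool(true),
			EndpointPublicAccess:  aws.Bool(true),
			PublicAccessCidrs:     []string{"203.0.113.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("endpoint access update requested")
}
```

Setting EndpointPublicAccess to false entirely, and reaching the control plane over a VPN into the VPC, reduces the exposure further still.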

The importance of upgrading exposed clusters

Of course, there may be some use cases where it's necessary to make the API server available, and in those cases it's essential to ensure that the version of Kubernetes is updated to address any CVEs. It's also very important to ensure that your cluster stays within the support lifecycle of the distribution; otherwise, when new patches are released, they won't be available for your clusters, necessitating a rushed upgrade to a newer release.

Looking at the cases where the cluster version was visible on the Internet shows that this is a challenge for cluster operators: 26% of clusters with a visible version number were running versions of Kubernetes that were likely to be unsupported.
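As a rough illustration of that check, the sketch below classifies the "minor" field returned by /version against a hard-coded support floor. The floor value is an assumption for illustration and moves with every Kubernetes release, so in practice it would need to track the upstream release calendar:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// oldestSupportedMinor is an illustrative assumption; upstream Kubernetes
// supports roughly the three most recent minor releases at any given time.
const oldestSupportedMinor = 24

// isSupported reports whether a /version "minor" value (e.g. "24", or "24+"
// as some distributions report for patched builds) meets the floor.
func isSupported(minor string) (bool, error) {
	n, err := strconv.Atoi(strings.TrimSuffix(minor, "+"))
	if err != nil {
		return false, err
	}
	return n >= oldestSupportedMinor, nil
}

func main() {
	for _, m := range []string{"19", "24+", "26"} {
		ok, err := isSupported(m)
		if err != nil {
			fmt.Println("unparseable minor version:", m)
			continue
		}
		fmt.Printf("minor %s supported: %v\n", m, ok)
	}
}
```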

An important point to consider here is that this doesn't apply only to self-managed clusters. In fact, the bulk of the unsupported versions visible on the Internet appeared to be AWS EKS clusters. Whilst managed Kubernetes services remove the requirement to carry out day-to-day management of the control plane, the customer is still responsible for planning and triggering upgrades.

Doing this in a timely fashion is also important as, according to cloud provider documentation, providers will automatically upgrade your clusters at some point after they reach the end of support. Depending on the versions involved, this could cause availability issues for your cluster workloads, as there are sometimes breaking changes involved in Kubernetes upgrades (notably, the upgrade to 1.16 removed several deprecated APIs, which would break workloads targeting those deprecated endpoints).
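One way to gauge that risk before an upgrade is to ask the API server whether it still serves a group/version that a later release removes. This sketch uses client-go's discovery client, assuming a kubeconfig at the default path; note that it only shows what the server serves, not which workloads still depend on it:

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// extensions/v1beta1 is one of the group/versions removed in 1.16.
	if _, err := dc.ServerResourcesForGroupVersion("extensions/v1beta1"); err != nil {
		fmt.Println("extensions/v1beta1 not served; the 1.16-era removals don't apply here")
	} else {
		fmt.Println("extensions/v1beta1 still served; audit manifests before upgrading")
	}
}
```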

Conclusions

There are a few important takeaways from this research:

1. Improve your security by not exposing information that you don’t need to
Most managed Kubernetes distributions do allow for the control plane to be available only to private networks or specific CIDR ranges, and adding this control will reduce your organization’s exposure to attack.

2. Understand the importance of having a planned upgrade cycle for all Kubernetes clusters
Ideally, plan to upgrade the cluster every three months to take advantage of new versions as they are made available. This helps reduce risk by ensuring that only single-version upgrades happen, and it also gives the teams that manage the clusters regular practice with the process. For companies where a three-month cycle isn't practical, the minimum cadence should be to upgrade every 9-12 months, to ensure that the version in use is still receiving security updates. Whilst some managed Kubernetes providers will allow a slightly longer cycle, none of them are really providing a "Long Term Support" option as yet.

3. Understand the shared responsibility model for your cloud providers
Understand which areas of cluster operation you are responsible for. In this case, my expectation is that some cluster administrators are operating under the assumption that their cloud provider will handle upgrades, but as we've seen, that is not the case.

Rory McCune
Rory was a Cloud Native Security Advocate at Aqua. He has worked in the information and IT security arena for the last 20 years in a variety of roles. He is an active member of the container security community, having delivered presentations at a variety of IT and information security conferences. He has also presented at major containerization conferences, is an author of the CIS Benchmarks for Docker and Kubernetes, and is the main author of the Mastering Container Security training course, which has been delivered at numerous industry conferences including Black Hat USA.