Container Security Best Practices for Conscientious DevOps

As part of the DevOps team, I’m sure you’re already a fan of containers: the way they’ve eliminated the pain of environment-related configuration challenges, and reduced your infrastructure requirements by being so much more lightweight than full-blown VMs. But the very thing that makes them so lightweight – sharing the host’s kernel – also gives rise to potential security issues.

It’s important to bear in mind that most of the scaremongering regarding container security really relates to the technology’s relative immaturity – Docker is just about 3 years old, after all! Active work continues to plug potential weaknesses, both within the container engines themselves and through third-party tools that improve container management and security.

There are, however, steps you can take to maximize the security of your containerized environments:

Own It!

Rapid DevOps deployments will not succeed if security is a hot potato that’s passed down the chain. In a recent webinar, we polled attendees and asked, “Who owns container security in your organization?” The results were split between security teams, DevOps teams, shared responsibility, and no one/unknown.

In most cases, the correct model is a shared one, but responsibilities must be well-defined and not nebulous.

Scan frequently

A container image includes all of its dependencies – but what if those dependencies are not secure? There are several image scanning solutions on the market, some free or open-source. It is best practice to apply a vulnerability management system that does the following:

  1. Scans the image as part of the build process, before it is pushed to the registry (a minimal example of this step follows this list).
  2. Continues to scan the image while it sits in the registry – this ensures that your saved images are not affected by newly discovered vulnerabilities.
  3. Tracks vulnerabilities in running containers. When new vulnerabilities are disclosed, make sure you have the tools in place to detect whether they put your environment at risk.
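
To make step 1 concrete – this is only a sketch, using the open-source Trivy scanner and a placeholder image name – a CI step can fail the build on high-severity findings before anything reaches the registry:

  # Scan the freshly built image; a non-zero exit code fails the CI job
  trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:build-1234

  # Push to the registry only if the scan passed
  docker push registry.example.com/myapp:build-1234

The same scanner (or a commercial equivalent) can then be pointed at the registry on a schedule to cover step 2.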

If you have an active image pipeline, it’s a good idea to invest in a dedicated container scanning solution, such as the one Aqua provides.

However you do it, scan your images, and scan frequently!

Revoke privileges

“With great power comes great responsibility” is a useful motto for both superpowers and root privileges. The only element of Docker that needs root access is the Docker daemon, so you’ll want to be extremely careful about who has access to it. Make sure Docker admin users get access based on their role. Solutions like Aqua allow you to segregate user access to the Docker daemon – for example, users with the "Auditor" role will only be able to view container logs.
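
A quick, low-tech place to start – assuming a standard Linux host where daemon access is granted via the docker group and the default Unix socket – is simply auditing who can talk to the daemon at all:

  # Anyone in the docker group can effectively act as root on the host
  getent group docker

  # Check ownership and permissions on the daemon's Unix socket
  ls -l /var/run/docker.sock

Role-based controls like the "Auditor" example above then refine this further, so that daemon access doesn’t automatically mean full control.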

In addition to revoking permissions from users, you should also make sure containers do not run with root privileges. There’s always a risk of container breakout, so you need to make sure that if this happens, you’re not compromising all the other running containers, or the host system itself. Thankfully, as of version 1.10, Docker supports User Namespaces, so that a container’s root user can map to a non-root user on the host. This feature isn’t enabled by default, though, so you’ll need to switch it on.
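
As a sketch of what enabling the feature involves (the file path and restart command assume a systemd-based Linux host using the default daemon configuration location):

  # /etc/docker/daemon.json – remap container root to an unprivileged host user
  {
    "userns-remap": "default"
  }

  # Restart the daemon to apply the change; existing containers
  # must be recreated under the new mapping
  sudo systemctl restart docker

Independently of namespaces, you can also drop root inside the image itself with a USER instruction in the Dockerfile, or override the user at launch with docker run --user.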

Hide secrets

With container security, we’re mostly concerned about the risk of containers gaining access to other containers, or to the host system. However, compromised secrets – API tokens, private keys, and usernames/passwords – could give malicious parties access to external services outside your containerized environment. Container images that expose your secrets are always a bad thing, but they become far more dangerous if pushed to Docker Hub.

The commonly used ‘solution’ of storing secrets in environment variables is also inadequate, as environment variables can easily be leaked or written to log files. Best practice is to inject secrets at run time, when you docker run your images. Even better is using a dedicated secret store service, such as Vault.
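
As a minimal illustration (the image name and file paths are placeholders), the difference is between baking a credential into the image and handing it to the container only when it starts:

  # Avoid: a secret written into the image at build time, e.g. via
  #   ENV API_TOKEN=abc123   (it ends up in the image layers and history)

  # Better: mount the secret read-only at run time, from outside the image
  docker run --rm \
    -v /secure/creds/api_token:/run/secrets/api_token:ro \
    myapp:latest

A dedicated store such as Vault goes a step further by letting the application fetch short-lived credentials itself, rather than reading a static file from disk.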

Remove unused components/images

It’s easy to lose track of all your containers, particularly if you’re running a cluster, and end up inadvertently running older versions of your images that expose weaknesses, or contain vulnerable components you’ve already fixed.

There are two sides to risk mitigation here: using a container management tool, such as Kubernetes, and good old-fashioned housekeeping to make sure you’re continually deleting out-of-date versions of your images and their dependencies.
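
On the housekeeping side, Docker’s built-in prune commands cover the basics; the retention window below is just an example:

  # Remove stopped containers and dangling images
  docker container prune -f
  docker image prune -f

  # Remove all unused images older than a week (168 hours)
  docker image prune -a -f --filter "until=168h"

An orchestrator then takes care of ensuring that only the intended image versions are actually scheduled to run.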

Avoid SSH, use automation

You should avoid installing an SSH daemon inside containers at all costs. If you’re primarily SSH-ing into the container to perform routine tasks, consider whether you could automate these tasks instead – bash scripts and a user-level cron in the simplest case.
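
As a simple illustration (the container name and script path are invented), one-off tasks can be run from the host with docker exec, and recurring ones scheduled with cron – no sshd inside the container at all:

  # One-off task, run from the host
  docker exec my-app tail -n 100 /var/log/app.log

  # Host crontab entry: nightly maintenance inside the container
  0 3 * * * docker exec my-app /usr/local/bin/rotate-logs.sh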

It may feel like a loss of control, but closing off SSH really is one of your best available defenses against both bots and human attackers.

These are the proactive steps you can take to secure your containerized environments. It is true, however, that the short lifetime of Docker has meant that its built-in management and security tools are still a little immature. That’s why it’s a good idea to invest in a dedicated tool, such as Aqua, which ‘fills the gaps’ and ensures that your containers are just as secure as dedicated VMs.

Amir Jerbi

Amir is the Co-Founder and CTO at Aqua. Amir has 20 years of security software experience in technical leadership positions. He co-founded Aqua with the vision of creating a security solution that would be simpler and lighter than traditional security products. Prior to Aqua, he was a Chief Architect at CA Technologies, in charge of the host-based security product line, building enterprise-grade security products for Global 1000 companies. Amir has 14 cloud and virtual security patents under his belt. In his free time, Amir enjoys backpacking in exotic places.