
Fortify Docker image security with these 5 tips

Docker images are complex by many measures, but hardening them before deployment alleviates plenty of worry at runtime. Implement these best practices to fortify your images.

With virtualized workloads exploding in popularity, cyber attacks have risen in lockstep. Although containerized applications are more secure than ever, DevOps professionals must work proactively to counteract these threats before they become dangerous.

Docker remains the dominant container runtime, and Docker images are diverse -- yet far from impenetrable. In 2019 alone, each of Docker's 10 most popular images contained 30-plus vulnerabilities, with the official node image responsible for a massive 580 OS vulnerabilities.

To mitigate such risks, follow these best practices to secure individual Docker containers and your overall Docker ecosystem.

1. Minimize images

Before you choose an image and runtime OS, ensure the preferred image is functionally relevant. Familiar technologies are optimal for a dev team. But what if multiple base images can get the job done?

Use the smallest base image possible -- and preferably not a full-blown OS image, as these types of images contain numerous libraries, dependencies and tools. While that added flexibility is beneficial, it dramatically expands the attack surface of a container.

If an image search yields nothing relevant to your project, it's best to look elsewhere. Developers can use tools such as Buildah and BuildKit to create images with custom dependencies or packages. Buildah also sheds the Docker daemon process, which benefits scaled development and greatly reduces privilege requirements -- the daemon, with its always-open socket, is a common attack vector.

With so many Linux distros out there, it's easy to find one that's more stripped down. Alpine Linux base images, for example, are only about 5 MB, packing minimal attack surface into a minuscule package. It's a fantastic option if it's a good fit.
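As a sketch, a Dockerfile built on a minimal base rather than a full OS image might look like this (the image tag, package choice and `app.py` path are illustrative assumptions, not from the article):

```dockerfile
# Minimal base: Alpine is ~5 MB, versus hundreds of MB for a full OS image
FROM alpine:3.18

# Install only the runtime packages the application actually needs
RUN apk add --no-cache python3

COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

Every package left out of the image is one less component to patch, scan and worry about.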

2. Opt for multistage builds

It's often optimal to run one image in testing and another in production. Tested images don't need compilers, a build system or debugging tools. Those tasks have been completed, and excess tool inclusion is unnecessary.

Multistage image construction simplifies long-term Dockerfile maintenance and artifact replication. A Dockerfile can contain multiple FROM statements, each beginning a new build stage -- making it easy to copy important artifacts forward and leave everything else behind. A careful selection of artifacts slashes the number of vulnerabilities present throughout the infrastructure.
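The pattern can be sketched in a Dockerfile like the following, where the compiler exists only in the first stage and the final image carries just the built binary (stage names, paths and the Go toolchain are illustrative assumptions):

```dockerfile
# Stage 1: build environment with compiler and build tools
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server .

# Stage 2: minimal production image -- no compiler, no build system
FROM alpine:3.18
COPY --from=builder /out/server /usr/local/bin/server
USER nobody
CMD ["server"]
```

Only the artifact named in `COPY --from=builder` reaches production; the entire build toolchain is discarded with the first stage.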

3. Institute proper management privileges

As with any managed system, restrict the configuration of Docker images to only trusted users. Each image should have its own dedicated groups and users. Furthermore, users with access to application configurations should only possess the requisite permissions for doing so -- and nothing else. These same users should run accompanying processes.

For example, official Node.js images ship with a built-in non-root user named node. Administrators can also run the Docker daemon itself as a non-root user. Accounts without root access have fewer privileges and can inflict less damage on the ecosystem -- accidentally or otherwise. These accounts are also less dangerous in the hands of remote attackers.
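In a Dockerfile, dropping root can be as simple as switching to a dedicated user before the application starts -- a minimal sketch, where the entry-point file and the commented-out user creation for other base images are assumptions:

```dockerfile
FROM node:20-alpine

# Official Node.js images already ship a non-root "node" user;
# for other bases, create one explicitly, e.g.:
# RUN addgroup -S app && adduser -S app -G app

WORKDIR /home/node/app
COPY --chown=node:node . .

# All subsequent instructions and the container process run unprivileged
USER node
CMD ["node", "index.js"]
```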

Rootless mode "does not use binaries with SETUID bits or file capabilities," according to Docker's documentation. Note that the newuidmap and newgidmap binaries are required; most distros ship them in the uidmap package. For rootless daemon use, also install the docker-ce-rootless-extras package via the command-line interface.
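On a distro where the prerequisites are met, rootless setup looks roughly like this (a sketch assuming a Debian/Ubuntu host; exact steps vary by distribution):

```shell
# newuidmap/newgidmap ship in the uidmap package on Debian/Ubuntu
sudo apt-get install -y uidmap docker-ce-rootless-extras

# Run the setup tool as the unprivileged user, not as root
dockerd-rootless-setuptool.sh install

# Point the client at the rootless daemon's socket
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker info
```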

4. Protect sensitive information

Data leaks are unfortunate consequences of cyber attacks, but some leaks originate inside the ecosystem itself. Bad configurations and careless cache storage can expose sensitive information to images during the build process, so avoid mounting sensitive files into builds. The Docker secrets feature is a better approach. A secret is simply a unit of data -- a password, an SSH private key, an SSL certificate or any other data that shouldn't appear in plaintext in the Dockerfile or application source code.

Container runtimes occasionally require this protected information to run properly. However, it's possible -- and recommended -- to store this data in an isolated location from which it can be pulled at runtime. Note that this feature is limited to swarm services.
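In a swarm, the flow is roughly: create the secret once, then grant individual services access to it (secret and service names are illustrative). The value is mounted as an in-memory file at /run/secrets/&lt;name&gt; inside the container:

```shell
# Store the credential in the swarm's encrypted Raft log
printf 'S3cr3t-p4ss' | docker secret create db_password -

# Only services explicitly granted the secret can read it
docker service create \
  --name webapp \
  --secret db_password \
  nginx:alpine

# Inside the container, the value appears as a file, never an env var:
#   cat /run/secrets/db_password
```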

Additionally, secrets provide an abstraction layer between containers and credentials -- which unlocks multi-environment operation. Secrets are usable with Windows containers and with services such as Nginx and WordPress, among others.

Lastly, .dockerignore files help avoid unwanted COPY instructions, preventing sensitive data from leaking into images during the build process.
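A short .dockerignore keeps credentials, local state and VCS history out of the build context entirely, so no COPY instruction can leak them (entries are typical examples, not from the article):

```
# .dockerignore -- excluded from the build context before COPY runs
.git
.env
*.pem
id_rsa*
node_modules
secrets/
```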

5. Thwart man-in-the-middle attacks

Docker images contain and share data between themselves and containers. Any system with data transportation must protect against MITM attacks -- as these communicative pathways are vectors for interception. There are several ways we can combat this.

First, all images should be signed and verified to confirm legitimate provenance. Signing prevents imposters from passing off fraudulent images and ensures images haven't been tampered with or pulled from compromised sources. Services like Notary are ideal for signing all images you work with, as they ensure authenticity through The Update Framework.
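With Docker Content Trust enabled, the client signs on push and refuses to pull unsigned tags -- a sketch, with an illustrative registry and repository name:

```shell
# Enforce signature verification for pull, run and build
export DOCKER_CONTENT_TRUST=1

# Push signs the image via Notary; the first push prompts for signing keys
docker push registry.example.com/myteam/webapp:1.0

# Pulling an unsigned tag now fails instead of silently succeeding
docker pull registry.example.com/myteam/webapp:unsigned
```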

Second, the COPY operation is a great substitute for ADD. ADD can fetch arbitrary URLs, which opens the door to tampering in transit, and it auto-extracts local archives, which invites path traversal -- access to resources outside the intended directory -- or Zip Slip -- widespread file overwrite or remote command execution via a crafted archive. Conversely, COPY only permits local file-and-directory replication from the build context.
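The difference is easy to see in a Dockerfile: prefer the explicit, inert COPY and reserve ADD for the rare case where its extra behavior is genuinely wanted (paths and URL are illustrative):

```dockerfile
# Safe: copies files from the build context, nothing more
COPY app/ /opt/app/

# Risky: a remote URL can change or be intercepted in transit
# ADD https://example.com/release.tar.gz /opt/

# Risky: local archives are auto-extracted -- a crafted tarball
# can write outside the target directory (Zip Slip)
# ADD release.tar.gz /opt/
```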
