
How to harden Docker images to enhance security

Strong Docker image security is paramount to fighting off breaches. Watch this video to learn multiple ways you can harden deployments and deter attackers.


Containers are becoming commonplace in data centers and cloud tenancies across the globe.

With their surge in adoption, it's wise to keep security at the forefront when working with these systems. Whether you're developing containers for your company or deploying containers created by other teams, knowing how to harden these deployments is important.

Let's look more closely at five ways to harden Docker images.

Restrict network port accessibility

The first recommendation for securing containers involves network ports. During the creation of a container, a developer might allow access to additional network ports for troubleshooting or debugging purposes. That's fine during development, but remove access to those extra ports once the image moves into production or any public internet-facing environment.

Use the -p parameter when using the Docker command-line interface (CLI) to set strict limits on host-to-container port mappings. Any ports not specified in this command will be inaccessible, even if the Dockerfile exposes them.
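For example, assuming an image named 'my-web-image' whose Dockerfile exposes both ports 22 and 80, publishing only the web port might look like this:

# Publish only port 80 to the host; any other exposed ports stay unpublished
docker run -d -p 80:80 my-web-image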

Limit build data

During the container Dockerfile build process, it's common to have log files, API secrets and other data that won't be in the final container image. To prevent these files from being included in the build context, use the .dockerignore file to explicitly leave out particular files or directories from the build process. This protects any secret data or credentials from accidental leaks.
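As a simple sketch -- the file names below are illustrative -- a .dockerignore might look like this:

# .dockerignore: keep source control data, logs and local secrets out of the build context
.git
.gitignore
*.log
secrets.txt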

Keep image size small

The third Docker image hardening method is to update the base image to a "slim" or Alpine Linux container image. With fewer system files and applications in the container image, there are fewer components susceptible to hacking attempts, which also reduces the lateral network movement options available to an attacker.
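As a sketch of that change, assuming a Debian-based web server image like the one in the video, only the FROM line needs to be updated:

# Switch from the full Debian base image to the slim variant
FROM debian:stable-slim
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*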

In addition to reducing the container's image size, run containers in strict read-only mode in production. Any web service that faces the public internet should have no unnecessary writable locations on disk. Instead, the service should rely on secure network connections to databases to store and manage customer data.
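A hedged example of such a run, assuming an Apache-based image named 'my-web-image' that only needs writable scratch space for its runtime, lock and log directories:

# Read-only root filesystem; tmpfs mounts provide the few writable paths Apache needs
docker run -d --read-only \
  --tmpfs /var/run/apache2 \
  --tmpfs /var/lock/apache2 \
  --tmpfs /var/log/apache2 \
  -p 80:80 my-web-image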

Reduce exposure

The fourth measure is proper network segmentation for an application's architecture. Rather than deploying all containers into a flat network, separate public-facing services -- for example, web servers -- from back-end services such as databases.

Database containers do not need to be exposed to the public internet; they should only have a narrow network path that the web services can communicate over. When the databases aren't exposed to the internet, the risk of security breach attempts decreases.
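With the plain Docker CLI, that separation can be sketched with user-defined networks; the container and image names here are assumptions:

# Create separate front-end and back-end networks
docker network create frontend
docker network create backend

# The web container joins both networks; only its web port is published
docker run -d --name web --network frontend -p 80:80 my-web-image
docker network connect backend web

# The database joins only the back-end network and publishes nothing
docker run -d --name db --network backend my-database-image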

Use Docker Compose

The final way to harden Docker containers is to tie everything together with a Docker Compose file. In the video example, the read-only parameters and temporary file system locations are set in the Compose file, the published ports are hard-coded to only what needs to be publicly accessible and new network components are defined. Most important, though, is the logical separation between the front-end and back-end networks.

By segmenting the network this way, we can restrict a public endpoint's or public user's access to only the front end. The database network is restricted to container-to-container communication over specific links. Security increases because no public user can connect to the database, only the specified containers.
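As a minimal sketch, a docker-compose.yml approximating the setup described in the video -- the service names, database image and tmpfs paths are assumptions -- might look like this:

# docker-compose.yml (sketch)
services:
  web:
    build: .                   # build the web image from the current directory
    read_only: true            # read-only root filesystem
    tmpfs:
      - /var/run/apache2       # writable scratch space for the web server
      - /var/lock/apache2
      - /var/log/apache2
    ports:
      - "80:80"                # only the web port is published
    networks:
      - frontend
      - backend
  db:
    image: my-database-image   # placeholder database image
    networks:
      - backend                # never attached to the public-facing network
networks:
  frontend:
  backend:

Only the web service publishes a port; the database is reachable solely over the shared back-end network.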

Transcript

Dave Pinkawa: Welcome to this video on how to harden your Docker containers in a security-conscious way.

00:05

Throughout this video, we're going to be working on hardening this example Dockerfile. This is a web server: on the Debian-based image, we're installing the Apache2 package and then making some minor adjustments to our running container. At the bottom here, you can see we are exposing ports 22 and 80 by default.
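The Dockerfile on screen is roughly the following -- reconstructed as a sketch from the description, so the details may differ from the video:

# Debian-based web server image exposing both SSH and HTTP ports
FROM debian:stable
RUN apt-get update && apt-get install -y apache2
EXPOSE 22 80
CMD ["apache2ctl", "-D", "FOREGROUND"]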

00:24

Consider, when pushing this to production, that we do not want to publish any of these non-standard ports that are not web-based. In this instance, Port 22 may be some sort of development-related port that we should not make publicly accessible or allow any inbound traffic to. Now we're going to build our image based on this Dockerfile. Once that's completed, we're going to use the Docker CLI to run this plain container and publish all ports.
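Those two steps are approximately the following commands; the 'web:v1' tag is illustrative:

# Build the image, then run it publishing all exposed ports (-P)
docker build -t web:v1 .
docker run -d -P web:v1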

00:57

And here is the crux of the situation that we want to resolve. You can see that both Port 22 and Port 80 have been published and are now publicly accessible. When pushing to production, we only want to publish those ports that are vital for our application -- in this case, Port 80. We can use the Docker CLI again with the 'docker run' command and the lowercase 'p' parameter, which specifies which ports we actually want to publish.

01:25

Any of those ports not published in this way will still be exposed for container-to-container communication but will not be publicly accessible from outside the container. By reducing which ports are exposed on your container, you're effectively increasing security because none of those additional services will be accessible. In addition to the port specifications as a security measure, we can also secure the Docker build process itself.

01:53

To do this, we're going to implement something called a .dockerignore file. What this says is that, in the context of our container creation, the Docker build will explicitly ignore a certain subset of folders or files. In this example, we are creating the .dockerignore file, which specifies which folders and files we want to exclude from our build process.

02:18

In our working example here, we have the .gitignore file. We don't want to include any of our source control files that might unnecessarily end up in the context of our container build. In addition, I have a file full of secret information that I do not want put into the context of our build pipeline. To see the changes in our Docker build context, we're going to build this image now with a different tag of v2, and then compare it against our v1-tagged build image.

02:50

You'll see that the context size has been drastically reduced on our v2 because we've excluded both of these items. Our Docker build v1, our original context without the .dockerignore file, was 162 kilobytes. With the .dockerignore file in place, our v2 version of this container has a context size of only four kilobytes. By reducing the build context of our containers, we're effectively removing any potential secret information that is there unnecessarily.

03:23

Following the same train of thought to reduce our container's overall footprint or size, we can also use what are called slim images, or any of the BusyBox- or Alpine-based images that might apply to your particular use case. In our case, the Debian image does have a stable-slim option on Docker Hub. By updating our Dockerfile to reflect this new image choice, let's go ahead and build it and see what that difference in size looks like.

03:56

As you can see here, based on our container sizes through each of our iterations, the v3 slim version of our container is about 50 megabytes smaller than our previous two iterations. By running one of these slim images, you're removing any sort of bloat or unnecessary packages from the image. This could be binaries, man pages or any additional packages that would be a potential security risk if left within the container and that container were breached at some point.

04:29

The fourth option for securing your containers when pushing to production is running them in a read-only state. Depending on the software you're running, you may need to mount some temporary file system space. In the case of Apache2, the web server does need to be able to write to several locations on the file system. So, we've mounted a temporary file system -- using Docker's tmpfs mount option -- to provide this scratch space for the web service.

04:57

Coming up in just a moment, we will also see how to implement this within a Docker Compose file, because typing this out in the Docker CLI is quite a pain.

05:11

With our typo fixed, let's get this container up and running. And now with our container running, let's go ahead and see if that read-only flag has actually been implemented. First things first, I'm going to connect into our container as the root user in an interactive way -- here, I'm just going to open up a terminal as root. Then I'm going to attempt to create a test file in /etc. We can see that I am not allowed, so it is a read-only file system.

05:44

Now, when I attempt to touch and create a file in one of the locations I hard-coded and specified in my 'docker run' command, we can see that I do in fact have the ability to write. So, this is a temporary file system -- one that will be removed once this container is stopped or killed but will persist as long as the web server needs it to run.
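That verification looks roughly like this, assuming the running container is named 'web':

# Open a root shell in the running container
docker exec -it web /bin/bash

# Inside the container: this should fail because the root filesystem is read-only
touch /etc/testfile

# This should succeed because the path was specified as a tmpfs mount in the 'docker run' command
touch /var/run/apache2/testfile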

06:09

And for our last example of how to harden our container, I just want to wrap it all together with a nice Docker Compose file. So again, you can see our read-only parameters and the temporary file system locations that we previously had to pass via the Docker CLI are now set here as well. Our ports are hard-coded specific to what needs to be publicly accessible, and we have created these new network components as well. Our Docker build is going to be the current directory we're in -- that is going to be our web service. We also have a link to the database network and the database container. But most important is the logical separation between our front-end and back-end networks.

06:51

By segmenting out our network in this way, we can specify that the front end is the only one accessible by a public endpoint or public user. The database network will be restricted to container-to-container communication over specific links. This logical segmentation provides a level of security because no public user will be able to connect into our database -- only the specified containers. So let's go ahead and run 'docker compose up' with the '-d' parameter, which runs it in the background. As we can see, both of our images have come up successfully. The ports that are published versus exposed are as they have been throughout our examples, and those containers are configured as we've gone through this entire exercise.
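The closing commands are roughly as follows; depending on the Compose version installed, the command may be 'docker-compose' rather than 'docker compose':

# Start both services in the background, then confirm they are running
docker compose up -d
docker compose ps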

07:40

I hope this video has been very helpful and informative on how to harden your Docker containers. Thank you!
