In this blog, we will discuss Docker storage drivers.
Docker is an open-source containerization platform. It enables developers to package applications into containers—standardized executable components combining application source code with the operating system (OS) libraries and dependencies required to run that code in any environment. Containers simplify the delivery of distributed applications and have become increasingly popular as organizations shift to cloud-native development and hybrid multi-cloud environments.
Developers can create containers without Docker, but the platform makes it easier, simpler, and safer to build, deploy and manage containers. Docker is essentially a toolkit that enables developers to build, deploy, run, update, and stop containers using simple commands and work-saving automation through a single API.
It used to be that when you wanted to run a web application, you bought a server, installed Linux, set up a LAMP stack, and ran the app. If your app got popular, you practiced good load balancing by setting up a second server to ensure the application wouldn't crash from too much traffic.
Docker Storage Drivers
A Docker storage driver, also referred to as a graph driver, provides a mechanism to store and manage images and containers on the Docker host. Docker uses a pluggable architecture that supports different storage drivers, which allow our workloads to write to the container's writable layer. Docker supports several different storage drivers, and we want to pick the best one for our workloads. To choose the best storage driver, it is important to understand how Docker builds and stores images and how containers use those images. The default storage driver is overlay2, which is supported on Docker Engine – Community and on Docker EE 17.06.02-ee5 and up, but we can change it as per our requirements.
Different Storage Drivers of Docker
Below are the different storage drivers supported by Docker:
overlay2
It is currently the default storage driver.
It is supported on Docker Engine – Community and on Docker EE 17.06.02-ee5 and newer.
It is newer and more stable than its predecessor, the 'overlay' driver.
The backing filesystems for the overlay2 and overlay drivers are xfs (with ftype=1) and ext4.
overlay2 requires Linux kernel version 4.0 or higher; on RHEL or CentOS, kernel version 3.10.0-514 or higher is required. Otherwise, we have to fall back to the overlay storage driver, which is not recommended.
Both overlay2 and overlay are based on OverlayFS, a modern union filesystem in the Linux kernel. OverlayFS is similar to AUFS, but faster and simpler to implement.
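As a quick sanity check before relying on overlay2, we can verify OverlayFS support from the shell. This is a minimal sketch assuming a Linux host; no root access is required and nothing is modified:

```shell
# Check whether the running kernel exposes OverlayFS, which both the
# overlay2 and overlay drivers depend on.
if grep -qw overlay /proc/filesystems; then
  SUPPORT="overlay listed in /proc/filesystems"
else
  SUPPORT="overlay not listed (the module may still load on demand)"
fi
echo "$SUPPORT"

# overlay2 wants kernel 4.0+, or 3.10.0-514+ on RHEL/CentOS.
uname -r
```

If overlay is not listed, the kernel may still load the module on demand, so treat this as a hint rather than a definitive answer.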
aufs
AUFS is a union filesystem: it presents multiple directories, called branches in AUFS, as a single directory on a single Linux host. These directories are known as layers in Docker.
Before overlay2, it was the default storage driver used to manage images and layers on Docker for Ubuntu, and for Debian versions before Stretch; it was the default in Ubuntu 14.04 and earlier.
This driver is good for Platform-as-a-Service environments where container density is important, because AUFS can efficiently share images between multiple running containers.
It provides fast container startup and uses less disk space, since AUFS shares images between running containers.
It uses memory efficiently; however, it is not efficient for write-heavy workloads. Write latency is high because a file must first be located among the many layered directories and then copied up to the container's top writable layer before it can be modified.
It should be used with solid-state drives, which read and write faster than spinning disks, and volumes should be used for write-heavy workloads to get better performance.
The backing filesystems for AUFS are the xfs and ext4 Linux filesystems.
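The advice above about volumes can be sketched as follows. The volume name `app_data` and container name `write_heavy` are hypothetical, and the guard lets the snippet degrade gracefully on hosts where Docker is not installed:

```shell
# Route write-heavy data through a named volume so writes bypass the
# storage driver's copy-on-write layers entirely.
if command -v docker >/dev/null 2>&1; then
  docker volume create app_data
  docker run -d --name write_heavy \
    -v app_data:/var/lib/data \
    ubuntu sleep 60
  RESULT="container started with volume app_data"
else
  RESULT="docker not available on this host"
fi
echo "$RESULT"
```

Anything the container writes under /var/lib/data goes straight to the volume on the host filesystem, avoiding the copy-up penalty described above.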
devicemapper
It is a block storage driver that stores data in blocks rather than files, which makes it good for write-heavy workloads.
Device Mapper is a kernel-based framework, and Docker's devicemapper storage driver takes advantage of its capabilities, such as thin provisioning and snapshotting, to manage images and containers.
It was the default storage driver for CentOS 7 and earlier.
It supports two modes:
loop-lvm
It uses the 'loopback' mechanism to make files on the local disk appear as an actual physical disk or block device, because device-mapper only works with block devices.
It is useful for testing purposes only, as its performance is poor.
It is easy to set up, as it does not require an external device.
direct-lvm
It stores data on a separate device, so additional block devices must be attached to the host.
It is production-ready and provides better performance than loop-lvm.
Enabling direct-lvm requires a more complex setup.
We need to configure the daemon.json file to use direct-lvm mode; it has multiple options that can be set as per our requirements.
The lvm2 and device-mapper-persistent-data packages must be installed to use device-mapper.
It uses direct-lvm as its backing filesystem.
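For reference, a daemon.json sketch for direct-lvm mode might look like the following. The device path /dev/xvdf is an assumption; adjust it to the block device actually attached to your host:

```
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xvdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}
```

On RHEL/CentOS, the prerequisite packages can be installed with 'sudo yum install -y device-mapper-persistent-data lvm2', and Docker must be restarted after editing daemon.json.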
btrfs
This storage driver is also part of the mainline Linux kernel.
For Docker EE and CS-Engine, it is supported only on SLES (SUSE Linux Enterprise Server).
For Docker Engine – Community, however, it is recommended on Ubuntu or Debian.
The btrfs driver is backed by the Btrfs filesystem, a next-generation copy-on-write filesystem.
Btrfs has many features, for example block-level operations, thin provisioning, and copy-on-write snapshots; Docker's btrfs storage driver uses these features to manage images and containers.
It requires a dedicated block device formatted with the Btrfs filesystem. On SLES we do not need a separate block device, as the disk is formatted with Btrfs by default, but using an additional block device is still recommended for better performance.
Our kernel must support btrfs.
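Preparing a dedicated device for the btrfs driver could look like the sketch below. The device path /dev/xvdb is an assumption, and the guard prevents the destructive mkfs from running when no such device exists:

```shell
# Format a dedicated block device with Btrfs and mount it at Docker's
# data root (stop the Docker service before doing this on a real host).
DEV=/dev/xvdb
if [ -b "$DEV" ]; then
  sudo mkfs.btrfs -f "$DEV"              # destroys any existing data on $DEV
  sudo mount -t btrfs "$DEV" /var/lib/docker
  RESULT="mounted $DEV at /var/lib/docker"
else
  RESULT="skipped: $DEV is not a block device"
fi
echo "$RESULT"
```

On a real host you would also add the mount to /etc/fstab so it survives reboots.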
zfs
ZFS was created by Sun Microsystems (now Oracle Corporation) and is open-sourced under the CDDL (Common Development and Distribution License).
It is also a next-generation filesystem with many features, such as volume management, snapshots, checksumming, compression and deduplication, and replication.
It is not part of the mainline Linux kernel because of licensing incompatibilities between the CDDL and the GPL.
It is not recommended to use this storage driver for production workloads without substantial experience with ZFS.
It is only supported on Docker CE with Ubuntu 14.04 or higher.
It is not supported on Docker EE or CS-engine.
vfs
It is intended for testing purposes only and is not recommended for production use.
Its performance is poor, because it does not use copy-on-write: each layer is a full copy on disk.
It can run on top of any backing filesystem, which makes it useful where no copy-on-write filesystem is available.
Examples of Docker Storage Drivers
Let's look at some commands to explore Docker's storage drivers with examples:
We use the 'docker info' command to check the default driver used by Docker:
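Since the original snapshot is not reproduced here, representative output on a host whose default driver is 'overlay' would look like:

```
$ docker info | grep -i "storage driver"
Storage Driver: overlay
```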
Explanation: In the above snapshot, we can see that the 'overlay' storage driver is used by Docker.
Now, if we want to configure Docker to use 'overlay2' as the default storage driver, we can do that by editing the daemon.json file located at /etc/docker/daemon.json, as below:
Step 1: First, stop the Docker service using the below command:
$ sudo systemctl stop docker
Step 2: Add the below configuration to the daemon.json file, creating the file if it does not exist.
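A minimal daemon.json for this step looks like the following (this is the standard key for selecting a storage driver):

```
{
  "storage-driver": "overlay2"
}
```

Note that changing the storage driver makes existing containers and images inaccessible until the driver is switched back.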
Step 3: Start the Docker service again:
$ sudo systemctl start docker
We can run a container and check which driver is being used by that container:
$ docker run -d --name test_container ubuntu
$ docker inspect test_container | grep -i 'graph' -A 8
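Representative output from the inspect command above (directory IDs are abbreviated as <id>; actual paths will differ) looks like:

```
"GraphDriver": {
    "Data": {
        "LowerDir": "/var/lib/docker/overlay2/<id>-init/diff:/var/lib/docker/overlay2/<id>/diff",
        "MergedDir": "/var/lib/docker/overlay2/<id>/merged",
        "UpperDir": "/var/lib/docker/overlay2/<id>/diff",
        "WorkDir": "/var/lib/docker/overlay2/<id>/work"
    },
    "Name": "overlay2"
}
```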
Explanation: In the above example, we can see that the 'overlay2' graph driver, or storage driver, is being used by the newly created container.
There are several different storage drivers supported by Docker. We need to understand the functionality of each driver and choose the one best suited to our workloads. Three high-level factors to consider when choosing Docker's storage driver are overall performance, shared storage systems, and stability.