Docker Swarm is a native clustering tool for Docker. It allows you to create a cluster of Docker hosts and schedule containers across the cluster. One of the key features of Docker Swarm is the ability to share storage volumes across multiple containers and hosts. This can be useful in several scenarios, such as when you need to share data between containers or when you want to use a persistent storage volume available to all Swarm members.
Shared storage volumes in Docker Swarm are a powerful tool for allowing multiple services to access and share the same data.
Docker Swarm comes with a standard “local” driver out of the box. It will create new volumes for your data on whichever node your service tasks are scheduled on. However, we want something else: a shared volume that all our services can access without duplicating the data. To achieve this, we need a shared-storage driver.
The most popular options for mounting the same shared storage volume onto each Docker Swarm node are NFS, GlusterFS, iSCSI, and SSHFS.
You can use a shared volume to store application configurations, logs, or common datasets that multiple services need to access.
Let’s look at an example of creating a shared volume in Docker Swarm, first with the Docker CLI and then with a docker-compose.yml file. The Compose file will define the services using the shared volume and any additional configuration, such as environment variables or port mappings.
To build a shared storage volume in Docker Swarm, you first need to create a Docker volume. You can use the docker volume create command to do this. For example, to create a volume called myvolume, you would run the following:
$ docker volume create myvolume
Once the volume has been created, you can use it with any container in your Swarm by using the -v flag and specifying the volume name and the mount point within the container.
For example, to mount the myvolume volume at the /data directory within a container, you would run the following:
$ docker run -v myvolume:/data myimage
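In a Swarm, workloads are usually created with `docker service create`, which uses the `--mount` flag rather than `-v`. A minimal sketch (the service and image names are placeholders):

```shell
# Mount the volume into a Swarm service (myservice and myimage are
# placeholder names). With the default local driver, each node the
# task lands on creates its own copy of the volume.
docker service create \
  --name myservice \
  --mount type=volume,source=myvolume,target=/data \
  myimage
```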
With the default local driver, however, the volume is not distributed across the nodes in the Swarm: it lives only on the node where it was created, and a task scheduled on another node gets its own, separate copy. To share data between containers on different hosts, and to keep that data available when a container is stopped or rescheduled elsewhere, you need one of the shared-storage drivers covered next.
How to Use NFS, GlusterFS, iSCSI, and SSHFS for Shared Storage Volumes
Using shared storage with Docker Compose is slightly different from using standalone Docker.
With Compose, you use the volumes key in your docker-compose.yml file to specify which directories you want to mount as volumes.
Use NFS for shared storage with Docker Compose
NFS (Network File System) is a network file-sharing protocol that allows you to mount a remote file system on your local machine.
It is commonly used to share files between machines in a network, such as a home or a business network.
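In a docker-compose.yml file, an NFS share can be declared with the built-in local driver plus NFS mount options. A minimal sketch, assuming an NFS server at a placeholder address exporting a placeholder path:

```yaml
services:
  app:
    image: myimage            # placeholder image
    volumes:
      - nfsdata:/data

volumes:
  nfsdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.100,rw,nfsvers=4"   # placeholder NFS server address
      device: ":/export/shared"              # placeholder export path
```

Because the mount happens on whichever node runs the task, every Swarm node must be able to reach the NFS server.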
To use GlusterFS, iSCSI, or SSHFS, you would specify the appropriate driver in the driver field of the volume configuration.
Use GlusterFS for shared storage with Docker Compose
GlusterFS is a scalable network file system that enables you to create a single logical storage volume from multiple servers.
It is designed to handle large files and can be used for various purposes, including cloud storage, data backup and recovery, media streaming, and more.
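One common approach is to mount the Gluster volume at the same host path on every Swarm node and then bind-mount that path into containers. A sketch, assuming a Gluster volume named gv0 and placeholder hostnames and paths:

```yaml
# Assumes every Swarm node already mounts the Gluster volume, e.g. via:
#   mount -t glusterfs gluster1.example.com:/gv0 /mnt/gluster
# (hostnames and paths are placeholders)
services:
  app:
    image: myimage            # placeholder image
    volumes:
      - /mnt/gluster/appdata:/data
```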
Use iSCSI for shared storage with Docker Compose
iSCSI (Internet Small Computer Systems Interface) is a protocol for accessing block-level storage over a TCP/IP network. It lets a server attach remote storage volumes as if they were local disks; those volumes can then be created, deleted, resized, and assigned to servers or containers.
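Since iSCSI exposes raw block devices, a common pattern is to attach and mount the device on each host and bind-mount the resulting directory into containers. A rough sketch, assuming the open-iscsi tools are installed and using placeholder portal and IQN values:

```shell
# Discover targets on the SAN portal (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
# Log in to the target (placeholder IQN)
iscsiadm -m node --targetname iqn.2024-01.com.example:storage --login
# The LUN appears as a block device (e.g. /dev/sdb); mount it, then
# bind-mount /mnt/iscsi into services via the compose file
mount /dev/sdb1 /mnt/iscsi
```

Note that an ordinary filesystem such as ext4 must not be mounted read-write on multiple nodes at once; sharing the same iSCSI LUN across nodes safely requires a cluster filesystem.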
Use SSHFS for shared storage with Docker Compose
SSHFS (SSH File System) is a file system that allows you to mount a remote file system using a Secure Shell (SSH) connection.
This can be useful for accessing files on a remote server as if they were stored locally or for file transfer between servers using the command line.
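With Docker, SSHFS is usually consumed through a volume plugin; vieux/sshfs is a commonly used community plugin. A sketch with placeholder credentials and paths:

```shell
# Install the plugin on each node (grant the requested privileges)
docker plugin install vieux/sshfs
# Create a volume backed by a remote directory over SSH
# (user, host, path, and password are placeholders)
docker volume create -d vieux/sshfs \
  -o sshcmd=user@remote-host:/srv/shared \
  -o password=secret \
  sshvolume
# Use it like any other volume
docker run -v sshvolume:/data myimage
```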
How can AWS S3 be used as a shared volume in Docker Swarm?
AWS S3 can be used as a shared volume in a Docker Swarm using a third-party plugin such as rexray/s3fs. This plugin allows you to mount an S3 bucket as a local file system on your Docker host and then use the resulting volume in your Docker Swarm services.
To use rexray/s3fs in your Docker Swarm, you will first need to install the plugin on each of your Docker Swarm nodes. You can do this by running the following command on each node:
$ docker plugin install rexray/s3fs
Next, you will need to create an S3 bucket and configure the necessary permissions so the plugin can access it. Once you have done this, you can create a Docker Swarm service that uses an rexray/s3fs volume by specifying rexray/s3fs as the volume driver in your service’s volume configuration.
Here is an example of a Docker Swarm service configuration that uses the rexray/s3fs volume to mount an S3 bucket as a shared volume:
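A sketch of what such a stack file might look like; the service, image, volume, and mount path are the ones described in this section, while the exact driver options depend on how the plugin was configured at install time (check the plugin’s documentation):

```yaml
version: "3.8"
services:
  myservice:
    image: myimage          # placeholder image
    volumes:
      - myvolume:/app/data

volumes:
  myvolume:
    driver: rexray/s3fs
```

With rexray/s3fs, the backing bucket (here, mybucket in us-east-1) and AWS credentials are typically supplied as plugin settings when the plugin is installed, rather than in the stack file itself.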
This configuration will create a new volume named myvolume using the rexray/s3fs volume driver and mount it at the /app/data path in the myservice container. The mybucket S3 bucket in the us-east-1 region will back the volume.
- Setup a NFS Server With Docker
- Awesome Swarm (Bret Fisher)
- How does Docker Swarm implement volume sharing
- Tutorial: Create a Docker Swarm with Persistent Storage Using GlusterFS
- Using Storage Volumes with Docker Swarm
- How to share volume in docker swarm for many nodes. 1.
- How to share volume in docker swarm for many nodes. 2.
- Setting up a shared volume for your docker swarm using GlusterFs
- CephFS distributed filesystem
- Shared Storage (GlusterFS)