
How to Install DRBD on CentOS Linux

This step-by-step tutorial demonstrates how to install Distributed Replicated Block Device (DRBD) on CentOS Linux.

LHB Community

What is DRBD?

DRBD (Distributed Replicated Block Device) is a software package for Linux-based systems. It replicates storage devices from one node to another over the network.

It helps with disaster recovery and failover. DRBD can be understood as high availability for storage hardware and can be viewed as a replacement for network shared storage.

How does DRBD work?

(Image: DRBD setup overview)

Suppose we want to cluster a storage partition on two CentOS systems. We need a block device (like /dev/sdb1) on both systems. The systems are designated as the Primary node and the Secondary node (the roles can be switched).

DRBD uses a virtual block device (like drbd0) on top of the /dev/sdb1 block devices of both systems. The Primary node is the one where the virtual device drbd0 is mounted for read/write access.

First, we install the DRBD packages, which are used to create the virtual disk drbd0. We can format the /dev/drbd0 device with an xfs or ext3 filesystem. The drbd0 device is configured to use the /dev/sdb1 block devices on both systems. From then on, we work only with the drbd0 device.

Since drbd0 can only be mounted on the Primary node, its contents are accessible from one node at a time. If the primary system crashes, we may lose that system, but the replicated data is still available: we can promote the formerly secondary node to primary and access its contents again.

Using DRBD on CentOS

This tutorial was performed on CentOS 7, but it should work on other CentOS versions as well.

Requirements

  • Two systems with CentOS installed
  • A free block device (like /dev/sdb1) on both systems, preferably of the same size
  • SELinux set to permissive or disabled
  • Port 7788 allowed through the firewall
  • Both nodes on the same network
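The SELinux and firewall requirements can be satisfied with commands along the following lines, run as root on both nodes (a sketch assuming firewalld is in use, the default on CentOS 7):

```shell
# Switch SELinux to permissive mode for the current boot
setenforce 0
# Make the change persistent across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Open TCP port 7788 for DRBD replication traffic
firewall-cmd --permanent --add-port=7788/tcp
firewall-cmd --reload
```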

Installation

Here we begin the installation by adding the ELRepo repository, since DRBD packages are not available in the default CentOS repositories.

$ rpm -ivh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

Also import the ELRepo GPG key on both nodes. The GPG key is the public key used to verify the signatures of packages from the repository.

$ rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org

Now we can use yum to install the DRBD packages. We must identify the DRBD version supported by our kernel. Check the DRBD versions available for your kernel:

$ yum info *drbd* | grep Name

The output lists the available package names, such as drbd84-utils and kmod-drbd84.

Now install the required version of DRBD along with the necessary kernel module.

$ yum -y install drbd84-utils kmod-drbd84

Verify whether the kernel module is loaded or not.

$ lsmod | grep -i drbd

If the above command gives empty output, the kernel module is not loaded. Load it manually:

$ modprobe drbd

modprobe is a command that intelligently adds or removes modules from the Linux kernel. To load the module automatically at each boot, the systemd-modules-load service is used. So, create a file called drbd.conf inside /etc/modules-load.d.

$ echo drbd > /etc/modules-load.d/drbd.conf

Configuring DRBD

DRBD configuration files are located at /etc/drbd.d/

By default, /etc/drbd.d/global_common.conf is available; it contains the global or common configuration. The other configuration files are called resource files and have the *.res extension.

Now we create the resource configuration file on both nodes so that DRBD uses our specified block devices.

Let's create the resource file named linuxhandbook.res

$ vi /etc/drbd.d/linuxhandbook.res

Copy and paste the content below into the resource file:

resource linuxhandbook {
    protocol C;
    on node1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.20.222.14:7788;
        meta-disk internal;
    }
    on node2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.20.222.15:7788;
        meta-disk internal;
    }
}

Here,

  • linuxhandbook is the resource name. Resource names must always be unique.
  • protocol C selects synchronous communication; it is the fully synchronous replication protocol. The other available protocols are protocol A and protocol B.
  1. protocol A: asynchronous replication protocol, generally preferred for nodes on long-distance networks.
  2. protocol B: semi-synchronous replication protocol, also called the memory synchronous protocol.
  3. protocol C: preferred for nodes on short-distance networks.
  • node1 and node2 are the hostnames of the individual nodes, used only to identify which block applies to which node.
  • device /dev/drbd0 is the logical device created for use as the DRBD device.
  • disk /dev/sdb1 is the physical block device that drbd0 will use as its backing disk.
  • address 10.20.222.14:7788 and address 10.20.222.15:7788 are the IP addresses of the two respective nodes, with TCP port 7788 open.
  • meta-disk internal tells DRBD to store its metadata on the backing disk itself.
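Since the resource file refers to the nodes by hostname, each hostname must resolve to the address given in the file. If DNS is not set up, entries like the following can be appended to /etc/hosts on both nodes (a sketch reusing the IP addresses and hostnames from the resource file above):

```shell
# Add name resolution entries for both DRBD nodes (run as root on each node)
cat >> /etc/hosts <<'EOF'
10.20.222.14    node1
10.20.222.15    node2
EOF
```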

The configuration must be the same on both nodes.

Now, we need to initialize the metadata storage on each node:

$ drbdadm create-md linuxhandbook

If this gives an error message, zero out the beginning of the device manually, then try the above command again.

$ dd if=/dev/zero of=/dev/sdb1 bs=1024k count=1024

The dd command writes a stream of zeros over the device, destroying any leftover filesystem signatures or old metadata. The create-md command should then succeed.
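As an alternative to dd, leftover filesystem signatures on the backing device can be cleared with wipefs (a destructive command; double-check the device name before running it):

```shell
# Erase all filesystem and RAID signatures from the backing device
wipefs -a /dev/sdb1
```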

Once the metadata is created, attach the drbd0 device to the sdb1 disk on both nodes. Check the output of lsblk:

$ lsblk

In the output, drbd0 should appear underneath sdb1.

If not, then attach the drbd0 device to sdb1 disk through resource file.

$ drbdadm attach linuxhandbook
or
$ drbdadm up linuxhandbook

Once again, try:

$ lsblk

Start and enable the drbd service on both the nodes.

$ systemctl start drbd
$ systemctl enable drbd

The drbd service may start quickly on one node and take some time on the other, because it waits for its peer node to connect.

Setting up Primary and Secondary nodes

DRBD uses only one node at a time as the primary node, where reads and writes can be performed.

We will first make node1 the primary node.

$ drbdadm primary linuxhandbook --force

Check the status of drbd process:

$ cat /proc/drbd 
or 
$ drbd-overview

From the output, we can see:

  • which node is currently primary and which is secondary,
  • the data synchronization progress,
  • the DRBD disk state, such as Inconsistent, UpToDate, or Diskless.
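A status line in the /proc/drbd format looks roughly like the sample below (the values are illustrative; your flags and counters will differ). Its fields can be picked apart with standard shell tools:

```shell
# Sample status line in the /proc/drbd format (illustrative values)
line="0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----"

# cs: connection state, ro: roles (local/peer), ds: disk states (local/peer)
conn=$(echo "$line" | grep -o 'cs:[^ ]*' | cut -d: -f2)
roles=$(echo "$line" | grep -o 'ro:[^ ]*' | cut -d: -f2)
echo "$conn"   # Connected
echo "$roles"  # Primary/Secondary
```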

The other node, node2, is automatically set as the secondary node. You can run drbd-overview there to confirm its status.

The major step we haven't yet performed is formatting the drbd0 device. This is done on only one of the nodes.

Here, we format drbd0 as ext3 with the mkfs command. An xfs filesystem also works.

$ mkfs -t ext3 /dev/drbd0

Now, on the primary node (node1 in this tutorial), we must mount the drbd0 device to be able to work on it.

$ mount /dev/drbd0  /mnt 

You can choose your required mount point instead of /mnt. For example, you could mount the /dev/drbd0 device at /var/lib/mysql to use it for a replicated MySQL database.

NOTE: Always remember the order of operations. First, make the node primary in DRBD; then mount the drbd0 device, and you can perform actions on the device. Without making the node primary, you can neither mount the drbd0 device nor use its contents.

Testing DRBD process

DRBD has now been set up on both nodes, one node has been made primary, and we mounted the device at /mnt. Now create a file to test the synchronization between the DRBD nodes.

$ touch  /mnt/drbdtest.txt
$ ll /mnt/

After this, we will make node1 secondary and node2 primary. The process is simply mirrored. On node1 (currently the primary node), unmount the /dev/drbd0 device and make the node secondary. On node2 (currently the secondary node), make it the primary node and mount the device at the required location.

At node 1:

$ umount  /mnt
$ drbdadm secondary linuxhandbook

At node 2:

$ drbdadm primary linuxhandbook
$ mount /dev/drbd0  /mnt

After a successful mount on node2, check the files in the /mnt folder. You should see the drbdtest.txt file (created on node1).

$ ll  /mnt/

If you want a GUI to manage and visualize DRBD cluster nodes, you can use LCMC (Linux Cluster Management Console).

That's it! You have successfully implemented DRBD on your system. If you have any queries, let us know in the comment section below.

Author: Rishi Raj Gautam is a Linux lover and an open-source activist.
