
Introduction to DRBD 8.x


As most people know, the internet is not infallible. Servers go up and down all the time due to hardware failures, configuration mistakes, network problems, and so on. Many of these failures are unpredictable and can cause headaches for infrastructure owners and administrators. To help counter these potential issues, the concept of “High Availability”, or HA for short, was introduced. This whole design philosophy hinges on replicating network routes and website data so that there is no single point of failure.

The implementation of these new types of design drove the creation of replication tools that could maintain data integrity and enable failover in case of a server crash. DRBD (Distributed Replicated Block Device) is one such tool, often used in clustered environments. It creates a network-replicated partition that can be activated or deactivated on several hosts. When paired with a program like heartbeat or pacemaker, it can activate itself on a different server when one of the nodes goes down. This ensures that the data on the DRBD partition is always available on one of the nodes.
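
As a small illustration, assuming a resource named r0 backed by the device /dev/drbd0 (both names are placeholders), activating and deactivating the partition on a node essentially comes down to promoting or demoting the resource:

    # Promote the resource to Primary on this node and mount the device
    drbdadm primary r0
    mount /dev/drbd0 /mnt/data

    # Demote it back to Secondary (the device must be unmounted first)
    umount /mnt/data
    drbdadm secondary r0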

Tools like DRBD become necessary as soon as you start considering High Availability. This is recommended for critical business architectures that cannot afford any downtime, and it is well suited to clusters whose services need their data replicated between nodes. That is why you need at least two servers when you want to use DRBD, and each of these servers should have at least one free partition that DRBD will replicate.

To set up DRBD, you will first want to install the packages provided by your Linux distribution on both nodes you want to replicate. From there, you'll want to proceed with the configuration so that you use the right type of replication (more on that later) and that both servers have the correct shared secret to connect to each other. As I will not go further in depth on DRBD installation, you can look at the documentation here for more details: http://www.drbd.org/users-guide-8.4/ .
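
As a rough illustration rather than a definitive configuration, a minimal resource definition could look like the sketch below. The resource name r0, the host names node1 and node2, the backing partition /dev/sdb1 and the addresses are all placeholders to adapt to your own environment:

    resource r0 {
      protocol C;                   # synchronous replication (see the protocol discussion below)
      net {
        cram-hmac-alg sha1;         # authenticate the peers with a shared secret
        shared-secret "some-secret";
      }
      on node1 {
        device    /dev/drbd0;       # the replicated block device DRBD exposes
        disk      /dev/sdb1;        # the free local partition backing it
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

Once this file is identical on both nodes, the metadata is created and the resource brought up with drbdadm create-md r0 followed by drbdadm up r0; the initial synchronization and promotion to primary are covered step by step in the users guide linked above.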

One thing to keep in mind when configuring DRBD is the type of replication you want it to do. It offers three replication protocols, conveniently named A, B and C:

Protocol A is mostly used for replication between two nodes that do not share the same physical location. Basically, any “write” made to the DRBD partition is considered complete once it has been written locally and the replication packet has been handed off to TCP for sending to the remote server. It is great for performance over long distances, but it may cause data loss in case of network instability or a forced failover.

Protocol C is the most commonly used of the three replication protocols. It is a synchronous replication option: a “write” to the partition is only considered complete once both nodes have confirmed that it has reached their disks. Since every write has to make a round trip, this can only be done effectively in an environment with very little latency between the nodes.

Protocol B is a mix of protocols A and C. The main difference is that local “writes” are considered complete once they have been written to the local partition and the remote server confirms it has received the replication packet (in memory, not yet on its disk). Data loss is less likely than with protocol A, and write times may be better than with protocol C in an environment with more latency. That said, if the partition on the primary node were destroyed and the secondary found itself unable to write the data to disk, the most recent changes could be lost.
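
Whichever protocol you settle on, it is selected with a single directive in the resource definition. Reusing the hypothetical r0 resource from earlier, switching a geographically split pair to asynchronous replication would look roughly like this:

    resource r0 {
      protocol A;    # A = asynchronous, B = memory-synchronous, C = fully synchronous
      # ... the rest of the resource definition is unchanged ...
    }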

One issue that can happen with DRBD, as with many replication technologies, is what we call a split-brain. A split-brain can occur when both servers lose their connection to each other for some reason and each starts considering itself the primary, active node. Each node then writes new data separately, without being able to communicate with the other. Since the nodes are no longer identical, the data written on each one is likely valid, but it will differ. If the link between the nodes comes back while the replication is in this state, DRBD will automatically cut the connection. The administrator will then need to choose one node where the data will be kept and one node where it will be discarded. DRBD is then resynchronized from the server holding the good data, and the split-brain is resolved.
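
For reference, and again assuming a resource named r0, manual split-brain recovery essentially comes down to telling the node whose changes you are sacrificing to discard its data and reconnect, then reconnecting the survivor if needed. The exact option syntax differs slightly between DRBD 8.3 and 8.4, so check the users guide for your version; on 8.4 it looks roughly like this:

    # On the split-brain victim (the node whose recent changes will be discarded)
    drbdadm secondary r0
    drbdadm connect --discard-my-data r0

    # On the surviving node, if it is still in StandAlone state
    drbdadm connect r0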

As said previously, DRBD can be combined with heartbeat to create a setup that fails over automatically. It gets even more interesting when you add programs like mysql to this setup. For example, it is possible to use a DRBD partition as the directory holding mysql's database files. If the primary mysql server went down, heartbeat would be able to move a floating IP to the secondary DRBD node and mount the DRBD device as the data directory on that node. That is one high-availability architecture example, and the same can be achieved for several other web services, like nfs or apache.
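
To make this concrete, here is a rough sketch of what the cluster manager effectively does on the surviving node during such a failover. The resource name r0, the mount point /var/lib/mysql, the floating IP and the service name are assumptions for the sake of the example, not a copy of any particular heartbeat configuration:

    # Promote the local DRBD resource and mount it where mysql expects its data
    drbdadm primary r0
    mount /dev/drbd0 /var/lib/mysql

    # Bring up the floating IP and start the service
    ip addr add 10.0.0.100/24 dev eth0
    service mysql start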