How to Achieve High Availability Load Balancing with Keepalived on Ubuntu 16


Keepalived is a daemon that drives the Linux Virtual Server (LVS) load balancing framework to maintain high availability. Load balancers allow two or more identical servers to provide service through a single floating IP address or set of addresses. When one of the servers stops responding to health checks, keepalived shifts the load to the servers that remain healthy. Perhaps the simplest keepalived configuration maintains high availability through failover: if the primary load balancer fails, the entire load shifts to a secondary one. This tutorial describes how to implement such a highly available load balancer setup on Ubuntu 16.04 with keepalived, using a “floating IP” and active/passive redundancy.

Getting Started

In order to follow this guide you will need to have the following in place:
• Two servers (Cloud Server or Dedicated Server), each running a fresh installation of Ubuntu 16.04. We will call these servers LB1 and LB2 below
• Two backend web servers reachable from the load balancers over the private network and listening on ports 80 and 443 (10.119.0.100 and 10.119.0.101 in the examples below)
• Root access to the nodes

Tutorial

For your reference, here are the servers, or load balancers, we’ll be working with, along with their respective public and private IP addresses. Where necessary, remember to replace them with the IP addresses of your own servers.

LB1
Public:173.209.49.66
Private:10.119.0.1

LB2
Public:173.209.49.67
Private:10.119.0.2

The load balancers will make use of a “floating IP”, and we’ll configure active and passive redundancy as well.

Floating
Public:173.209.49.70
Private:10.119.0.10

Install Keepalived

Before installing any software, make sure your system is up to date by executing the following commands on both servers.

apt-get update
apt-get -y upgrade

The update refreshes the package lists so that you install the most recent (stable) packages available, and the upgrade installs the latest security patches and fixes.

Ubuntu’s firewall would need additional rules to pass the VRRP traffic and load-balanced connections that keepalived relies on. So, once your system has been updated, disable Ubuntu’s firewall on both servers.

ufw disable
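Disabling the firewall keeps this tutorial simple. If you would rather leave ufw enabled, keep in mind that VRRP uses IP protocol 112, which the ufw command line cannot allow directly; a commonly used workaround is to add raw rules to /etc/ufw/before.rules. The lines below are only a sketch of that approach: they assume a stock Ubuntu 16.04 ufw layout and cover only the VRRP advertisements, so the forwarded HTTP/HTTPS traffic would still need its own rules.

# In /etc/ufw/before.rules, before the final COMMIT line:
-A ufw-before-input -p vrrp -j ACCEPT
-A ufw-before-output -p vrrp -j ACCEPT

Reload the firewall afterwards with ufw reload.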

You are now ready to install keepalived and the necessary dependencies:

apt-get install linux-headers-$(uname -r) keepalived
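You can confirm which version was installed; Ubuntu 16.04 should report a keepalived release from the 1.2.x series:

keepalived -v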

Start Keepalived on Boot

With keepalived installed, configure the server so that the daemon starts on boot. You will also need to load the ip_vs kernel module, which provides the IP Virtual Server functionality keepalived uses for load balancing. Run the following on both servers.

systemctl enable keepalived
modprobe ip_vs
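Note that modprobe loads ip_vs only until the next reboot. To have the module loaded automatically at boot as well, you can add it to /etc/modules and confirm that it is present:

echo "ip_vs" >> /etc/modules
lsmod | grep ip_vs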

Configure Keepalived

Create an empty keepalived configuration file on both servers and open it in an editor:

echo "" > /etc/keepalived/keepalived.conf
nano /etc/keepalived/keepalived.conf

We will now set up keepalived so that it uses the Virtual Router Redundancy Protocol (VRRP) to decide whether LB1 or LB2 should hold the floating IP addresses at any given moment, based on each node’s priority and health. To implement this step, save the following configuration as /etc/keepalived/keepalived.conf on LB1:

vrrp_instance VI_LOCAL {
    interface eth1
    state MASTER
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        10.119.0.10
    }
    track_interface {
        eth0
        eth1
    }
}

vrrp_instance VI_PUB {
    interface eth0
    state MASTER
    virtual_router_id 52
    priority 101
    virtual_ipaddress {
        173.209.49.70
    }
    track_interface {
        eth0
        eth1
    }
}

virtual_server 173.209.49.70 443 {
    delay_loop 4
    lb_algo sh # source hash
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}

virtual_server 173.209.49.70 80 {
    delay_loop 4
    lb_algo wrr # weighted round robin
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}
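The TCP_CHECK blocks above only verify that a backend accepts TCP connections on the given port. Keepalived also supports application-level checks; as an illustration, the snippet below swaps TCP_CHECK for an HTTP_GET check on one port-80 backend (the /healthcheck path and expected status code are assumptions, adjust them to whatever your application exposes):

real_server 10.119.0.100 80 {
    weight 1
    HTTP_GET {
        url {
            path /healthcheck
            status_code 200
        }
        connect_timeout 15
        nb_get_retry 3
        delay_before_retry 2
    }
}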

Save the following configuration as /etc/keepalived/keepalived.conf on LB2; note that only the state and priority values differ from LB1:

vrrp_instance VI_LOCAL {
    interface eth1
    state BACKUP
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.119.0.10
    }
    track_interface {
        eth0
        eth1
    }
}

vrrp_instance VI_PUB {
    interface eth0
    state BACKUP
    virtual_router_id 52
    priority 100
    virtual_ipaddress {
        173.209.49.70
    }
    track_interface {
        eth0
        eth1
    }
}

virtual_server 173.209.49.70 443 {
    delay_loop 4
    lb_algo sh # source hash
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}

virtual_server 173.209.49.70 80 {
    delay_loop 4
    lb_algo wrr # weighted round robin
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}
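Since the two files differ only in the state and priority values, you can also derive LB2’s configuration from LB1’s. A minimal sketch, assuming you have already copied LB1’s /etc/keepalived/keepalived.conf over to LB2:

sed -i 's/state MASTER/state BACKUP/; s/priority 101/priority 100/' /etc/keepalived/keepalived.conf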

The “virtual_router_id” needs to be unique for each VRRP instance defined, and it must also be unique within the VLAN: the same ID should not be used by two clusters sharing a physical switch or VLAN. For a given VRRP instance, however, the ID must match on LB1 and LB2. Valid values range from 1 to 255.
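If you suspect a virtual_router_id conflict with other equipment on the same VLAN, you can watch the VRRP advertisements on the wire; each advertisement carries the router ID of the instance that sent it:

tcpdump -i eth0 'ip proto 112'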

Netfilter uses nf_conntrack to track the connections passing through your servers, and kernel networking parameters can be changed immediately with the sysctl command. Load the module and open the sysctl configuration file on both servers:

modprobe nf_conntrack
nano /etc/sysctl.conf

With nf_conntrack enabled and sysctl configured as described below, keepalived will be able to track connections between the servers and reassign the floating IP addresses between LB1 and LB2 as necessary, depending on which node should be active and which passive at the time.

To complete the server configuration for keepalived, add the following settings to /etc/sysctl.conf. They enable packet forwarding, allow binding to the floating IP even when it is not currently assigned to the node, and raise the connection-tracking table limit:

net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
net.nf_conntrack_max = 1000000

Apply the new settings:

sysctl -p
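You can confirm that the new values are active by querying them directly:

sysctl net.ipv4.ip_forward net.ipv4.ip_nonlocal_bind net.nf_conntrack_max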

The configuration is now complete and you can start keepalived on both servers.

systemctl start keepalived
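Keepalived reports its VRRP state transitions through syslog, so you can check that the daemon started cleanly and see which node became MASTER:

systemctl status keepalived
journalctl -u keepalived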

Verify Keepalived’s Status

Now we need to ensure our keepalived instances are operating as expected. First, we’ll check that both floating IP addresses are assigned to the first load balancer. To do so, execute ip addr show on LB1 and see whether the floating IP addresses are present:

root@lb1:/etc# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:e4:2f brd ff:ff:ff:ff:ff:ff
inet 173.209.49.66/29 brd 173.209.49.71 scope global eth0
valid_lft forever preferred_lft forever
inet 173.209.49.70/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:e42f/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:ea:2d brd ff:ff:ff:ff:ff:ff
inet 10.119.0.1/24 brd 10.119.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.119.0.10/32 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:ea2d/64 scope link
valid_lft forever preferred_lft forever

Verify that 173.209.49.70 and 10.119.0.10 are assigned to LB1. The presence of these addresses indicates that LB1 is active and LB2 is passive. Now, if we shut down keepalived on LB1 those IP addresses should appear on the second server.

root@lb1:/etc# systemctl stop keepalived

Switch to LB2 and check the IP addresses:

root@lb2:~# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:ae:b8 brd ff:ff:ff:ff:ff:ff
inet 173.209.49.67/29 brd 173.209.49.71 scope global eth0
valid_lft forever preferred_lft forever
inet 173.209.49.70/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:aeb8/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:ed:ba brd ff:ff:ff:ff:ff:ff
inet 10.119.0.2/24 brd 10.119.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.119.0.10/32 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:edba/64 scope link
valid_lft forever preferred_lft forever

Verify that the floating IP addresses are now assigned to the second node. If so, LB2 is now active. The outwardly visible portion of the configuration has now been verified.
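To finish the failover test, start keepalived on LB1 again. Because LB1’s VRRP instances carry the higher priority (101 versus 100) and keepalived preempts by default, the floating IP addresses should move back to LB1 within a few seconds:

root@lb1:/etc# systemctl start keepalived
root@lb1:/etc# ip addr show | grep -E '173.209.49.70|10.119.0.10'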

As a last quick check, confirm that the backends are correctly registered in the IPVS table. If the ipvsadm utility is not already present on your system, install it with apt-get install ipvsadm, then run:

root@lb1:/etc# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 173.209.49.70:http wrr
-> 10.119.0.100:http Masq 1 0 0
-> 10.119.0.101:http Masq 1 0 0
TCP 173.209.49.70:https sh
-> 10.119.0.100:https Masq 1 0 0
-> 10.119.0.101:https Masq 1 0 0

Provided all IP addresses show up as expected, keepalived is working correctly.
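If you want to watch traffic being distributed across the backends in real time, ipvsadm can also display connection and traffic counters:

watch -n 2 ipvsadm -Ln
ipvsadm -Ln --stats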

Conclusion

Keepalived is now installed on your LVS cluster of two servers. Following the basic principles above, you can increase the size of your cluster if you wish to achieve even higher availability. Even with just two servers, your keepalived instance should make major downtime a thing of the past. If you found this article helpful, feel free to share it with your friends and let us know in the comments below!