
How to Achieve High Availability Load Balancing with Keepalived on Ubuntu 14.04


Load balancing is a way to distribute workloads across multiple computing resources so that large, resource-intensive tasks can be completed safely and reliably, with more efficiency and speed than a single machine could provide on its own.

Keepalived is a free and open source load balancing solution for Linux systems. This guide will go over the installation process for Keepalived on Ubuntu 14.04 and how you can use it for load balancing on your Linux clusters.

Getting Started

Before you begin to follow the steps in this guide, make sure that you meet these requirements:
• Two servers (Cloud Server or Dedicated Server), each running a fresh installation of Ubuntu 14.04. We will call these servers LB1 and LB2 below
• Both servers connected to the same LAN
• Root access to both servers

For the purposes of this guide, we’ll be working with a public network of 173.209.49.66/29 and a private LAN of 10.119.0.0/24 (addresses drawn from the class A private range 10.0.0.0/8).

Tutorial

For your reference, here are the servers, or load balancers, we’ll be working with, along with their respective public and private IP addresses. Where necessary, remember to replace them with the IP addresses of your own servers.

LB1
Public: 173.209.49.66
Private: 10.119.0.1

LB2
Public: 173.209.49.67
Private: 10.119.0.2

The load balancers will make use of a “floating IP”, and we’ll configure active and passive redundancy as well.

Floating
Public: 173.209.49.70
Private: 10.119.0.10

The first task is to ensure that both servers are fully up to date.

apt-get update
apt-get -y upgrade

If it’s installed, disable Ubuntu’s default firewall.

ufw disable
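
If you would rather keep ufw enabled, the two load balancers must at least be able to exchange VRRP advertisements (IP protocol 112, sent to the multicast address 224.0.0.18), and the traffic forwarded by IPVS has to be permitted as well, which is why this guide simply disables the firewall. As a rough sketch, the VRRP part could be allowed by adding rules like the following to /etc/ufw/before.rules (the interface names are taken from this guide’s example setup) and then running ufw reload:

# allow VRRP advertisements on the public and private interfaces
-A ufw-before-input -i eth0 -d 224.0.0.18 -p 112 -j ACCEPT
-A ufw-before-input -i eth1 -d 224.0.0.18 -p 112 -j ACCEPT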

The next step is to install Keepalived and all necessary dependencies.

apt-get install linux-headers-$(uname -r) keepalived
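
Once the package is installed, you can confirm which version was pulled in from the Ubuntu repositories:

keepalived --version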

Use this command to enable Keepalived at boot. We’ll also load the IPVS kernel module, ip_vs.

update-rc.d keepalived defaults
modprobe ip_vs
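
Note that modprobe only loads ip_vs for the current boot. One way to have the module loaded automatically after a reboot is to list it in /etc/modules:

echo "ip_vs" >> /etc/modules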

Now we’ll have to configure Keepalived for our setup.

echo "" > /etc/keepalived/keepalived.conf
nano /etc/keepalived/keepalived.conf

Here is the configuration for the LB1 server.

vrrp_instance VI_LOCAL {
    interface eth1
    state MASTER
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        10.119.0.10
    }
    track_interface {
        eth0
        eth1
    }
}

vrrp_instance VI_PUB {
    interface eth0
    state MASTER
    virtual_router_id 52
    priority 101
    virtual_ipaddress {
        173.209.49.70
    }
    track_interface {
        eth0
        eth1
    }
}

virtual_server 173.209.49.70 443 {
    delay_loop 4
    lb_algo sh # source hash
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}

virtual_server 173.209.49.70 80 {
    delay_loop 4
    lb_algo wrr # weighted round robin
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}

And here is the configuration for the LB2 server.

vrrp_instance VI_LOCAL {
    interface eth1
    state BACKUP
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.119.0.10
    }
    track_interface {
        eth0
        eth1
    }
}

vrrp_instance VI_PUB {
    interface eth0
    state BACKUP
    virtual_router_id 52
    priority 100
    virtual_ipaddress {
        173.209.49.70
    }
    track_interface {
        eth0
        eth1
    }
}

virtual_server 173.209.49.70 443 {
    delay_loop 4
    lb_algo sh # source hash
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}

virtual_server 173.209.49.70 80 {
    delay_loop 4
    lb_algo wrr # weighted round robin
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}
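
A quick note on the real_server entries: 10.119.0.100 and 10.119.0.101 are the backend web servers, and setting them up is outside the scope of this guide. Because lb_kind is set to NAT, replies from the backends must travel back through the load balancer, which in practice usually means pointing each backend’s default gateway on the private network at the floating private IP. A minimal sketch of that, run on each backend (assuming it sits on the same 10.119.0.0/24 LAN):

# route outbound traffic from the backend through the floating private IP
ip route replace default via 10.119.0.10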

The “virtual_router_id” setting must be unique for each of the defined VRRP instances, and it should also be unique within your VLAN. Make sure you’re not using the same ID on any two clusters connected to the same physical switch or VLAN. Naturally, this ID needs to match on both LB1 and LB2 for the same VRRP instance. Valid values range from 1 to 255.
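
If you’re not sure whether another cluster on the same network segment is already using a given ID, one way to check is to capture VRRP traffic for a short while and look at the vrid values in the advertisements, for example:

tcpdump -i eth0 -nn 'ip proto 112'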

Now, load the nf_conntrack kernel module, then move on to the sysctl configuration.

modprobe nf_conntrack
nano /etc/sysctl.conf

Edit the configuration so that it includes the following lines:

net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
net.nf_conntrack_max = 1000000

Now, apply the changes.

sysctl -p
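
To confirm that the new values are in effect, you can query them directly:

sysctl net.ipv4.ip_forward net.ipv4.ip_nonlocal_bind net.nf_conntrack_max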

Finally, we can start up Keepalived.

service keepalived start
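
If Keepalived doesn’t behave as expected, its messages end up in the system log on Ubuntu 14.04, so a quick way to look for errors is:

grep -i keepalived /var/log/syslog | tail -n 20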

Let’s verify that Keepalived is behaving as expected. First, check that both floating IPs are assigned to the first Keepalived instance, LB1.

Using the command ip addr show, you can see if the IPs are present:

root@lb1:/etc# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:e4:2f brd ff:ff:ff:ff:ff:ff
inet 173.209.49.66/29 brd 173.209.49.71 scope global eth0
valid_lft forever preferred_lft forever
inet 173.209.49.70/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:e42f/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:ea:2d brd ff:ff:ff:ff:ff:ff
inet 10.119.0.1/24 brd 10.119.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.119.0.10/32 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:ea2d/64 scope link
valid_lft forever preferred_lft forever

If everything’s been set up correctly, you’ll see 173.209.49.70 and 10.119.0.10 on LB1. If you shut down keepalived on LB1, those same IP addresses will appear on the second server.

root@lb1:/etc# service keepalived stop

After shutting down Keepalived on LB1, go to the second server and check that it has indeed taken over those IP addresses:

root@lb2:~# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:ae:b8 brd ff:ff:ff:ff:ff:ff
inet 173.209.49.67/29 brd 173.209.49.71 scope global eth0
valid_lft forever preferred_lft forever
inet 173.209.49.70/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:aeb8/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:ed:ba brd ff:ff:ff:ff:ff:ff
inet 10.119.0.2/24 brd 10.119.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.119.0.10/32 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:edba/64 scope link
valid_lft forever preferred_lft forever

Finally, confirm that the backend servers defined in the Keepalived configuration have been registered with IPVS:

root@lb1:/etc# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 173.209.49.70:http wrr
-> 10.119.0.100:http Masq 1 0 0
-> 10.119.0.101:http Masq 1 0 0
TCP 173.209.49.70:https sh
-> 10.119.0.100:https Masq 1 0 0
-> 10.119.0.101:https Masq 1 0 0
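
The hostnames and service names in this output come from DNS and /etc/services lookups. If you prefer raw IP addresses and ports, or want to watch the ActiveConn and InActConn counters change as traffic arrives, you can run ipvsadm in numeric mode, optionally under watch:

ipvsadm -L -n
watch -n 1 ipvsadm -L -n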

Conclusion

Now that you’ve completed this guide, you should be able to set up Keepalived on your own clusters. Keepalived is scalable, so you can try it out on clusters of any size. If you found this article helpful, feel free to share it with your friends and let us know in the comments below!