Galera lets you scale your database setup both vertically and horizontally. If your reads are not fast enough, or you would like to rearrange your database server to web server assignments, scaling horizontally by adding database servers is the recommended approach. Just make sure to follow the right steps.
Warning! To keep a healthy quorum, it is very important to always keep an odd number of Galera nodes. In this tutorial, we will be adding 2 new nodes to an existing 3-node Galera cluster. If you only want to add the capacity of 1 node to the cluster, you can add what is called an Arbitrator to act as the 5th member of your quorum. You can learn more on how to set up a Galera Arbitrator node here. As usual, we recommend that you take a backup of your data before making any changes in production.
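The odd-number rule comes down to majority arithmetic: a cluster keeps quorum only while a strict majority of nodes (more than half) can see each other, so an even-sized cluster tolerates no more failures than the odd size just below it. A quick sketch of the math:

```shell
# Quorum needs a strict majority: n/2 + 1 nodes (integer division).
for n in 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n nodes: quorum needs $quorum, tolerates $(( n - quorum )) failure(s)"
done
```

Note how a 4-node cluster tolerates no more failures than a 3-node one, which is why we add two nodes (or one node plus an arbitrator) at a time.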
Installing Galera on the new nodes
On your new nodes, the first step is to install the Galera repository and packages. Start that process by installing the software-properties-common package and adding the Galera signing key.
apt-get install software-properties-common
apt-key adv --keyserver keyserver.ubuntu.com --recv BC19DDBA
Then you need to create a repository file in the source list directory, /etc/apt/sources.list.d/galera.list, containing the following line, so that apt can query that repository.
deb http://releases.galeracluster.com/ubuntu trusty main
Now that we have a new repository setup, we update the apt cache and install the packages.
apt-get update
apt-get install galera-3 galera-arbitrator-3 mysql-wsrep-5.6
Make sure you have performed those operations on both new nodes.
You will want to verify connectivity with your 3 original nodes and the other new node, so start by pinging them.
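For example, a quick loop from Node4 can check each node in one pass (only 192.168.0.1 appears earlier in this guide; the other addresses are placeholders for your own nodes' IPs):

```shell
# Ping each original node plus the other new node once
# (IPs other than 192.168.0.1 are example placeholders)
for ip in 192.168.0.1 192.168.0.2 192.168.0.3 192.168.0.5; do
  if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
    echo "$ip is reachable"
  else
    echo "$ip is NOT reachable"
  fi
done
```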
Then, try to connect to the MySQL server on the other nodes. For example, from Node4 to MySQL on Node1 (192.168.0.1):
mysql -h 192.168.0.1 -u root -p
If you are prompted for your password, the MySQL server on the other side is listening. You can now proceed with configuring MySQL properly.
Configuring the new Galera Node
From one of your original nodes (Node1, for example), copy the configuration to the new nodes. In this example, we are copying the file from Node1 to Node4:
scp /etc/mysql/my.cnf root@Node4:/etc/mysql/
You may also want to copy the /etc/mysql/debian.cnf file from one of the original nodes. In this example, we are copying the file from Node1 to Node4:
scp /etc/mysql/debian.cnf root@Node4:/etc/mysql/
On the new nodes, create the necessary users. This operation is needed on both Node4 and Node5:
mysql -u root -p
CREATE USER 'wsrep'@'%' IDENTIFIED BY 'password';
CREATE USER 'root'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
Finally, make sure to add the new servers to the wsrep_cluster_address line in the MySQL configuration file.
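With the two new nodes added, that line would look something like this (192.168.0.1 is Node1 from this example; the other four addresses are placeholders for your own nodes' IPs):

```ini
# /etc/mysql/my.cnf — substitute your nodes' real IP addresses
wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4,192.168.0.5"
```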
One last thing before you start up the new components:
Warning! If you are using rsync or mysqldump for the SST transfer, bringing up the new nodes will lock the databases on the donor nodes. This could bring down heavily used setups, so we strongly recommend adding those nodes during a period of low traffic. Additionally, you may want to set the load balancer to redirect requests to only 2 nodes instead of 3. You can then designate the third node as the donor, which means it will be the only node with its database locked while it provides the new nodes with the data.
To set an SST donor, add the following line in the MySQL configuration on the two new nodes, where the value matches the donor's wsrep_node_name (Node3 in this example):
wsrep_sst_donor=Node3
You can then start the new nodes and they should begin replicating the data from the donor.
service mysql restart
Just like when setting up your original cluster, you can test the replication by adding databases. However, if your cluster is already in use, you can simply watch as existing data gets replicated by running a test query on the new nodes. Something as simple as listing the databases should work.
mysql -u root -p
SHOW DATABASES;
Alternatively, if you want to make sure your new nodes are in the cluster, use the following:
show status like '%wsrep%';
The "wsrep_cluster_size" line in the output tells you how many nodes are in the cluster. If that number is equal to the number of servers you have set up, then all is well. You have now successfully scaled out your cluster.
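If you want to script that check, a small sketch like the one below can parse out the value. The sample variable stands in for real output from `mysql -u root -p -N -e "SHOW STATUS LIKE 'wsrep_cluster_size';"` (-N suppresses the column headers), and the expected count of 5 matches this tutorial's cluster:

```shell
# Parse wsrep_cluster_size from mysql's tab-separated output.
# 'sample' is a stand-in; in practice capture the real query output here.
sample="wsrep_cluster_size	5"
size=$(echo "$sample" | awk '{print $2}')
expected=5
if [ "$size" -eq "$expected" ]; then
  echo "cluster complete: $size nodes"
else
  echo "cluster incomplete: $size of $expected nodes"
fi
```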
Your Galera cluster now runs on 5 nodes and is ready for more action. If this guide was helpful to you, kindly share it with others who may also be interested.