How to install Galera on Ubuntu 14.04

Galera Cluster is a plugin for MySQL that enables the Galera replication engine and lets you replicate your databases between servers in a synchronous master-master manner. It requires the InnoDB storage engine to be set as the default for all replicated databases. It’s strongly recommended to run an odd number of Galera nodes to keep a healthy quorum. While MariaDB and Percona XtraDB versions exist, we will cover the default MySQL installation in this tutorial.

Getting Started

• 3 cloud instances or dedicated servers (which we will call “nodes”) with Ubuntu 14.04 installed
• Review your AppArmor policies for MySQL to prevent conflicts
• Galera Packages
• RSync installed on each node (this should be the case by default on Ubuntu 14.04)
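A quick way to confirm that rsync is present on a node is to ask it for its version; if the command prints a version banner, you are good to go:

rsync --version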

Install Galera

First, we’ll want to install the Codership repository, which contains the packages we need. We start that process by installing the software-properties-common package, as it is needed to properly manage additional custom repositories.

apt-get install software-properties-common
apt-key adv --keyserver keyserver.ubuntu.com --recv BC19DDBA

Then we need to create a repository file in the sources list, /etc/apt/sources.list.d/galera.list, containing the following line so that apt can query the repository.

deb http://releases.galeracluster.com/ubuntu trusty main
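If you prefer to create the file in a single step, the following command (run as root) writes the repository line shown above into /etc/apt/sources.list.d/galera.list:

echo "deb http://releases.galeracluster.com/ubuntu trusty main" > /etc/apt/sources.list.d/galera.list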

Now that we have a new repository setup, we update the apt cache.

apt-get update

Galera needs 3 packages installed to work. Please also note that these packages CAN and WILL conflict with other MySQL packages, so it is better to have a clean server with no MySQL packages installed to avoid dependency hell. So, next we install these 3 packages on all 3 nodes:

apt-get install galera-3 galera-arbitrator-3 mysql-wsrep-5.6
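If you want to double-check that these packages really came from the Codership repository rather than the stock Ubuntu archive, you can inspect them afterwards:

dpkg -l | grep -E "galera|wsrep"
apt-cache policy mysql-wsrep-5.6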

Once this is done, only starting MySQL on each node remains.

service mysql start
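Before moving on, you can confirm on each node that the service actually came up:

service mysql status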

You now have a working MySQL instance on each node. Once you have the packages installed, you will want to ensure that connectivity to MySQL is possible between nodes. Start by pinging each node from the others.

root@node1:~# ping 192.168.0.2
root@node1:~# ping 192.168.0.3
root@node2:~# ping 192.168.0.1
root@node2:~# ping 192.168.0.3
root@node3:~# ping 192.168.0.1
root@node3:~# ping 192.168.0.2

If all is well, modify the MySQL configuration at /etc/mysql/my.cnf so that MySQL binds on all IP addresses, or modify it to bind to your preferred MySQL communication IP. The address 0.0.0.0 is a wildcard and means that MySQL listens on all of the server’s IPs.
bind-address=0.0.0.0
service mysql restart
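To confirm that the new bind address took effect, you can check what mysqld is listening on; it should now show 0.0.0.0:3306 (or your chosen IP):

netstat -plnt | grep mysqld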

Then, try to connect to the MySQL server on the other nodes.
mysql -h node2ip -u root -p

If you are prompted for a password, the MySQL server on the other side is listening. You can now proceed with configuring MySQL properly.

Configuration

Now we will configure the my.cnf that you will use on the Galera Cluster nodes. If the default configuration is not already set like this, you will want to add the following lines under the [mysqld] section of your /etc/mysql/my.cnf file:

binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_provider_options="gcache.size=300M; gcache.page_size=1G"
wsrep_cluster_name="galeracluster"
wsrep_cluster_address="gcomm://192.168.0.1,192.168.0.2,192.168.0.3"
wsrep_sst_method=rsync
wsrep_sst_auth=wsrep:password

There are a few things to change and consider here.

  • First of all, the default storage engine line is necessary, because Galera can only replicate InnoDB tables.
  • The wsrep_cluster_name is the name of your Galera setup. It needs to be the same on every node.
  • The wsrep_cluster_address is where each node is defined. On the very first node you may want to start with just “gcomm://” instead (see the example after this list), or else Galera will try to connect to the other nodes and fail because they are not started yet.
  • The wsrep_sst_method line indicates the method Galera will use to send large chunks of data from one node to another when a node has fallen too far behind. The three options are rsync (the fastest), mysqldump (the slowest) and xtrabackup. Both rsync and mysqldump lock the database on the donor node during the transfer, while xtrabackup does not.
  • The last line, wsrep_sst_auth, is the MySQL user and password used for replication. If using mysqldump, it needs to be the MySQL root user, and the same root password must be used on all nodes. If using xtrabackup, it can be any user that has access to all tables.
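To make the first-node exception concrete: during the initial bootstrap, the only line that would differ on the first node is the cluster address, while the other nodes keep the full list from the configuration above.

wsrep_cluster_address="gcomm://"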

Next, create the replication user on each node.

mysql -u root -p
CREATE USER 'wsrep'@'%' IDENTIFIED BY 'password';

We will also want the root user to be reachable remotely.

CREATE USER 'root'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;

(Please note that it may be preferable to specify your network with a trailing wildcard, such as 192.168.0.%, instead of just %, because with a plain % the MySQL root user becomes reachable from any IP address.)
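If you want to verify that the accounts exist with the expected hosts, you can list them from within the mysql client; this simply reads MySQL's built-in user table:

SELECT User, Host FROM mysql.user;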

Ubuntu has a special user called debian-sys-maint that performs maintenance tasks on MySQL. Because of the database replication, these maintenance operations will only continue to work properly on the first node and will fail on the others. You will want to copy /etc/mysql/debian.cnf from the original node to the other nodes so that the maintenance operations keep working.

scp /etc/mysql/debian.cnf root@node2:/etc/mysql/
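Do the same for the third node (assuming the hostname node3 resolves; otherwise substitute its IP address):

scp /etc/mysql/debian.cnf root@node3:/etc/mysql/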

Once all of this is done, we are ready to start the cluster. On the original node, make sure that gcomm:// is left empty in the MySQL configuration. Then shut down MySQL and restart it like this:

service mysql stop
service mysql start --wsrep-new-cluster
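At this point the cluster exists but has only one member. You can confirm this from the first node by querying Galera's status variables; wsrep_cluster_size should report 1 for now:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"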

You will now have the first node up. On the other nodes, make sure your configuration is set properly and then restart MySQL.

service mysql restart

MySQL should take slightly longer than usual to start, as it will sync with the first node. You now have a working master-master Galera cluster. The last step is to verify that this setup is working.

Testing it

You can check that master-master replication is working properly by creating a database on one node and verifying that it exists on the other nodes. First, log in to the MySQL instance on one of the nodes.

mysql -u root -p

Then we create a new database.

CREATE DATABASE test;

Next, we go to another node and check whether our database was created there.

root@node2:~# mysql -u root -p
SHOW DATABASES;

If you see a database named test in the database list, it means the replication worked flawlessly.
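Beyond this functional test, you can also ask Galera itself about the cluster state. On a healthy three-node cluster, wsrep_cluster_size should report 3 and wsrep_local_state_comment should report Synced:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"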

Conclusion

You now have a fully functional Galera cluster. You may now want to configure your application to send queries to the cluster, or set up a load balancer to better control where the queries end up. If you are interested in using HAProxy for your load-balancing needs, we have a knowledge base article explaining how to set up HAProxy for use with MySQL. If this guide was helpful to you, kindly share it with others who may also be interested.