
How To Configure Apache Virtual Hosts on Ubuntu 16

Virtual hosting is the technique of hosting multiple websites on one server. Virtual hosting enables efficient utilization of computing resources because the multiple websites share the computing resources of the host server. Apache is one of the most popular web servers in the world, and it provides a mechanism for enabling virtual hosting on a web server. Virtual hosting works by creating a directory for each domain/website and then directing visitors of a domain to that site’s specific directory. The whole process is handled transparently by Apache, and the user is not aware that the server is hosting other websites. Apache virtual hosting is very scalable and can be expanded to handle a large number of sites.

Getting started
To complete this walkthrough successfully the following are required:
• A node (Dedicated or Cloud Server) running Ubuntu 16
• All commands must be entered as root
• Apache web server installed

Tutorial
This guide demonstrates how to implement virtual hosting for two domains, www.globo.tech and support.globo.tech, on a single server. The two domains can be substituted for any domain that you may use.

First, create the directories for each website in the /var/www directory using the following commands:
sudo mkdir -p /var/www/www.globo.tech
sudo mkdir -p /var/www/support.globo.tech
Next, change the ownership of the directories so that the regular user can modify them:
sudo chown -R $USER:$USER /var/www/www.globo.tech
sudo chown -R $USER:$USER /var/www/support.globo.tech

Next, change the permissions on the directories so that visitors to the sites have read access to the web pages:
sudo chmod -R 755 /var/www/www.globo.tech
sudo chmod -R 755 /var/www/support.globo.tech
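So that there is something to serve when you test the sites later, you can drop a simple placeholder page into each document root. The file names and markup below are just an example and not part of the original setup:

echo '<h1>www.globo.tech virtual host is working</h1>' > /var/www/www.globo.tech/index.html
echo '<h1>support.globo.tech virtual host is working</h1>' > /var/www/support.globo.tech/index.html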

The next step is creating a virtual host file for each website; this is accomplished by copying the default virtual host file and then editing it appropriately. Use the following commands to copy the default host file:

sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/www.globo.tech.conf
sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/support.globo.tech.conf

The virtual host files have to be modified so that they can direct requests appropriately. Open the virtual host file for the www.globo.tech site using an editor of your choice. In this example nano has been used:

sudo nano /etc/apache2/sites-available/www.globo.tech.conf

Subsequently, add a ServerName directive; this is the domain for the website:

ServerName www.globo.tech

Next, modify the DocumentRoot directive so that it matches the directory that you created for the website:

DocumentRoot /var/www/www.globo.tech

If the changes have been implemented correctly, the virtual host file will look like this:
<VirtualHost *:80>
ServerAdmin webmaster@localhost
ServerName www.globo.tech
DocumentRoot /var/www/www.globo.tech
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

The same changes should be made for the support.globo.tech website. Open the virtual host file using the following command:
sudo nano /etc/apache2/sites-available/support.globo.tech.conf

Alter the file so that it matches the following:
<VirtualHost *:80>
ServerAdmin webmaster@localhost
ServerName support.globo.tech
DocumentRoot /var/www/support.globo.tech
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

The final step is to enable the new virtual hosts and reload the web server. Apache provides the a2ensite tool for enabling sites. Use the following commands to enable the two sites:
sudo a2ensite www.globo.tech.conf
sudo a2ensite support.globo.tech.conf

Finally, restart Apache using the following command:
sudo systemctl restart apache2
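Optionally, you can check the configuration for syntax errors before restarting and then confirm that each site responds. The curl checks assume that DNS (or your local hosts file) already points both domains at this server, that a test page exists in each document root, and that curl is installed (wget -qO- works just as well):

sudo apache2ctl configtest
curl http://www.globo.tech
curl http://support.globo.tech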

Conclusion
You have now set up virtual hosting, and you can use your single server to serve two websites. You can experiment further by adding more sites to your server. If this walkthrough has assisted you, do not hesitate to share it with other people.

How to Install & Update OpenVPN on Ubuntu 16

Using OpenVPN allows you to access the internet securely, especially when you’re connected to a public or untrusted network. OpenVPN is a solution that enables you to create a wide array of network configurations, allowing customized private network solutions that can meet a variety of needs. OpenVPN is open-source software that employs Secure Sockets Layer (SSL) protocols for additional security.


OpenVPN allows authentication through pre-shared secret keys, certificates, or a username and password combination. Through this authentication, secure point-to-point connections are established with high-level encryption protocols.

Getting Started

When you decide to install and update OpenVPN on Ubuntu 16.04, you will first need a node running Linux Ubuntu 16.04 LTS; the node you choose can be on a cloud server or a dedicated server. It’s important to verify that your operating system is running the most recent version, including any updates or patches that may need to be installed.

Update OpenVPN on Ubuntu 16

The first step in any successful implementation is updating the system, verifying that all necessary updates have been pushed and the install itself is clean. You can check this by running the following commands:
$ apt-get update
$ apt-get upgrade

Once the updates are pushed, you can then proceed with installing OpenVPN and EasyRSA on your node:
$ apt-get install openvpn easy-rsa

Now that OpenVPN and EasyRSA have been installed, it’s time to set up the CA Directory and then move to it:
$ make-cadir ~/openvpn-ca && cd ~/openvpn-ca

You will need to edit the vars file to match the information you have:
$ nano vars
export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_OU="MyOrganizationalUnit"

If needed or if you choose, you can edit the keyname as well:
export KEY_NAME="server"

After setting up the directory, editing the vars file, and editing the keyname if you chose to do so, it’s time to build the CA Authority:
$ source vars
$ ./clean-all
$ ./build-ca

At this point you will receive a set of prompts; you may press ENTER at each prompt to accept the default values.

When the prompts have completed, it’s time to create the server certificate, the key, and the encryption files. If you opted to change the KEY_NAME value earlier, make sure you build the key under that name rather than the default “server”:
$ ./build-key-server server

Make sure to accept the default entry during the build.

Now it’s time to generate the DH Key:
$ ./build-dh

After generating the DH Key, the TLS Key will need to be generated:
$ openvpn --genkey --secret keys/ta.key

There are two options for building a client certificate here: one that protects it with a password and one that does not.

No Password Option
It’s time to generate a client key pair and certificate, replacing “client” with the name of your generated certificate:
$ cd ~/openvpn-ca
$ source vars
$ ./build-key client

Password Option
If you would prefer to have a password assigned to your certificate during this build, follow the below commands:
$ cd ~/openvpn-ca
$ source vars
$ ./build-key-pass client

Now that the certificate has been built, with or without a password, the OpenVPN server can be configured. During this configuration, make sure to match KEY_NAME with the correct name:
$ cd ~/openvpn-ca/keys
$ cp ca.crt server.crt server.key ta.key dh2048.pem /etc/openvpn
$ gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz | tee /etc/openvpn/server.conf

We need to edit the server.conf file to continue the configuration, following the steps and commands outlined below, as applicable:
$ nano /etc/openvpn/server.conf
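The exact edits depend on your needs. As a commonly used baseline (our assumption, not a prescribed list), you would point the certificate directives at the files copied above, enable the TLS auth key, match the cipher settings used later in the client configuration, and drop privileges after startup by making sure lines such as the following are present and uncommented in server.conf (if you changed KEY_NAME earlier, point cert and key at those file names instead):

ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
tls-auth ta.key 0
cipher AES-128-CBC
auth SHA256
user nobody
group nogroup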

Once the edits are complete, it’s time to configure the server to forward traffic through the VPN:
$ nano /etc/sysctl.conf
Add the following line:
net.ipv4.ip_forward=1
Then reload sysctl:
$ sysctl -p

First, we need to locate the primary interface:
$ ip route | grep default
default via 192.168.65.254 dev eth0 onlink

After the primary interface is located, the UFW rules will need to be altered:
$ nano /etc/ufw/before.rules
# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0 (change to the interface you discovered!)
-A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE
COMMIT
# END OPENVPN RULES

During this alteration, the default UFW rules will also need to be edited:
$ nano /etc/default/ufw
DEFAULT_FORWARD_POLICY="ACCEPT"

When the necessary edits are complete, it’s time to open the firewall port on OpenVPN:
$ ufw allow 1194/udp
$ ufw allow OpenSSH
$ ufw disable
$ ufw enable

It’s time to start and enable the OpenVPN server. When the server is enabled, make sure to check the server status:
$ systemctl start openvpn@server
$ systemctl enable openvpn@server
$ systemctl status openvpn@server

We need to create the client configuration file, making a few minor edits and adding some comments:
$ mkdir -p ~/client-configs/files
$ chmod 700 ~/client-configs/files
$ cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf
$ nano ~/client-configs/base.conf

Edit the following lines:
remote server_IP_address 1194
proto udp
user nobody
group nogroup

Comment out these lines by adding “#”:
#ca ca.crt
#cert client.crt
#key client.key

Then add:
cipher AES-128-CBC
auth SHA256
key-direction 1

After completing the edits and comments, you will need to create a script that generates the config file, making sure to run the following commands and include any necessary changes:
$ nano ~/client-configs/make_config.sh
Then add:
#!/bin/bash

# First argument: client identifier
KEY_DIR=~/openvpn-ca/keys
OUTPUT_DIR=~/client-configs/files
BASE_CONFIG=~/client-configs/base.conf

cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn

Change the permissions:
chmod 700 ~/client-configs/make_config.sh

Finally, it’s time to generate the client file:
cd ~/client-configs
./make_config.sh client_name

You should be able to access the client file:
ls ~/client-configs/files
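To actually use the profile, copy the generated .ovpn file to the client machine and start the tunnel there. The scp destination below is only an example, and the second command assumes the OpenVPN client is installed on that machine:

scp ~/client-configs/files/client_name.ovpn user@client-machine:~/
sudo openvpn --config ~/client_name.ovpn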

Conclusion

Congratulations, you’ve successfully installed and updated OpenVPN on your node running Ubuntu 16.04 LTS. You’re now ready to run your OpenVPN instance and begin securely connecting and transmitting data over a variety of networks; make sure to update OpenVPN as needed or when critical updates are pushed. If you found this guide helpful, please share it with other users engaging in similar setups.

How to Achieve High Availability with Heartbeat & DRBD on Ubuntu 16

Heartbeat and DRBD can be used effectively to maintain high availability for MySQL databases on Ubuntu 16.04. Heartbeat is a network-oriented tool for maintaining high availability and managing failover. With heartbeat, you can make sure that a shared IP address is active on one and only one server at a time in your cluster. DRBD (Distributed Replicated Block Device) synchronizes data between two nodes by writing the data to the primary server first and then writing to the secondary server. When used to provide high availability for MySQL databases, heartbeat and DRBD allow for a seamless transition from the primary to the secondary server when the primary server fails. This guide will take you through the steps necessary to create and initiate such a setup.

Getting Started

In order to follow this guide you will need to have the following in place:
• Two nodes (Cloud Server or Dedicated Server) on a single LAN with Ubuntu 16.04 installed. We will call these servers DB1 and DB2 below
• Three network cards on each server to allow for the establishment of all required IP addresses
• A second unpartitioned drive on each server to serve as the DRBD device
• Root access to the nodes

Throughout the tutorial, ALL commands must be done on both servers unless otherwise specified.

Tutorial

IP Assignment

You will need to assign public and private IP addresses to the servers, a floating IP address through which your MySQL databases will be accessed, and IP addresses for DRBD traffic. In this guide, we assume that the IP range for your public network is 173.209.49.66/29 and the IP range for your private network (LAN) is 10.119.0.0/24. We also assume you have a third network card on both servers with a cross-connect between servers for DRBD traffic. The LAN subnet for this cross-connect presumably covers the IP range 192.168.5.0/24.

The networking required for such a setup may be configured in a number of ways. We chose to use the easiest to understand. In the setup described below, the first LAN will be used for floating IP’s and heartbeat while the third network will be reserved for DRBD.

We have assigned the following IP addresses to the components we will use in this tutorial.

Server eth0 (wan) eth1 (lan) eth2 (drbd)
DB1 173.209.49.66 10.119.0.1 192.168.5.1
DB2 173.209.49.67 10.119.0.2 192.168.5.2
Floating None 10.119.0.10 None
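How these addresses get assigned depends on your environment. As a minimal sketch only, assuming the classic ifupdown configuration that Ubuntu 16.04 uses by default and a gateway of 173.209.49.65, the relevant stanzas of /etc/network/interfaces on DB1 might look like the following (DB2 would use its own addresses):

auto eth0
iface eth0 inet static
address 173.209.49.66
netmask 255.255.255.248
gateway 173.209.49.65

auto eth1
iface eth1 inet static
address 10.119.0.1
netmask 255.255.255.0

auto eth2
iface eth2 inet static
address 192.168.5.1
netmask 255.255.255.0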

Always make sure your system is up to date before you install any software.

apt-get update
apt-get -y upgrade

Disable Ubuntu’s firewall (if it is installed) before you assign the above IP addresses.

ufw disable

Configure the hostnames for your servers by running the following commands.

On DB1:

echo "db1.mydomain.com" > /etc/hostname
hostname db1.mydomain.com

On DB2:
echo "db2.mydomain.com" > /etc/hostname
hostname db2.mydomain.com

Add the hostnames you have created as entries in the host file:

nano /etc/hosts

Next, add the following entries on both servers. This will bind the hostnames to the DRBD IPs in the 192.168.5.0/24 range:

192.168.5.1 db1.mydomain.com
192.168.5.2 db2.mydomain.com

Your network is now set up.
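A quick connectivity check over the DRBD cross-connect never hurts. Run this on DB1 (and ping 192.168.5.1 from DB2):

ping -c 3 192.168.5.2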

Install and Configure DRBD and Heartbeat

With your network configuration in place, install DRBD and heartbeat. Configure your nodes so that both DRBD and heartbeat start when the servers boot:

apt-get -y install drbd8-utils heartbeat
systemctl enable drbd
systemctl enable heartbeat

You will need to set up a DRBD device on each server. Check that your second unpartitioned drive is available to use as your DRBD device:

fdisk -l

Below, we can see that we have a 16GB /dev/sdb drive in our setup. We will use this drive for DRBD.

Disk /dev/sda: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcda6f1d3
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 976895 974848 476M 83 Linux
/dev/sda2 976896 4976639 3999744 1.9G 82 Linux swap / Solaris
/dev/sda3 4976640 33552383 28575744 13.6G 83 Linux
Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

The second drive can easily be partitioned using the simple formula below. This formula will create a single partition that occupies the whole disk.

echo -e 'n\np\n1\n\n\nw' | fdisk /dev/sdb
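Before moving on, you can verify that the new partition was created; /dev/sdb1 should now appear in the listing:

fdisk -l /dev/sdb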

With the DRBD devices in place, it is time to configure DRBD on both nodes. First, create the DRBD configuration file on each node:

echo "" > /etc/drbd.d/global_common.conf
nano /etc/drbd.d/r0.res

Add the following into both DRBD configuration files:

global {
usage-count no;
}
resource r0 {
protocol C;
startup {
degr-wfc-timeout 60;
}
disk {
on-io-error detach;
}
syncer {
rate 100M;
}
net {
cram-hmac-alg sha1;
shared-secret "wXE8MqVa";
}
on db1.mydomain.com {
device /dev/drbd0;
disk /dev/sdb1;
address 192.168.5.1:7789;
meta-disk internal;
}
on db2.mydomain.com {
device /dev/drbd0;
disk /dev/sdb1;
address 192.168.5.2:7789;
meta-disk internal;
}
}

We will now edit the various files used to configure our servers for high availability. Remember to edit the files on both servers in exactly the same way except where we indicate there need to be differences.

The first file we will edit is the configuration file /etc/ha.d/ha.cf on both nodes. You may open the file like so:

nano /etc/ha.d/ha.cf

Enter the following parameters in the file. The bcast parameter line is of critical importance. In our case, we have 3 network interfaces. We need to put eth2 on this line. If you are using an existing network setup that is designed differently than our simple example you may need to enter a different value here:

# Check Interval
keepalive 1
# Time before server declared dead
deadtime 10
# Secondary wait delay at boot
initdead 60
# Auto-failback
auto_failback off
# Heartbeat Interface
bcast eth2
# Nodes to monitor
node db1.mydomain.com
node db2.mydomain.com

The next file to edit is the resources file /etc/ha.d/haresources

nano /etc/ha.d/haresources

We will enter only one line in this file. Inspect this entry carefully. It should include the hostname of your main active node (db1), the floating IP (10.119.0.10), the device (/dev/drbd0) and its mount point (/var/lib/mysql):

db1.mydomain.com 10.119.0.10/24 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext4::noatime

To secure your high availability setup you will need to define and store identical authorization keys on both nodes. To do so, open and edit /etc/ha.d/authkeys

nano /etc/ha.d/authkeys

This file should only contain the two lines below. Use the same password on both nodes. The password is the text immediately after the “sha1” statement.

auth 1
1 sha1 e86b38f5075de0318548edad9566436423ada422

Using the partitions created above, create the DRBD disks. Start by entering the following command on DB1:

drbdadm create-md r0
systemctl restart drbd
drbdadm outdate r0
drbdadm -- --overwrite-data-of-peer primary all
drbdadm primary r0
mkfs.ext4 /dev/drbd0
chmod 600 /etc/ha.d/authkeys
mkdir /var/lib/mysql

If you get an error message along the following lines, “The file /dev/drbd0 does not exist and no size was specified,” check that your hostnames have been set properly.

Once the DRBD disk is created on DB1, you may create the DRBD disk on DB2:

drbdadm create-md r0
systemctl restart drbd
chmod 600 /etc/ha.d/authkeys
mkdir /var/lib/mysql

With both disks in place, you may now verify that the DRBD disk is connected and is properly syncing

cat /proc/drbd

The above command should yield the below output. In this output, you should see Primary/Secondary. This output indicates that the DB1 node is the master while the other node is the slave. It also shows that everything is syncing as expected.

root@db1:~# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: D496E56BBEBA8B1339BB34A
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:550488 nr:0 dw:135424 dr:416200 al:35 bm:0 lo:0 pe:11 ua:0 ap:0 ep:1 wo:f oos:16233752
[>....................] sync'ed: 3.3% (15852/16380)M
finish: 0:16:45 speed: 16,136 (16,932) K/sec

With properly configured synchronization in place between the primary and secondary nodes, it is time to enable the failover portion of our setup. To do so, we will simply start heartbeat on both nodes

systemctl start heartbeat

The DRBD partition should be mounted on DB1 only. Verify that this is so:

root@db1:/etc/ha.d# mount | grep drbd
/dev/drbd0 on /var/lib/mysql type ext4 (rw,noatime,data=ordered)

Once you verify that the floating IP is only bound to DB1 with the following command, you are ready to install your MySQL-type database service.

root@db1:/etc/ha.d# ip addr show | grep 10.119.0.10
inet 10.119.0.10/24 brd 10.119.0.255 scope global secondary eth1:0

Install MariaDB Database Service

Now is the time to install our database service on both servers. There are several variants of MySQL that were created as forks of the original MySQL source code. In the following, we have opted to install MariaDB because of its performance and good track record.

apt-get install mariadb-server

Percona DB and MySQL are two other options you might choose.

The database service on a given node should only start if that node is designated as the primary node at the time. To ensure you don’t end up with MySQL running on both nodes simultaneously, disable auto-start for the MariaDB service on both nodes:

systemctl disable mysql

By design, DB1 is primary when our setup is initiated. No database service should be running on DB2. So long as DB1 is primary, databases on DB2 should be created and populated through synchronization with DB1. Therefore, on DB2, you need to stop the MariaDB service and empty /var/lib/mysql. Perform this command ON DB2 ONLY:

systemctl stop mysql
rm -rfv /var/lib/mysql/*

Before you proceed further, configure the root password on DB1. To do so, simply run the wizard. Set a new root password and allow all other options to retain their default values for now:

root@db1:/var/lib# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] Y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] Y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] Y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] Y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!

Copy the MySQL Maintenance configuration file from DB1 to DB2

rsync -av /etc/mysql/debian.cnf root@192.168.5.2:/etc/mysql/debian.cnf

Now we will create a root user for remote management of and access to the databases on the highly available MySQL instance. We will make use of wildcards to do so.

Our cluster architecture is set up so that all other servers on our LAN can reach the database at the floating IP 10.119.0.10. If you wish to enable users outside your LAN to access your highly available database you may bind a public IP in /etc/ha.d/haresources for database access as well, following the pattern set above in editing that file.

In our case, we have set up our high availability database servers to be accessible from other servers on the LAN that share the IP range 10.119.0.0/24.

mysql -u root -p

Enter the following commands to create the root user. Replace “my-password” with the MySQL root password you wish to assign to the remote access user:

MariaDB [(none)]> CREATE USER 'root'@'10.119.0.%' IDENTIFIED BY 'my-password';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'10.119.0.%' WITH GRANT OPTION;
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> QUIT;

Set the bind address for MySQL on both servers

sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/mariadb.conf.d/*.cnf
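To confirm the change took effect, you can check which bind-address value the configuration files now contain:

grep -R "bind-address" /etc/mysql/mariadb.conf.d/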

Initiate Heartbeat for MySQL Service

Add support for the MySQL service in our heartbeat instances on both servers. First, open the configuration file:

nano /etc/ha.d/haresources

Then, simply add mysql at the end of the line and save the file

db1.mydomain.com 10.119.0.10/24 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext4::noatime mysql

Once heartbeat is configured we need to restart it on both servers. heartbeat must be started on the primary server (DB1) first. Once heartbeat has started on DB1 allow at least 20 seconds before you restart heartbeat on DB2.

The command sequence is as follows. ON DB1 enter:

systemctl restart heartbeat

Wait 20 seconds or more and then enter the following on DB2:

systemctl restart heartbeat

The delay between initiating the heartbeat stack on DB1 and DB2 will prevent heartbeat from inadvertently initiating failover to DB2 upon startup.

Testing redundancy of our setup

Our goal throughout this tutorial has been to tie together our servers such that MySQL service will not be interrupted if the active server fails. Now that our setup is complete, we will perform a series of tests to verify that heartbeat will actually trigger a transfer from the active server to the passive server when the active server fails in some way. We will also verify that DRBD is properly syncing the data between the two servers.

Tests to Perform on DB1

First, we will verify that DB1 is the primary drbd node

root@db1:~# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: D496E56BBEBA8B1339BB34A
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:22764644 nr:256 dw:529232 dr:22248299 al:111 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Next, we will verify that the DRBD disk is mounted

root@db1:~# mount | grep drbd
/dev/drbd0 on /var/lib/mysql type ext4 (rw,noatime,data=ordered)

The floating IP must be bound correctly for the setup to function properly

root@db1:~# ip addr show | grep 10.119.0.10
inet 10.119.0.10/24 brd 10.119.0.255 scope global secondary eth1:0

Check to make sure that MariaDB is running

root@db1:~# ps -ef | grep mysqld
root 7472 1 0 05:52 ? 00:00:00 /bin/bash /usr/bin/mysqld_safe
mysql 7617 7472 0 05:52 ? 00:00:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --skip-log-error --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
root 7618 7472 0 05:52 ? 00:00:00 logger -t mysqld -p daemon error

Use the remote access root user to test the MySQL connection directly on the floating (failover) IP and create a test database.

mysql -h 10.119.0.10 -u root -p
MariaDB [(none)]> create database failtest;
MariaDB [(none)]> quit

Restart heartbeat on DB1.

systemctl restart heartbeat

Heartbeat will interpret this restart as a failure of DB1 and should trigger failover to make DB2 the primary server. Ensure that DRBD is now treating DB1 as the secondary server:

root@db1:~# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: D496E56BBEBA8B1339BB34A
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:22764856 nr:388 dw:529576 dr:22248303 al:112 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Having triggered the failover, we will now test that our setup is fully functional with DB2 acting as the primary server.

Tests to perform on DB2

Verify that DB2 is now the primary drbd node

root@db2:~# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: D496E56BBEBA8B1339BB34A
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:412 nr:20880892 dw:20881304 dr:11463 al:7 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Verify that the DRBD disk is mounted on DB2

root@db2:~# mount | grep drbd
/dev/drbd0 on /var/lib/mysql type ext4 (rw,noatime,data=ordered)

Verify the floating IP is now bound to DB2 correctly

root@db2:~# ip addr show | grep 10.119.0.10
inet 10.119.0.10/24 brd 10.119.0.255 scope global secondary eth1:0

Check to make sure that MariaDB is running on DB2

root@db2:~# ps -ef | grep mysqld
root 7568 1 0 06:13 ? 00:00:00 /bin/bash /usr/bin/mysqld_safe
mysql 7713 7568 0 06:13 ? 00:00:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --skip-log-error --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
root 7714 7568 0 06:13 ? 00:00:00 logger -t mysqld -p daemon error

Use the remote access user to connect to the MySQL instance at the floating (failover) IP. If your setup is working properly, the following commands should enable you to view the test database we created earlier while DB1 was the primary server.

mysql -h 10.119.0.10 -u root -p
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| failtest |
| information_schema |
| lost+found |
| mysql |
| performance_schema |
+--------------------+
5 rows in set (0.04 sec)
MariaDB [(none)]> exit

Restart Heartbeat.

systemctl restart heartbeat

As it did before, this restart should trigger failover. To check that it has done so, ensure that DRBD is now secondary on DB2

root@db2:~# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
srcversion: D496E56BBEBA8B1339BB34A
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:508 nr:20881012 dw:20881520 dr:11475 al:7 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Provided your tests yielded the expected results, your high availability MySQL setup should now be fully operational.

Conclusion

Congratulations, the high-availability MySQL setup you have created on your Ubuntu 16.04 servers will make your databases more reliable for mission critical functions. This frees you to create content while your users get nearly continuous data access.

How to encrypt a directory with eCryptfs on Ubuntu 16

eCryptfs is a powerful but simple-to-use tool for encrypting directories. Perhaps you are keeping sensitive information in your home directory and wish to secure those files from an attacker who gains access to your server but not your user credentials. Or maybe your database contains sensitive details that you wish to encrypt at rest. With eCryptfs, it is easy to secure individual directories so that they cannot be accessed without logging into the account that owns the key. In this guide, we’ll encrypt the contents of a directory on an Ubuntu 16.04 server.

Getting Started

You’ll need the following in place before we begin:
• 1 server (Cloud Server or Dedicated Server), running a fresh installation of Ubuntu 16.04.
• Root access

Tutorial

Begin by installing the necessary packages.

apt-get install ecryptfs-utils -y

File encryption is a powerful tool, but its capabilities and limitations need to be understood before it is used for serious tasks. For purposes of illustration, we’ll create a test directory in /home so you can get a sense for how your encrypted filesystem will work.

mkdir /home/globotech

Now we’ll encrypt the contents of the globotech directory we’ve just made.

mount -t ecryptfs /home/globotech/ /home/globotech/

You’ll be prompted to choose a password, and to set an encryption type.

With these set, confirm that the directory is mounted as an encrypted filesystem.

mount

[...]
cpu,cpuacct on /run/lxcfs/controllers/cpu,cpuacct type cgroup (rw,relatime,cpu,cpuacct,nsroot=/)
devices on /run/lxcfs/controllers/devices type cgroup (rw,relatime,devices,nsroot=/)
blkio on /run/lxcfs/controllers/blkio type cgroup (rw,relatime,blkio,nsroot=/)
name=systemd on /run/lxcfs/controllers/name=systemd type cgroup (rw,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd,nsroot=/)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=101628k,mode=700)
/home/globotech on /home/globotech type ecryptfs (rw,relatime,ecryptfs_sig=9cff1b579bb64c22,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs)

Next we’ll add a file with test content to this directory.

touch /home/globotech/file.txt
echo "Good Morning" > /home/globotech/file.txt

Unmount the encrypted globotech directory.

umount /home/globotech

With the directory unmounted, try to read the file you’ve just created.

cat /home/globotech/file.txt

z6<"3DUfw`M\`
W_65"I??_aO?EXgd+?+-   RK[a+?`,[-+=?Mec8 Td8Y  ?IV-[2d!fXMQYeQS+?!-SB
g7?%?¼hH+H?'F\++}H.+I;?2-/I!P[KE)
E
DFL'|Ug{_:4?2T0G-\H:
1q?X    vfq?,Xy*e~ox<lI619q2?~<   Q6):O%8 _&+)sMYW0lS!;0?n%#??5ÿ?D}F?j_sWNv
B
ZakBD
;T?t[IZlAOs]0??Q)N~Pp&hIbG@,?f
[...]

You’ll notice that the file is encrypted and the content is inaccessible. Without the password, an attacker cannot gain access to the file you’ve just made.

If you’d like access to your file again, run the same command you ran previously:

mount -t ecryptfs /home/globotech/ /home/globotech/

Use the same password to access your files. Please keep this password safe. If it is lost, no one will be able to regain access to your files, not even your service provider.

cat /home/globotech/file.txt

Good Morning

Conclusion

Encryption is a powerful way to protect your files in the event of a compromised server or stolen laptop. Everyone should encrypt their sensitive data, so share this article with anyone who may not know how easy encrypting directories can be. If you found this article helpful, feel free to share it with your friends and let us know in the comments below!

How to Achieve High Availability Load Balancing with Keepalived on Ubuntu 16

Keepalived is a daemon that can be used to drive a number of load balancing processes on Linux Virtual Servers (LVS) to maintain high availability. Load balancers enable two or more identical servers to provide service through a single floating IP address or set of IP addresses. When one or more of the servers is not functioning optimally, keepalived can shift more of the load to those servers that are more healthy. Perhaps the simplest configuration for the keepalived load balancer uses the daemon to maintain high availability by implementing the failover service. During failover, the entire load is shifted from a primary server to a secondary server upon failure of the primary. The present tutorial describes the implementation of such a highly-available load balancer setup based on Ubuntu 16.04 and keepalived. In this tutorial, we will configure the load balancers using a “floating IP” and an active/passive redundancy.

Getting Started

In order to follow this guide you will need to have the following in place:
• Two servers (Cloud Server or Dedicated Server), each running a fresh installation of Ubuntu 16.04. We will call these servers LB1 and LB2 below
• Root access to the nodes

Tutorial

For your reference, here are the servers, or load balancers, we’ll be working with, along with their respective public and private IP addresses. Where necessary, remember to replace them with the IP addresses of your own servers.

LB1
Public:173.209.49.66
Private:10.119.0.1

LB2
Public:173.209.49.67
Private:10.119.0.2

The load balancers will make use of a “floating IP”, and we’ll configure active and passive redundancy as well.

Floating
Public:173.209.49.70
Private:10.119.0.10


Your first step when you install any software is to make sure your system is up to date by executing the following commands.

apt-get update
apt-get -y upgrade

The update ensures you will install the most recent (stable) packages available. The upgrade will install the most recent security patches and fixes.

Ubuntu’s firewall will have to be re-configured to allow for the configuration changes made to run keepalived. So, once your system has been updated disable Ubuntu’s firewall.

ufw disable

You are now ready to install keepalived and the necessary dependencies:

apt-get install linux-headers-$(uname -r) keepalived

Startup Keepalived on Boot

With keepalived installed, configure the server so that the daemon activates on boot. You will also need to load the ip_vs kernel module, which provides the IP Virtual Server functionality that keepalived uses for load balancing.

systemctl enable keepalived
modprobe ip_vs
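Note that modprobe only loads the module for the current boot. If you want ip_vs to load automatically after every reboot, one common approach (an addition to the steps above) is to list it in /etc/modules:

echo "ip_vs" >> /etc/modules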

Configure Keepalived

Create the keepalived configuration file on both servers:

echo "" > /etc/keepalived/keepalived.conf
nano /etc/keepalived/keepalived.conf

We will now set up keepalived so that it uses the Virtual Router Redundancy Protocol (VRRP) to determine whether LB1 or LB2 should be the active router, based on the health of the master. To implement this step, save the following configuration in /etc/keepalived/keepalived.conf on LB1:

vrrp_instance VI_LOCAL {
interface eth1
state MASTER
virtual_router_id 51
priority 101
virtual_ipaddress {
10.119.0.10
}
track_interface {
eth0
eth1
}
}
vrrp_instance VI_PUB {
interface eth0
state MASTER
virtual_router_id 52
priority 101
virtual_ipaddress {
173.209.49.70
}
track_interface {
eth0
eth1
}
}
virtual_server 173.209.49.70 443 {
delay_loop 4
lb_algo sh # source hash
lb_kind NAT
protocol TCP
real_server 10.119.0.100 443 {
weight 1
TCP_CHECK {
connect_timeout 15
nb_get_retry 3
delay_before_retry 2
}
}
real_server 10.119.0.101 443 {
weight 1
TCP_CHECK {
connect_timeout 15
nb_get_retry 3
delay_before_retry 2
}
}
}
virtual_server 173.209.49.70 80 {
delay_loop 4
lb_algo wrr # weighted round robin
lb_kind NAT
protocol TCP
real_server 10.119.0.100 80 {
weight 1
TCP_CHECK {
connect_timeout 15
nb_get_retry 3
delay_before_retry 2
}
}
real_server 10.119.0.101 80 {
weight 1
TCP_CHECK {
connect_timeout 15
nb_get_retry 3
delay_before_retry 2
}
}
}

and save the following configuration in /etc/keepalived/keepalived.conf on LB2:

vrrp_instance VI_LOCAL {
interface eth1
state BACKUP
virtual_router_id 51
priority 100
virtual_ipaddress {
10.119.0.10
}
track_interface {
eth0
eth1
}
}
vrrp_instance VI_PUB {
interface eth0
state BACKUP
virtual_router_id 52
priority 100
virtual_ipaddress {
173.209.49.70
}
track_interface {
eth0
eth1
}
}
virtual_server 173.209.49.70 443 {
delay_loop 4
lb_algo sh # source hash
lb_kind NAT
protocol TCP
real_server 10.119.0.100 443 {
weight 1
TCP_CHECK {
connect_timeout 15
nb_get_retry 3
delay_before_retry 2
}
}
real_server 10.119.0.101 443 {
weight 1
TCP_CHECK {
connect_timeout 15
nb_get_retry 3
delay_before_retry 2
}
}
}
virtual_server 173.209.49.70 80 {
delay_loop 4
lb_algo wrr # weighted round robin
lb_kind NAT
protocol TCP
real_server 10.119.0.100 80 {
weight 1
TCP_CHECK {
connect_timeout 15
nb_get_retry 3
delay_before_retry 2
}
}
real_server 10.119.0.101 80 {
weight 1
TCP_CHECK {
connect_timeout 15
nb_get_retry 3
delay_before_retry 2
}
}
}

The “virtual_router_id” needs to be unique for each VRRP instance defined. This ID must also be unique within the VLAN. The same ID should not be used on two clusters using the same physical switch or VLAN. The ID needs to match on both LB1 and LB2 for the same VRRP instance. Valid values are from 0 to 255.

Netfilter can use nf_conntrack to track the connections among your servers, and kernel parameters can be modified on the fly with the sysctl command. Once nf_conntrack is loaded and sysctl is configured as follows

modprobe nf_conntrack
nano /etc/sysctl.conf

keepalived will be able to track the connections between the servers and re-assign the floating IP addresses between LB1 and LB2 as necessary, depending on which should be active and which should be passive at the time.

To complete the configuration of the servers to run keepalived for high availability, enter the following tweaks in /etc/sysctl.conf:

net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
net.nf_conntrack_max = 1000000

Once you apply the tweaks as follows:

sysctl -p

your configuration should be complete and you can start keepalived.

systemctl start keepalived

Verify Keepalived’s Status

Now we need to ensure our keepalived instance is operating as expected. First, we’ll check that both floating IP addresses are assigned to the first keepalived instance. To do so, execute ip addr show and see if the floating IP addresses are present:

root@lb1:/etc# ip addr show

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:e4:2f brd ff:ff:ff:ff:ff:ff
inet 173.209.49.66/29 brd 173.209.49.71 scope global eth0
valid_lft forever preferred_lft forever
inet 173.209.49.70/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:e42f/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:ea:2d brd ff:ff:ff:ff:ff:ff
inet 10.119.0.1/24 brd 10.119.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.119.0.10/32 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:ea2d/64 scope link
valid_lft forever preferred_lft forever

Verify that 173.209.49.70 and 10.119.0.10 are assigned to LB1. The presence of these addresses indicates that LB1 is active and LB2 is passive. Now, if we shut down keepalived on LB1 those IP addresses should appear on the second server.

root@lb1:/etc# systemctl stop keepalived

Switch to LB2 and check the IP addresses:

root@lb2:~# ip addr show

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:ae:b8 brd ff:ff:ff:ff:ff:ff
inet 173.209.49.67/29 brd 173.209.49.71 scope global eth0
valid_lft forever preferred_lft forever
inet 173.209.49.70/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:aeb8/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8e:ed:ba brd ff:ff:ff:ff:ff:ff
inet 10.119.0.2/24 brd 10.119.0.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.119.0.10/32 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe8e:edba/64 scope link
valid_lft forever preferred_lft forever

Verify that the floating IP addresses are now assigned to the second node. If so, LB2 is now active. The outwardly visible portion of the configuration has now been verified.

As a last quick check, confirm that the backends are well specified within keepalived:

root@lb1:/etc# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 173.209.49.70:http wrr
-> 10.119.0.100:http Masq 1 0 0
-> 10.119.0.101:http Masq 1 0 0
TCP 173.209.49.70:https sh
-> 10.119.0.100:https Masq 1 0 0
-> 10.119.0.101:https Masq 1 0 0

Provided all IP addresses show up as expected, keepalived is working properly.
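Once testing is done, remember to start keepalived again on LB1, since it was stopped during the failover test, so the cluster returns to its intended active/passive arrangement:

root@lb1:/etc# systemctl start keepalived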

Conclusion

Keepalived is now installed on your LVS cluster of two servers. Following the basic principles above, you can increase the size of your cluster if you wish to achieve even higher availability. Even with just two servers, your keepalived instance should make major downtime a thing of the past. If you found this article helpful, feel free to share it with your friends and let us know in the comments below!

How to Install Cassandra on Ubuntu 16

Cassandra is an open-source non-relational database management system. It was designed with redundancy in mind, and it can be distributed across multiple machines so that if one part fails, the other parts continue to function with little to no interruption to users. Ubuntu is a great operating system to host it on, as Cassandra won’t bog down the system. Even if the command line scares you at first, it’s not that hard to install. This article will guide you through installing Apache Cassandra on Ubuntu 16.04 Server LTS.

Getting Started

To complete this guide, you will need the following:
• 1 server (Cloud Server or Dedicated Server) running Ubuntu 16.
• Root access

Gaining root permission is simple and prevents permission issues from popping up during installation. Just make sure to log out of the root account when you’re done with the setup.

Tutorial

Install Java
First we need to make sure that Java is installed, so add the repository. Repositories are basically groups or lists of programs that are useful together, and adding a repository is the best way to get a program. You may get a warning to use Java 8 instead of 9; don’t worry, continue on.

add-apt-repository ppa:webupd8team/java

After that’s added, update the list of repositories.

apt-get update

Install Java. Answer yes if you get a confirmation for storage and accept the license terms. Installing might take a few moments depending on your computer’s speed.

apt-get install oracle-java8-set-default

Confirm your version of Java. At the time of writing, the latest stable version is 1.8.

java -version

java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

Install Cassandra

Just like with Java, we need to add the repository. Since these are protected files, you need to provide the email and password associated with your DataStax account. Depending on your exact setup, you may need to escape special characters in your email with a backslash (\). Instead of “Ma!lLover@Globo.Tech,” it would be “Ma\!lLover@Globo.Tech”. You might also have to replace “@” with “%40”.

echo "deb http://debian.datastax.com/community stable main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list

Also add the repository key to get past the extra layer of security.

curl -L http://debian.datastax.com/debian/repo_key | sudo apt-key add -

Update your package list a third time, then install Cassandra the same way as the previous packages.

apt-get update
apt-get install cassandra -y

Start Cassandra and enable it to start on boot so that, after a power outage or a maintenance reboot, you won’t forget to bring it back up.

systemctl start cassandra
systemctl enable cassandra

Cassandra is controlled through its own command-line shell, cqlsh, so launch it to make sure everything is working.

[root@cassandra ~]# cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.
cqlsh>
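As a quick smoke test, you can create a throwaway keyspace and table from the cqlsh prompt; the names below are purely illustrative:

cqlsh> CREATE KEYSPACE demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
cqlsh> USE demo;
cqlsh> CREATE TABLE users (id uuid PRIMARY KEY, name text);
cqlsh> DESCRIBE TABLES;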

You may also want to check information about the node and cluster when troubleshooting or verifying the setup. You can do that using the “nodetool” command:

[root@cassandra ~] nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 127.0.0.1 222.14 KB 256 100.0% 2a0b7fa9-23c6-40d3-83a4-e6c06e2f5736 rack1
[root@cassandra ~]#

Conclusion

You should now have Java and Cassandra installed and ready to use. Make sure to share this guide with others looking for help.


How to install ARK: Survival of the Fittest server on Ubuntu 16

Ark: Survival Evolved is a popular action game that features survival-themed crafting, combat, and most notably, the ability to tame and ride dinosaurs. Ark: Survival of the Fittest is a special game mode with fast-paced rounds in which the last player standing wins.

Ark: Survival of the Fittest must be played using specific servers. This guide will walk you through the steps of setting up this server on Ubuntu 16.04 LTS.

Getting started

In order to follow this guide, you must have these requirements prepared beforehand:
• 1 server (Cloud Server or Dedicated Server) running a fresh installation of Ubuntu 16.
• This server must also have at least 2 vCores and 6GB of RAM, which is suitable for fewer than 10 players. As more players join your server, consider adding more RAM.
• All commands will be run as root unless otherwise specified explicitly. A user will be created later for the Steam components.

Tutorial

Before beginning the installation process, make sure your system is up to date. This is also a good time to install the basic dependencies.

apt-get update && apt-get upgrade -y
apt-get install nano wget tar lib32gcc1 -y

The first step is to create a user for Steam-related content. For security reasons, you won’t want to use root for this.

adduser --disabled-login --disabled-password steam

Now it’s time to prepare your system with specific parameters to allow the game server to run. First, increase the maximum number of files that can be open simultaneously. You can do this by editing your sysctl.conf.

echo "fs.file-max=100000" >> /etc/sysctl.conf
sysctl -p

Append the following changes to your system limits configuration file.

echo "* soft nofile 1000000" >> /etc/security/limits.conf
echo "* hard nofile 1000000" >> /etc/security/limits.conf

Finally, enable the PAM limits module. Now your server is prepared to install Ark: Survival of the Fittest.

echo "session required pam_limits.so" >> /etc/pam.d/common-session

Now proceed to the installation of the Ark server software.

cd /home/steam
su - steam
wget https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz
tar -zxvf steamcmd_linux.tar.gz
rm steamcmd_linux.tar.gz
./steamcmd.sh

This will be done through the Steam command line interface. To install your game server, you must execute the following commands:

login anonymous
force_install_dir ./arkserver
app_update 445400 validate

Wait for the software to be downloaded. Once you see the following, the installation is complete.

Update state (0x61) downloading, progress: 99.95 (3222988684 / 3224465090)
Success! App '445400' fully installed.

Now that the server is installed, simply type quit to get out of the steam command line interface.

The next step is to configure your newly-installed Ark server. You will need to use the root user for these next commands.

Let’s create an init script so that your Ark server will start automatically if the Ubuntu server were to reboot. Create the following file:

nano /lib/systemd/system/arkserver.service

Add the following contents into the file, making sure that the ExecStart line matches your installation path and the session name and passwords you want to use.

[Unit]
Description=ARK Survival Server
[Service]
Type=simple
User=steam
Group=steam
Restart=on-failure
RestartSec=5
StartLimitInterval=60s
StartLimitBurst=3
ExecStart=/home/steam/arkserver/ShooterGame/Binaries/Linux/ShooterGameServer TheIsland?listen?SessionName=?ServerPassword=?ServerAdminPassword= -server -log
ExecStop=killall -TERM ShooterGameServer
[Install]
WantedBy=multi-user.target

Start and enable the Ark Survival Server on system boot.

systemctl --system daemon-reload
systemctl start arkserver.service
systemctl enable arkserver.service

You can check your server is actually running with systemctl. Below you’ll find the exact command and the expected output which confirms that the daemon is operational.

systemctl status arkserver.service
arkserver.service - ARK Survival Server
Loaded: loaded (/lib/systemd/system/arkserver.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2016-08-12 20:47:46 UTC; 4min 13s ago
Main PID: 24825 (ShooterGameServ)
CGroup: /system.slice/arkserver.service
??24825 /home/steam/arkserver/ShooterGame/Binaries/Linux/ShooterGameServer TheIsland?listen?SessionName=MyARKServer?ServerPassword=friendspassword?ServerAdminPassword=superadminpassword -server -log

If the UFW firewall is enabled on your server, its default settings are suitable, but we will be adding a few rules for Ark server traffic. We’ll open up these ports.

ufw allow 27015/udp
ufw allow 7777/udp
ufw allow 32330/tcp

For your reference, here is the traffic that will be flowing through these ports:
• UDP 27015: Query port for Steam’s server browser
• UDP 7777: Game client port
• TCP 32330: RCON for remote console server access (optional)

Of course, you can also use the classical iptables method via these commands:

iptables -A INPUT -p udp -m udp --dport 27015 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 7777 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 32330 -j ACCEPT

Conclusion

With that, you should now have a properly installed ARK Survival Server ready to welcome players. Be sure to upgrade your server if you find yourself regularly approaching or exceeding 10 players a game. If you liked this article, please share it with your friends.

How to add new a node to a Galera Replication Cluster on Ubuntu 16

Galera Cluster is a powerful multi-master synchronous replication library for databases such as MySQL or MariaDB. Using it, you can scale and replicate databases for maximum reliability and minimal risk of data loss.

Unlike MySQL’s built-in replication features, each node in a Galera Cluster can be read from and written to synchronously, in what’s known as a master-master setup. Failed nodes are automatically brought back up to date by the other nodes when they come back online, through what’s called a quorum, in which a majority of nodes determines what data should be replicated.

This guide will lead you through the steps of adding a new server, or node, to a pre-existing Galera Replication Cluster.

Getting started

In order to perform the steps of this guide, you will need the following beforehand:
• A server running a fresh installation of Ubuntu 16.04 LTS
• Root access to the server

Step-by-step guide

For the purposes of this guide, our scenario is that we’re adding a node to an existing Galera setup that has two nodes. This will make a cluster composed of three nodes total.

First, install the software-properties package on your system.

apt-get install software-properties-common -y
apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8

We’ll be using MariaDB for our transactional database. In order to install it, you will need to add the MariaDB 10.1 repository.

add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://ftp.utexas.edu/mariadb/repo/10.1/ubuntu xenial main'

Update your repos list. Now you can install MariaDB-Server from the new repository.

apt-get update
apt-get install mariadb-server -y

You don’t want MariaDB to start yet, so make sure to stop it on node3 only. Keep it running on the other nodes.

systemctl stop mysql.service

We will now create the Galera config file. Here are the example IP addresses we will be using:
• node1: 10.0.0.119
• node2: 10.0.0.120
• node3: 10.0.0.121

Replace with your own IP addresses as necessary.

nano /etc/mysql/conf.d/galera.cnf

[mysqld]
#mysql settings
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
#galera settings
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="test_cluster"
wsrep_cluster_address=gcomm://10.0.0.119,10.0.0.120,10.0.0.121
wsrep_sst_method=rsync

Make sure to add all the IPs into the config file for the new node. You’ll also need to add the IP for the new node in the config files of node1 and node2.

Now, you can start the MariaDB daemon back up on node3. (Not the other two nodes, because it still should be running on those.)

systemctl start mysql.service

Check if node3 is linked to the cluster.

mysql -u root -p -e "show status like 'wsrep_cluster_size'"

If it is, you should see this output:

+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
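The checks below look for a database named galera_test. If it does not already exist from your original two-node setup, you can create it once on node1 (the name is only an example) so that there is something to verify:

mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS galera_test;"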

The final step is to check that the database on the three nodes has been synced. Here are the commands for checking on the different nodes:

On Node1

mysql -u root -p

show databases like 'galera_test';
+------------------------+
| Database (galera_test) |
+------------------------+
| galera_test |
+------------------------+
1 row in set (0.00 sec)

On Node2

mysql -u root -p

show databases like 'galera_test';
+------------------------+
| Database (galera_test) |
+------------------------+
| galera_test |
+------------------------+
1 row in set (0.00 sec)

On Node3

mysql -u root -p

show databases like 'galera_test';
+------------------------+
| Database (galera_test) |
+------------------------+
| galera_test |
+------------------------+
1 row in set (0.00 sec)
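
As an additional sanity check, you can write on the new node and read on an existing one. A small sketch, assuming the galera_test database shown above and a hypothetical table name:

On node3:

mysql -u root -p -e "CREATE TABLE galera_test.sync_check (id INT PRIMARY KEY);"

On node1:

mysql -u root -p -e "SHOW TABLES IN galera_test;"

The sync_check table should appear on node1 (and node2) almost immediately, confirming that writes to the new node replicate across the cluster.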

Conclusion

Now that you have the basic steps down, you can use them to continue adding nodes to your cluster. It’s recommended to stick with an odd number of nodes, since that prevents split-brain situations in which a majority cannot be reached after a node goes down: with three nodes, the two survivors still form a majority, while a four-node cluster can split evenly into two halves of two.

Ark Survival Evolved

How to install ARK: Survival Evolved Server on Ubuntu 16

Ark: Survival Evolved is a popular action game that features survival-themed crafting, combat, and most notably, the ability to tame and ride dinosaurs. This guide will walk you through the steps of setting up this server on Ubuntu 16.04 LTS.

Getting started

In order to install the Ark: Survival Evolved server software, you will need the following:
• 1 Node (Cloud Server or Dedicated Server) running a clean installation of Ubuntu 16.
• All commands must be entered as root, except for a few that are performed by a dedicated user we will create specifically for running Steam.
• 2 vCores and 6GB RAM. These are the minimum requirements for an Ark server that can handle up to 10 users.

Tutorial

Before beginning the installation process, make sure your system is up to date. This is also a good time to install the basic dependencies.

apt-get update && apt-get upgrade -y
apt-get install nano wget tar lib32gcc1 -y

The first step is to create a user for Steam-related content. For security reasons, you won’t want to use root for this.

adduser --disabled-login --disabled-password steam

Now it’s time to prepare your system with specific parameters that allow the game server to run. First, increase the maximum number of files that can be open simultaneously. You can do this by adding a setting to sysctl.conf.

echo "fs.file-max=100000" >> /etc/sysctl.conf
sysctl -p
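
You can verify that the new limit is active; the following should print fs.file-max = 100000:

sysctl fs.file-max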

Append the following changes to your system limits configuration file.

echo "* soft nofile 1000000" >> /etc/security/limits.conf
echo "* hard nofile 1000000" >> /etc/security/limits.conf

Finally, enable the PAM limits module. Now your server is prepared to install Ark: Survival Evolved.

echo "session required pam_limits.so" >> /etc/pam.d/common-session

Now proceed with the installation of the Ark server software. Switch to the steam user, download and unpack SteamCMD, and launch it:

cd /home/steam
su - steam
wget https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz
tar -zxvf steamcmd_linux.tar.gz
rm steamcmd_linux.tar.gz
./steamcmd.sh

The installation is done through the SteamCMD command-line interface you just launched. To install your game server, execute the following commands at the SteamCMD prompt:

login anonymous
force_install_dir ./arkserver
app_update 376030 validate

Wait for the software to be downloaded. Once you see the following, the installation is complete.

Update state (0x61) downloading, progress: 99.95 (3222988684 / 3224465090)
Success! App '376030' fully installed.

Now that the server is installed, simply type quit to leave the SteamCMD interface.
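
If you would rather script this step than type the commands interactively, SteamCMD also accepts the same commands as arguments; a minimal sketch, run as the steam user from /home/steam:

./steamcmd.sh +login anonymous +force_install_dir ./arkserver +app_update 376030 validate +quit

This downloads the same app and exits when it finishes.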

The next step is to configure your newly-installed Ark server. Type exit to drop back to the root user, which is needed for the next commands.

Let’s create a systemd service unit so that your Ark server will start automatically if the Ubuntu server reboots. Create the following file:

nano /lib/systemd/system/arkserver.service

Add the following contents to the file, filling in the SessionName, ServerPassword, and ServerAdminPassword values on the ExecStart line with your own choices.

[Unit]
Description=ARK Survival Server

[Service]
Type=simple
User=steam
Group=steam
Restart=on-failure
RestartSec=5
StartLimitInterval=60s
StartLimitBurst=3
ExecStart=/home/steam/arkserver/ShooterGame/Binaries/Linux/ShooterGameServer TheIsland?listen?SessionName=?ServerPassword=?ServerAdminPassword= -server -log
# killall is provided by the psmisc package
ExecStop=/usr/bin/killall -TERM ShooterGameServer

[Install]
WantedBy=multi-user.target

Reload systemd, then start the ARK Survival Server and enable it at boot.

systemctl --system daemon-reload
systemctl start arkserver.service
systemctl enable arkserver.service

You can check that your server is actually running with systemctl. Below you’ll find the exact command and the expected output confirming that the daemon is operational.

systemctl status arkserver.service
arkserver.service - ARK Survival Server
   Loaded: loaded (/lib/systemd/system/arkserver.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2016-08-12 20:47:46 UTC; 4min 13s ago
 Main PID: 24825 (ShooterGameServ)
   CGroup: /system.slice/arkserver.service
           └─24825 /home/steam/arkserver/ShooterGame/Binaries/Linux/ShooterGameServer TheIsland?listen?SessionName=MyARKServer?ServerPassword=friendspassword?ServerAdminPassword=superadminpassword -server -log

Ubuntu ships with the UFW firewall, although it is not necessarily active on a fresh installation. If you are using UFW, its default policies are suitable, but you will need to add a few rules for the Ark server traffic. We’ll open up these ports.

ufw allow 27015/udp
ufw allow 7777/udp
ufw allow 32330/tcp

For your reference, here is the traffic that will be flowing through these ports:
• UDP 27015: Query port for Steam’s server browser
• UDP 7777: Game client port
• TCP 32330: RCON for remote console server access (optional)

Of course, you can also use the classic iptables method instead. Incoming traffic to the server is matched on the destination port:

iptables -A INPUT -p udp -m udp --dport 27015 -j ACCEPT
iptables -A INPUT -p udp -m udp --dport 7777 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 32330 -j ACCEPT
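
Note that rules added directly with iptables are not persistent across reboots. One common approach, if you want to keep them, is the iptables-persistent package:

apt-get install iptables-persistent -y
netfilter-persistent save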

Conclusion

With that, you should now have a properly installed ARK Survival Server ready to welcome players. Be sure to upgrade your server if you find yourself regularly approaching or exceeding 10 players a game. If you liked this KB article, please share it with your friends.

YetiForce CRM

How to install YetiForce CRM on Ubuntu 16

YetiForce CRM is an open and innovative Customer Relationship Management system that runs on any modern hosting platform. With YetiForce CRM, large and medium-sized businesses can coordinate marketing efforts, sales strategies, projects, and customer support, all via a modern and clean web-based interface. A module system ensures that any custom workflows or modifications can be easily integrated, and a vibrant community is available for support and other professional services. This tutorial will get YetiForce CRM up and running quickly on an Ubuntu 16.04 LTS server.

Getting Started

To complete this guide, you will need the following:
• 1 Remote server (Cloud Server or Dedicated Server) running Ubuntu 16.
• All commands should be run as the root user
• A LAMP stack configured

Tutorial

Begin by updating your package cache to prevent download errors. You’ll also need to install the unzip package, which is not part of the Ubuntu 16.04 base installation.

apt-get update
apt-get install unzip -y

YetiForce requires a few additional PHP modules not shipped as part of the standard LAMP stack. Install those here.

apt-get install php7.0-mysql php7.0-curl php7.0-json php7.0-cgi php7.0 libapache2-mod-php7.0 php7.0-mcrypt php7.0-gd php7.0-imap php7.0-ldap php7.0-xml php7.0-dom php7.0-zip -y

Next we’ll download the YetiForce CRM zip archive, extract it into the Apache document root, and rename the extracted directory.

wget https://github.com/YetiForceCompany/YetiForceCRM/archive/3.1.0.zip
unzip 3.1.0.zip -d /var/www/html/
mv /var/www/html/YetiForceCRM-3.1.0 /var/www/html/yetiforce

Apache needs write access to the installation directory so the web installer can modify it. Here we give the www-data user ownership of the files.

chown -R www-data. /var/www/html/yetiforce

As with any LAMP app, YetiForce CRM requires that you create a database and a user that can modify it. In this step, we create the necessary database and access credentials.

mysql -u root -p

CREATE DATABASE yetiforce;
GRANT ALL PRIVILEGES ON yetiforce.* TO 'yetiforceuser'@'localhost' IDENTIFIED BY 'yetiforcepassword';
FLUSH PRIVILEGES;
EXIT;
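
You can verify that the new credentials work before running the web installer; for example:

mysql -u yetiforceuser -p -e "SHOW DATABASES LIKE 'yetiforce';"

Enter yetiforcepassword when prompted; the yetiforce database should be listed.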

With the required PHP modules installed, Apache must be restarted so they are available to the embedded interpreter. We’ll also restart MySQL.

systemctl restart apache2.service
systemctl restart mysql.service

Visit http://your_ip/yetiforce to continue the process in the online installer.

During installation, you’ll be told to adjust your php.ini. Do so as required.
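
The exact values are listed in the installer’s system check. As an illustration only (these numbers are examples, not YetiForce’s official requirements), the settings involved are typically raised in /etc/php/7.0/apache2/php.ini:

max_execution_time = 600
memory_limit = 512M
post_max_size = 50M
upload_max_filesize = 100M

Restart Apache again after editing php.ini so the changes take effect:

systemctl restart apache2.service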

You’ll also be prompted for your database name and its access credentials. Fill these in as created earlier.

Conclusion

YetiForce CRM is now available for access at http://your_ip/yetiforce. You can now begin entering your business’ information, installing modules, and customizing this installation as necessary to make it your own. If you know anyone who needs a good CRM solution, share this article with them so they can get up and running quickly.